Which is faster in Python: x**.5 or math.sqrt(x)?


Questions:

I’ve been wondering this for some time. As the title says, which is faster: the actual function, or simply raising to the half power?

UPDATE

This is not a matter of premature optimization. This is simply a question of how the underlying code actually works. What is the theory of how Python evaluates each of these?

I sent Guido van Rossum an email because I really wanted to know the differences between these methods.

My email:

There are at least 3 ways to do a square root in Python: math.sqrt, the
‘**’ operator and pow(x,.5). I’m just curious as to the differences in
the implementation of each of these. When it comes to efficiency which
is better?

His response:

pow and ** are equivalent; math.sqrt doesn’t work for complex numbers,
and links to the C sqrt() function. As to which one is
faster, I have no idea…

Answers:

As per comments, I’ve updated the code:

import time
import math

def timeit1():
    s = time.time()
    for i in xrange(750000):
        z=i**.5
    print "Took %f seconds" % (time.time() - s)

def timeit2(arg=math.sqrt):  # the default argument binds math.sqrt to a fast local name
    s = time.time()
    for i in xrange(750000):
        z=arg(i)
    print "Took %f seconds" % (time.time() - s)

timeit1()
timeit2()

Now the math.sqrt function is directly in a local argument, meaning it has the fastest lookup possible.

UPDATE: The Python version seems to matter here. I used to think that timeit1 would be faster, since when Python parses “i**.5” it knows, syntactically, which method to call (__pow__ or some variant), so it doesn’t have to go through the name-lookup overhead that the math.sqrt variant does. But I might be wrong:

Python 2.5: 0.191000 vs. 0.224000

Python 2.6: 0.195000 vs. 0.139000

Also psyco seems to deal with math.sqrt better:

Python 2.5 + Psyco 2.0: 0.109000 vs. 0.043000

Python 2.6 + Psyco 2.0: 0.128000 vs. 0.067000
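
To see where the name-lookup overhead discussed above comes from, the two spellings can be disassembled; the opcode names in the comments are those of CPython 2.x and early 3.x:

import dis
import math

def use_pow(i):
    return i ** .5        # compiles straight to BINARY_POWER

def use_sqrt(i):
    return math.sqrt(i)   # LOAD_GLOBAL math, LOAD_ATTR sqrt, then CALL_FUNCTION

dis.dis(use_pow)
dis.dis(use_sqrt)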


| Interpreter    |  x**.5, |   sqrt, | sqrt faster, % |
|                | seconds | seconds |                |
|----------------+---------+---------+----------------|
| Python 3.2rc1+ |    0.32 |    0.27 |             19 |
| Python 3.1.2   |   0.136 |   0.088 |             55 |
| Python 3.0.1   |   0.155 |   0.102 |             52 |
| Python 2.7     |   0.132 |   0.079 |             67 |
| Python 2.6.6   |   0.121 |   0.075 |             61 |
| PyPy 1.4.1     |   0.083 |  0.0159 |            422 |
| Jython 2.5.1   |   0.132 |    0.22 |            -40 |
| Python 2.5.5   |   0.129 |   0.125 |              3 |
| Python 2.4.6   |   0.131 |   0.123 |              7 |
(The “sqrt faster, %” column is computed as 100 * (x**.5 seconds - sqrt seconds) / sqrt seconds.)

Table results produced on machine:

$ uname -vms
Linux #42-Ubuntu SMP Thu Dec 2 02:41:37 UTC 2010 x86_64
$ cat /proc/cpuinfo | grep 'model name' | head -1
model name      : Intel(R) Core(TM) i7 CPU         920  @ 2.67GHz

To reproduce results:
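
A minimal sketch that produces numbers of the same shape under each interpreter (not necessarily the exact script behind the table; the test value and loop count here are arbitrary):

import sys
import timeit

n = 1000000
pow_time = timeit.Timer("x ** .5", "x = 123").timeit(n)
sqrt_time = timeit.Timer("sqrt(x)", "from math import sqrt; x = 123").timeit(n)

print("%s  x**.5: %.3fs  sqrt: %.3fs  sqrt faster: %.0f%%" % (
    sys.version.split()[0], pow_time, sqrt_time,
    100 * (pow_time - sqrt_time) / sqrt_time))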

Answers:

How many square roots are you really performing? Are you trying to write a 3D graphics engine in Python? If not, then why go with code that is cryptic over code that is easy to read? The time difference would be less than anybody could notice in just about any application I could foresee. I really don’t mean to put down your question, but it seems that you’re going a little too far with premature optimization.

Answers:
  • first rule of optimization: don’t do it
  • second rule: don’t do it, yet

Here are some timings (Python 2.5.2, Windows):

$ python -mtimeit -s"from math import sqrt; x = 123" "x**.5"
1000000 loops, best of 3: 0.445 usec per loop

$ python -mtimeit -s"from math import sqrt; x = 123" "sqrt(x)"
1000000 loops, best of 3: 0.574 usec per loop

$ python -mtimeit -s"import math; x = 123" "math.sqrt(x)"
1000000 loops, best of 3: 0.727 usec per loop

This test shows that x**.5 is slightly faster than sqrt(x).

For Python 3.0 the result is the opposite:

$ \Python30\python -mtimeit -s"from math import sqrt; x = 123" "x**.5"
1000000 loops, best of 3: 0.803 usec per loop

$ \Python30\python -mtimeit -s"from math import sqrt; x = 123" "sqrt(x)"
1000000 loops, best of 3: 0.695 usec per loop

$ \Python30\python -mtimeit -s"import math; x = 123" "math.sqrt(x)"
1000000 loops, best of 3: 0.761 usec per loop

On another machine (Ubuntu; Python 2.6 and 3.1), math.sqrt(x) is consistently faster than x**.5:

$ python -mtimeit -s"from math import sqrt; x = 123" "x**.5"
10000000 loops, best of 3: 0.173 usec per loop
$ python -mtimeit -s"from math import sqrt; x = 123" "sqrt(x)"
10000000 loops, best of 3: 0.115 usec per loop
$ python -mtimeit -s"import math; x = 123" "math.sqrt(x)"
10000000 loops, best of 3: 0.158 usec per loop
$ python3.1 -mtimeit -s"from math import sqrt; x = 123" "x**.5"
10000000 loops, best of 3: 0.194 usec per loop
$ python3.1 -mtimeit -s"from math import sqrt; x = 123" "sqrt(x)"
10000000 loops, best of 3: 0.123 usec per loop
$ python3.1 -mtimeit -s"import math; x = 123" "math.sqrt(x)"
10000000 loops, best of 3: 0.157 usec per loop

Answers:

In these micro-benchmarks, math.sqrt will be slower, because of the slight time it takes to look up sqrt in the math namespace. You can improve it slightly with

 from math import sqrt

Even then, though, running a few variations through timeit shows a slight (4-5%) performance advantage for “x**.5”.

Interestingly, doing

 import math
 sqrt = math.sqrt

sped it up even more, bringing the difference in speed to within 1%, with very little statistical significance.

I will repeat Kibbee and say that this is probably premature optimization.

Answers:

Most likely math.sqrt(x), because it’s optimized for square rooting.

Benchmarks will give you the answer you are looking for.

Answers:

Using Claudiu’s code, on my machine even with “from math import sqrt” x**.5 is faster; but using psyco.full(), sqrt(x) becomes much faster, by at least 200%.
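
For reference, enabling Psyco before running Claudiu’s timing functions looks roughly like this (Psyco works on Python 2.x only):

import psyco
psyco.full()   # let Psyco JIT-compile as much as it can

timeit1()      # the x**.5 loop
timeit2()      # the math.sqrt loop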

Answers:

In Python 2.6 the float.__pow__() function uses the C pow() function, and the math.sqrt() function uses the C sqrt() function.

In glibc the implementation of pow(x,y) is quite complex, and it is well optimized for various exceptional cases. For example, calling C pow(x, 0.5) simply calls the sqrt() function.

The difference in speed between ** and math.sqrt is caused by the wrappers used around the C functions, and the speed strongly depends on the optimization flags/C compiler used on the system.
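
As a rough illustration of the C functions involved, both can be reached directly through ctypes; this is only a sketch, assuming a Unix-like system where the math routines live in a separate libm, and the ctypes call overhead makes it useless for timing comparisons:

import ctypes
import ctypes.util

libm = ctypes.CDLL(ctypes.util.find_library("m"))
libm.pow.restype = ctypes.c_double
libm.pow.argtypes = [ctypes.c_double, ctypes.c_double]
libm.sqrt.restype = ctypes.c_double
libm.sqrt.argtypes = [ctypes.c_double]

print("C pow(123, 0.5) = %r" % libm.pow(123.0, 0.5))
print("C sqrt(123)     = %r" % libm.sqrt(123.0))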

Edit:

Here are the results of Claudiu’s algorithm on my machine. I got different results:

$ python2.4 p.py
Took 0.173994 seconds
Took 0.158991 seconds
$ python2.5 p.py
Took 0.182321 seconds
Took 0.155394 seconds
$ python2.6 p.py
Took 0.166766 seconds
Took 0.097018 seconds

Answers:

For what it’s worth (see Jim’s answer), on my machine, running Python 2.5:

PS C:\> python -m timeit -n 100000 10000**.5
100000 loops, best of 3: 0.0543 usec per loop
PS C:\> python -m timeit -n 100000 -s "import math" math.sqrt(10000)
100000 loops, best of 3: 0.162 usec per loop
PS C:\> python -m timeit -n 100000 -s "from math import sqrt" sqrt(10000)
100000 loops, best of 3: 0.0541 usec per loop

Answers:

Someone commented about the “fast Newton-Raphson square root” from Quake 3… I implemented it with ctypes, but it’s super slow in comparison to the native versions. I’m going to try a few optimizations and alternate implementations.

from ctypes import c_float, c_int32, byref, POINTER, cast

def sqrt(num):
    # Quake 3 "fast inverse square root", multiplied by num to get sqrt(num)
    xhalf = 0.5 * num
    x = c_float(num)
    # reinterpret the 32-bit float's bits as a 32-bit int
    # (c_int32 rather than c_long, so it also works on 64-bit platforms)
    i = cast(byref(x), POINTER(c_int32)).contents.value
    i = c_int32(0x5f375a86 - (i >> 1))          # magic-constant initial guess
    x = cast(byref(i), POINTER(c_float)).contents.value

    # two Newton-Raphson refinement steps for 1/sqrt(num)
    x = x * (1.5 - xhalf * x * x)
    x = x * (1.5 - xhalf * x * x)
    return x * num

Here’s another method using struct; it comes out about 3.6x faster than the ctypes version, but still about 1/10 the speed of C.

from struct import pack, unpack

def sqrt_struct(num):
    xhalf = 0.5 * num
    # reinterpret the float's bits as a 32-bit unsigned int
    # ('I' is a 4-byte unsigned int, matching the 4-byte 'f' float)
    i = unpack('I', pack('f', num))[0]
    i = 0x5f375a86 - (i >> 1)                   # magic-constant initial guess
    x = unpack('f', pack('I', i))[0]

    # two Newton-Raphson refinement steps for 1/sqrt(num)
    x = x * (1.5 - xhalf * x * x)
    x = x * (1.5 - xhalf * x * x)
    return x * num
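
A quick sanity check of the approximation against the standard library (the two Newton-Raphson steps leave roughly five to six significant digits of accuracy):

import math

print(sqrt_struct(2.0))   # roughly 1.41421
print(math.sqrt(2.0))     # 1.414213562...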

Answers:

You might want to benchmark the fast Newton-Raphson square root as well. Shouldn’t take much to convert to Python.

Answers:

Claudiu’s results differ from mine. I’m using Python 2.6 on Ubuntu on an old P4 2.4 GHz machine… Here are my results:

>>> timeit1()
Took 0.564911 seconds
>>> timeit2()
Took 0.403087 seconds
>>> timeit1()
Took 0.604713 seconds
>>> timeit2()
Took 0.387749 seconds
>>> timeit1()
Took 0.587829 seconds
>>> timeit2()
Took 0.379381 seconds

sqrt is consistently faster for me… Even Codepad.org now seems to agree that sqrt, in the local context, is faster (http://codepad.org/6trzcM3j). Codepad seems to be running Python 2.5 at present. Perhaps they were using 2.4 or older when Claudiu first answered?

In fact, even using math.sqrt(i) in place of arg(i), I still get better times for sqrt. In this case timeit2() took between 0.53 and 0.55 seconds on my machine, which is still better than the 0.56-0.60 figures from timeit1.

I’d say, on modern Python, use math.sqrt and definitely bring it into the local context, either with somevar = math.sqrt or with from math import sqrt.

Answers:

What would be even faster is if you went into math.py and copied the function “sqrt” into your program. It takes time for your program to find math.py, then open it, find the function you are looking for, and then bring that back to your program. If that function is faster even with the “lookup” steps, then the function itself has to be awfully fast. It will probably cut your time in half. In summary:

  1. Go to math.py.
  2. Find the function “sqrt”.
  3. Copy it.
  4. Paste the function into your program as your own sqrt finder.
  5. Time it.