Why is x**4.0 faster than x**4 in Python 3?
Solution 1
Why is x**4.0 faster than x**4 in Python 3*?
Python 3 int objects are full-fledged objects designed to support arbitrary size; due to that fact, they are handled as such on the C level (see how all variables are declared as PyLongObject * type in long_pow). This also makes their exponentiation a lot trickier and more tedious, since you need to play around with the ob_digit array it uses to represent its value. (Source for the brave. -- See: Understanding memory allocation for large integers in Python for more on PyLongObjects.)
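If you want to see that arbitrary-size machinery from the Python side, here's a small sketch (the byte counts are CPython- and platform-specific, so treat the exact numbers as illustrative):

# Each extra 30-bit "digit" in the ob_digit array grows the object;
# the sizes printed are specific to 64-bit CPython builds.
import sys

for value in (0, 1, 2**30, 2**60, 2**300):
    print(value.bit_length(), sys.getsizeof(value))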
Python float objects, on the contrary, can be transformed to a C double type (by using PyFloat_AsDouble), and operations can be performed using those native types. This is great because, after checking for relevant edge cases, it allows Python to use the platform's pow (C's pow, that is) to handle the actual exponentiation:
/* Now iv and iw are finite, iw is nonzero, and iv is
 * positive and not equal to 1.0. We finally allow
 * the platform pow to step in and do the rest.
 */
errno = 0;
PyFPE_START_PROTECT("pow", return NULL)
ix = pow(iv, iw);
where iv and iw are our original PyFloatObjects as C doubles.
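You can reproduce the effect from pure Python with the timeit module; a sketch (absolute numbers depend on your machine and CPython version, but on 3.x the float exponent typically wins):

# int exponent goes through long_pow, float exponent through C's pow().
import timeit

t_int = timeit.timeit('for x in range(100): x**4', number=10000)
t_float = timeit.timeit('for x in range(100): x**4.0', number=10000)
print(t_int, t_float)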
For what it's worth: Python 2.7.13 for me is a factor 2~3 faster, and shows the inverse behaviour.
The previous fact also explains the discrepancy between Python 2 and 3, so I thought I'd address this comment too, because it is interesting.
In Python 2, you're using the old int object that differs from the int object in Python 3 (all int objects in 3.x are of PyLongObject type). In Python 2, there's a distinction that depends on the value of the object (or, if you use the suffix L/l):
# Python 2
type(30) # <type 'int'>
type(30L) # <type 'long'>
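That value-dependent distinction comes from automatic promotion: a Python 2 int that outgrows a machine word silently becomes a long. A quick sketch (Python 2 only):

# Python 2 only -- ints are machine words; anything bigger is a long.
import sys

print(type(sys.maxint))        # <type 'int'>  -- fits in a C long
print(type(sys.maxint + 1))    # <type 'long'> -- promoted automatically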
The <type 'int'> you see above does the same thing floats do: it gets safely converted into a C long when exponentiation is performed on it (int_pow also hints to the compiler to put them in registers if it can, which could make a difference):
static PyObject *
int_pow(PyIntObject *v, PyIntObject *w, PyIntObject *z)
{
    register long iv, iw, iz=0, ix, temp, prev;
    /* Snipped for brevity */
This allows for a good speed gain.
To see how sluggish <type 'long'>s are in comparison to <type 'int'>s: if you wrap the x name in a long call in Python 2 (essentially forcing it to use long_pow as in Python 3), the speed gain disappears:
# <type 'int'>
(python2) ➜ python -m timeit "for x in range(1000):" " x**2"
10000 loops, best of 3: 116 usec per loop
# <type 'long'>
(python2) ➜ python -m timeit "for x in range(1000):" " long(x)**2"
100 loops, best of 3: 2.12 msec per loop
Take note that, though one snippet transforms the int to long while the other does not (as pointed out by @pydsigner), this cast is not the contributing force behind the slowdown; the implementation of long_pow is. (Time the statements solely with long(x) to see.)
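A sketch of that check, if you want to run it yourself (Python 2; exact numbers will vary by machine) -- time the cast alone against the cast plus exponentiation:

# Python 2 -- the long() cast is the cheap part; long_pow dominates.
import timeit

print(timeit.timeit('for x in range(1000): long(x)', number=100))
print(timeit.timeit('for x in range(1000): long(x)**2', number=100))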
[...] it doesn't happen outside of the loop. [...] Any idea about that?
This is CPython's peephole optimizer folding the constants for you. You get the same exact timings in either case since there's no actual computation to find the result of the exponentiation, only loading of values:
dis.dis(compile('4 ** 4', '', 'exec'))
  1           0 LOAD_CONST               2 (256)
              3 POP_TOP
              4 LOAD_CONST               1 (None)
              7 RETURN_VALUE
Identical byte-code is generated for '4 ** 4.', with the only difference being that the LOAD_CONST loads the float 256.0 instead of the int 256:
dis.dis(compile('4 ** 4.', '', 'exec'))
  1           0 LOAD_CONST               3 (256.0)
              3 POP_TOP
              4 LOAD_CONST               2 (None)
              7 RETURN_VALUE
So the times are identical.
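The folded results are also visible directly in the code objects' constants; a small sketch (the exact layout of co_consts differs between CPython versions):

# Both 256 and 256.0 were computed at compile time and stored as
# constants, so no exponentiation happens when the code runs.
print(compile('4 ** 4', '', 'exec').co_consts)
print(compile('4 ** 4.', '', 'exec').co_consts)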
*All of the above applies solely to CPython, the reference implementation of Python. Other implementations might perform differently.
Solution 2
If we look at the bytecode, we can see that the expressions are purely identical. The only difference is the type of the constant that will be an argument of BINARY_POWER. So it is most likely due to an int being converted to a floating-point number down the line.
>>> def func(n):
...     return n**4
...
>>> def func1(n):
...     return n**4.0
...
>>> from dis import dis
>>> dis(func)
  2           0 LOAD_FAST                0 (n)
              3 LOAD_CONST               1 (4)
              6 BINARY_POWER
              7 RETURN_VALUE
>>> dis(func1)
  2           0 LOAD_FAST                0 (n)
              3 LOAD_CONST               1 (4.0)
              6 BINARY_POWER
              7 RETURN_VALUE
Update: let's take a look at Objects/abstract.c in the CPython source code:
PyObject *
PyNumber_Power(PyObject *v, PyObject *w, PyObject *z)
{
    return ternary_op(v, w, z, NB_SLOT(nb_power), "** or pow()");
}
PyNumber_Power calls ternary_op, which is too long to paste here, so here's the link. It calls the nb_power slot of x, passing y as an argument.
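From the Python side, the nb_power slot is what you reach through __pow__/__rpow__; a minimal sketch of that dispatch in CPython:

# int declines the mixed int ** float case, so the float side takes
# over and the whole operation proceeds on C doubles.
print((4).__pow__(4))       # 256 -- int handles it (long_pow)
print((4).__pow__(4.0))     # NotImplemented -- int declines
print((4.0).__rpow__(4))    # 256.0 -- float_pow takes over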
Finally, in float_pow() at line 686 of Objects/floatobject.c we see that the arguments are converted to C doubles right before the actual operation:
static PyObject *
float_pow(PyObject *v, PyObject *w, PyObject *z)
{
    double iv, iw, ix;
    int negate_result = 0;

    if ((PyObject *)z != Py_None) {
        PyErr_SetString(PyExc_TypeError, "pow() 3rd argument not "
                        "allowed unless all arguments are integers");
        return NULL;
    }

    CONVERT_TO_DOUBLE(v, iv);
    CONVERT_TO_DOUBLE(w, iw);
    ...
Solution 3
Because one is exact, and the other is an approximation.
>>> 334453647687345435634784453567231654765 ** 4.0
1.2512490121794596e+154
>>> 334453647687345435634784453567231654765 ** 4
12512490121794596659579708413010886345205398132537092036614471999139227048291986003699048899413931481398666569900007167841534843695972182197917378267300625
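A sketch making the precision loss concrete -- the float result carries only about 15-17 significant decimal digits, so it cannot represent the exact 155-digit answer:

n = 334453647687345435634784453567231654765
exact = n ** 4        # arbitrary-precision integer result
approx = n ** 4.0     # C double: ~15-17 significant digits
print(int(approx) == exact)   # False -- precision was lost
print(len(str(exact)))        # 155 digits in the exact result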
Updated on September 11, 2020

Comments
- arieljannai over 3 years: Why is x**4.0 faster than x**4? I am using CPython 3.5.2.

  $ python -m timeit "for x in range(100):" " x**4.0"
  10000 loops, best of 3: 24.2 usec per loop
  $ python -m timeit "for x in range(100):" " x**4"
  10000 loops, best of 3: 30.6 usec per loop

  I tried changing the power I raised by to see how it acts; for example, if I raise x to the power of 10 or 16 it jumps from 30 to 35 usec, but if I raise by 10.0 as a float, it just hovers around 24.1~4.
  I guess it has something to do with float conversion and powers of 2 maybe, but I don't really know.
  I noticed that in both cases powers of 2 are faster, I guess since those calculations are more native/easy for the interpreter/computer. But still, with floats it's almost not moving: 2.0 => 24.1~4 & 128.0 => 24.1~4, but 2 => 29 & 128 => 62.
  TigerhawkT3 pointed out that it doesn't happen outside of the loop. I checked, and the situation only occurs (from what I've seen) when the base is being raised. Any idea about that?
- Admin over 7 years: For what it's worth: Python 2.7.13 for me is a factor 2~3 faster, and shows the inverse behaviour: an integer exponent is faster than a floating-point exponent.
- dabadaba over 7 years: @Evert yup, I got 14 usec for x**4.0 and 3.9 for x**4.
- Jean-François Fabre over 7 years: why the downvote? Conversion/variable type checking seems to be the issue here. There's no speed difference with literals between 12.0**40.0 and 12**40, for instance.
- Dimitris Fasarakis Hilliard over 7 years: @Jean-FrançoisFabre I believe that's due to constant folding.
- miradulo over 7 years: I think the implication that there is a conversion and they aren't handled differently down the line "most certainly" is a bit of a stretch without a source.
- arieljannai over 7 years: I also thought that way but couldn't really find any source for it.
- arieljannai over 7 years: The documentation on BINARY_POWER is also not very indicative on that subject; it just takes from the stack and raises.
- TigerhawkT3 over 7 years: @Mitch - Particularly since, in this particular code, there's no difference in the execution time for those two operations. The difference only arises with the OP's loop. This answer is jumping to conclusions.
- TigerhawkT3 over 7 years: Whatever it is, it's related to the loop over a range, as timing only the ** operation itself yields no difference between integers and floats.
- user2357112 over 7 years: Why are you only looking at float_pow when that doesn't even run for the slow case?
- TigerhawkT3 over 7 years: The difference only appears when looking up a variable (4**4 is just as fast as 4**4.0), and this answer doesn't touch on that at all.
- Dimitris Fasarakis Hilliard over 7 years: But constants will get folded, @TigerhawkT3 (dis(compile('4 ** 4', '', 'exec'))), so the time should be exactly the same.
- user2357112 over 7 years: @TigerhawkT3: 4**4 and 4**4.0 get constant-folded. That's an entirely separate effect.
- Graipher over 7 years: Your last timings seem not to show what you say. long(x)**2. is still faster than long(x)**2 by a factor of 4-5. (Not one of the downvoters, though.)
- arieljannai over 7 years: @TigerhawkT3 @JimFasarakis-Hilliard The question began as a simple one, without my noticing that the effect happens only in a loop (and when the index is involved in the pow operation - i**2 or 2**i). But now, although there's a difference in the underlying C code behaviour, it doesn't seem to have such an effect on pow operations that don't involve a running index. So how should we continue from here? The question changed; the answers are good, but for the starting question. I'd like to hear your advice on how to continue with the question, regarding the SO community standards.
- Dimitris Fasarakis Hilliard over 7 years: @Graipher but that was my point. Objects of <type 'long'> in Python 2 display the same speed discrepancy as they do in Python 3 because they are implemented the same way. On the other hand, objects of <type 'int'> in Python 2 (which don't exist in Python 3) are way speedier, as people who've timed it have claimed.
- Dimitris Fasarakis Hilliard over 7 years: @arieljannai outside a loop the values get folded by the interpreter, so the operations are exactly the same: there's no additional computation required and no speed difference displayed. What do you mean by running on an index? If you're talking about operations other than pow then, of course, those would display other behaviour since they're implemented differently. Could you elaborate on what still troubles you with this answer?
- arieljannai over 7 years: @JimFasarakis-Hilliard Oh! The part I was missing is that the interpreter handles it. What I meant by the index is that inside a loop, if I just run 2**17 or 2**17.0, there won't be such a difference, but if I'm using the loop variable (for i in ...), i**17 will be much slower than i**17.0. But I believe it's still the same reason, since when it's a constant calculation the interpreter acts on it as if it were outside the loop and called a lot of times.
- arieljannai over 7 years: So when they're constant calculations they get folded by the interpreter (whether in a loop or not), but when a running index is involved, it's calculated in the loop and translated to what you've shown. Thanks!
- mbomb007 over 7 years: So why did Python 3 make such a change if it has negative speed implications and can no longer use native types for integer operations?
- Dimitris Fasarakis Hilliard over 7 years: @mbomb007 the elimination of the <type 'long'> type in Python 3 is probably explained by the efforts made to simplify the language. If you can have one type to represent integers, it is more manageable than having two (and worrying about converting from one to the other when necessary, users getting confused, etc.). The speed gain is secondary to that. The Rationale section of PEP 237 also offers some more insight.
- pydsigner over 7 years: I'd like to throw out there that testing the Python 2 speed of long(x) ** n versus x ** n is a bit of a red herring, as you're explicitly casting the int x to a long. I'd be curious to see the speed comparison of 4L ** n to 4 ** n.
- Dimitris Fasarakis Hilliard over 7 years: @pydsigner Good catch! You are indeed correct; I hadn't thought of that. I don't want to deviate from the tests the OP used in his question, so I'll just go on to state how fast long(int_object) is instead.
- Dimitris Fasarakis Hilliard over 7 years: Would you change the line "it's most certainly due to an int being converted to a floating point number down the line"? Though a valid initial guess, this isn't the root cause. (Generally, reformat your answer to include the update more gracefully.)
- Dimitris Fasarakis Hilliard over 6 years: I don't know why that downvoter downvoted, but I did because this answer doesn't answer the question. Just because something is correct does not in any way imply it is faster or slower. One is slower than the other because one can work with C types while the other has to work with Python objects.
- Veky over 6 years: Thanks for the explanation. Well, I really thought it was obvious that it's faster to calculate just the approximation of a number to 12 or so digits than to calculate all of them exactly. After all, the only reason we use approximations is that they are faster to calculate, right?