Why is squaring a number faster than multiplying two random numbers?


Solution 1

  1. There exist algorithms asymptotically faster than the O(N^2) schoolbook method for multiplying two N-bit numbers (see Karatsuba, Pollard, Schönhage–Strassen, etc.).

  2. The two problems "multiply two arbitrary N-bit numbers" and "square an arbitrary N-bit number" have the same complexity.

We have

4*x*y = (x+y)^2 - (x-y)^2

So if squaring an N-bit integer takes O(f(N)) time, then the product of two arbitrary N-bit integers can be obtained in O(f(N)) time too: two N-bit additions/subtractions, two squarings, one 2N-bit subtraction, and one 2N-bit shift.
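For concreteness, here is a minimal sketch of this reduction in Python, with a placeholder square() standing in for whatever fast squaring routine is assumed to exist (the names are illustrative only):

    def square(n: int) -> int:
        # Placeholder: stands in for any O(f(N)) squaring routine.
        return n * n

    def multiply_via_squaring(x: int, y: int) -> int:
        # 4*x*y = (x+y)^2 - (x-y)^2, so one product costs two squarings
        # plus two N-bit additions/subtractions, one 2N-bit subtraction,
        # and one shift (the exact division by 4).
        return (square(x + y) - square(x - y)) >> 2

    assert multiply_via_squaring(1234, 5678) == 1234 * 5678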

And obviously we have

x^2 = x * x

So if multiplying two N-bit integers takes O(f(N)) time, then squaring an N-bit integer can be done in O(f(N)) time.

Any algorithm computing the product (resp. the square) provides an algorithm to compute the square (resp. the product) with the same asymptotic cost.

As noted in other answers, the algorithms used for fast multiplication can be simplified in the case of squaring. The gain will be in the constant factor in front of f(N), not in f(N) itself.

Solution 2

Squaring an n-digit number may be faster than multiplying two random n-digit numbers. Googling, I found this article. It is about arbitrary-precision arithmetic, but it may be relevant to what you're asking. In it the authors say this:

In squaring a large integer, i.e. X^2 = (x_(n-1), x_(n-2), ..., x_1, x_0)^2, many cross-product terms of the form x_i * x_j and x_j * x_i are equivalent. They need to be computed only once and then left-shifted in order to be doubled. An n-digit squaring operation is performed using only (n^2 + n)/2 single-precision multiplications.
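The quoted claim is easy to check with a rough sketch of schoolbook squaring (base-10 digits for readability; real libraries use machine-word limbs, and the function below is illustrative, not taken from the article):

    def square_digits(digits):
        """digits[0] is least significant; returns (square's digits, multiplication count)."""
        n = len(digits)
        result = [0] * (2 * n)
        mults = 0
        for i in range(n):
            # Diagonal term x_i * x_i, computed once.
            result[2 * i] += digits[i] * digits[i]
            mults += 1
            for j in range(i + 1, n):
                # Cross term x_i * x_j appears twice, so compute it once and double.
                result[i + j] += 2 * digits[i] * digits[j]
                mults += 1
        # Propagate carries (base 10 here).
        carry = 0
        for k in range(2 * n):
            total = result[k] + carry
            result[k] = total % 10
            carry = total // 10
        return result, mults

    # 123^2 = 15129, using (3^2 + 3) / 2 = 6 single-precision multiplications.
    assert square_digits([3, 2, 1]) == ([9, 2, 1, 5, 1, 0], 6)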

Solution 3

Like others have pointed out, squaring can only be about 1.5X or 2X faster than regular multiplication between arbitrary numbers. Where does the computational advantage come from? It's symmetry. Let's calculate the square of 1011 and try to spot a pattern that we can exploit. u0:u3 represent the bits in the number from the most significant to the least significant.

    1011 //                               u3 * u0 : u3 * u1 : u3 * u2 : u3 * u3
   1011  //                     u2 * u0 : u2 * u1 : u2 * u2 : u2 * u3       
  0000   //           u1 * u0 : u1 * u1 : u1 * u2 : u1 * u3                 
 1011    // u0 * u0 : u0 * u1 : u0 * u2 : u0 * u3                           

If you consider the elements ui * ui for i = 0, 1, 2, 3 to form the diagonal and ignore them, you'll see that the elements ui * uj for i ≠ j each appear twice.

Therefore, all you need to do is calculate the product sum for the elements below the diagonal, double it with a single left shift, and finally add the diagonal elements. Now you can see where the 2x speed-up comes from. In practice, the speed-up is closer to 1.5x because of the diagonal terms and the extra bookkeeping.
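A small Python sketch of that observation, using the bits of 1011 (decimal 11); the bit indexing below is least-significant first, i.e. the mirror image of the u0..u3 labelling above:

    def square_bits(x: int) -> int:
        bits = [(x >> k) & 1 for k in range(x.bit_length())]
        n = len(bits)
        # Diagonal terms b_i * b_i, each landing at position 2*i.
        diag = sum(bits[i] * bits[i] << (2 * i) for i in range(n))
        # Below-diagonal terms b_i * b_j (i < j), computed once each...
        cross = sum(bits[i] * bits[j] << (i + j)
                    for i in range(n) for j in range(i + 1, n))
        # ...then doubled with a single left shift.
        return diag + (cross << 1)

    assert square_bits(0b1011) == 11 * 11  # 121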

Solution 4

I believe you may be referring to exponentiation by squaring. This technique isn't used for multiplying, but for raising to a power x^n, where n may be large. Rather than multiplying x by itself n times, one performs a series of squaring and multiplying operations that can be mapped onto the binary representation of n. The number of multiplication operations (which are more expensive than additions for large numbers) is reduced from n to about log(n) compared with the naive exponentiation algorithm.
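For reference, a minimal sketch of exponentiation by squaring in Python (it answers a different question, computing x^n, but it shows where the repeated squaring comes in):

    def power(x: int, n: int) -> int:
        result = 1
        while n > 0:
            if n & 1:          # current binary digit of n is 1
                result *= x    # multiply step
            x *= x             # squaring step
            n >>= 1
        return result

    assert power(3, 13) == 3 ** 13  # 13 = 0b1101, so only ~log2(13) squarings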

Solution 5

Do you mean multiplying a number by a power of 2? This is usually quicker than multiplying two arbitrary numbers, since the result can be calculated by a simple bit shift. However, bear in mind that modern microprocessors dedicate a lot of silicon to these types of calculations, and most arithmetic is performed at blinding speed compared to older microprocessors.
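A tiny illustration of that special case (Python here, but the same holds for the shift instructions on any CPU):

    x = 0b1011                # 11
    assert x << 3 == x * 8    # shifting left by 3 multiplies by 2^3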

Author by Jess

Updated on January 14, 2020

Comments

  • Jess over 4 years

    Multiplying two binary numbers takes O(n^2) time (with n being the number of bits), yet squaring a number can somehow be done more efficiently. How could that be?

    Or is it not possible? This is insanity!