What is the fastest algorithm to calculate all factors of an integer number?

So far no one has come up with a substantially faster algorithm. That doesn't necessarily mean there is none; on the other hand, it also hasn't been proven that it is impossible to do much better.

One optimization you should take into account: it's not necessary to search up to n/2; you can already stop once sqrt(n) is reached, because divisors come in pairs (whenever i divides n, so does n/i).

... and be sure to choose a different storage location for your numbers if you really want to return a list of all candidates found, as already mentioned in "chris"'s comment (returning a pointer to a local array is undefined behavior).
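Putting both remarks together, a minimal sketch might look like this (the out-parameter and the function name are just illustrative choices, not a fixed interface; link with -lm for sqrt):

    #include <math.h>
    #include <stdlib.h>

    /* Collect all factors of n by trial division up to sqrt(n), storing
       them in heap-allocated memory instead of a local array. Writes the
       number of factors found to *count; the caller must free() the
       result. The factors are not produced in sorted order. */
    int *factors(int n, int *count)
    {
        /* at most two divisors are stored per candidate i <= sqrt(n) */
        int *a = malloc((2 * ((int)sqrt((double)n) + 1)) * sizeof *a);
        int tag = 0;

        for (int i = 1; i * i <= n; i++) {
            if (n % i == 0) {
                a[tag++] = i;             /* the small divisor ...       */
                if (i != n / i)
                    a[tag++] = n / i;     /* ... and its large companion */
            }
        }
        *count = tag;
        return a;
    }

For n near INT_MAX that's at most about 46341 iterations instead of more than a billion, which is why the sqrt(n) bound matters far more in practice than any constant-factor trick.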

EDIT:

Since it was pointed out to me that there is a rather large variety of algorithms available which, in terms of time complexity, run somewhat faster than the one you asked about, it seems appropriate to add a few more words beyond the short comment given above.

Besides the most obvious way to save some computation time (running the loop in steps of 2 after first dividing n down to an odd number) there are still some other tricks available, yet I didn't mention any of them as substantially faster in the answer given above; a sketch of that step-of-2 trick follows below.
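For illustration only, here is what that trick might look like when printing a prime factorization (a sketch of the general idea, not the code from the question):

    #include <stdio.h>

    /* Print the prime factorization of n: first divide n down to an odd
       number while recording the factors of 2, then test only odd
       candidates up to the square root of whatever remains. */
    void print_prime_factors(int n)
    {
        while (n % 2 == 0) {                   /* divide down to an odd number */
            printf("2 ");
            n /= 2;
        }
        for (int i = 3; i * i <= n; i += 2) {  /* odd candidates only */
            while (n % i == 0) {
                printf("%d ", i);
                n /= i;
            }
        }
        if (n > 1)                             /* the leftover is prime */
            printf("%d", n);
        printf("\n");
    }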

The main reason that led to this decision: while, for example, cutting the number of iterations down by a factor of 2 looks like a big win, compared to the expected growth in runtime as the numbers grow, a gain by any constant factor becomes so insignificant that complexity theory doesn't distinguish between the two at all; both algorithms are said to have (almost) the same time complexity.

Even if it were possible to get a constant-factor gain of hundreds of billions over the runtime of the original algorithm, complexity theory still wouldn't draw any distinction between the two.

The bigger the numbers get, the less any constant, be it as big as you can imagine, matters in terms of runtime, because the runtime itself keeps growing rapidly with the magnitude of the number you are addressing.

One very special barrier in terms of time complexity, often regarded as the border between the practically feasible and the practically impossible, is that of so-called polynomial runtime.

This means nothing else than that, even though the runtime may grow drastically with growing n, it is still possible to describe that growth with a constant exponent k, such that the runtime is something around n^k.

An algorithm without polynomial runtime, on the other hand, can't be bounded that way by any exponent, however big you may want to make it.

To give an example of why this difference really matters, let's take a look at two imaginary algorithms: the first one having polynomial runtime, say n^10, and a second one with runtime n!.

For small numbers it doesn't seem too bad: let's say n is just 10. Here algorithm one takes 10^10 = 10000000000 time units, while with only 3628800 units our second algorithm even seems to run a lot faster.

The problem with our algorithm two is that, compared to algorithm one, its runtime grows dramatically faster. At n=100 you get something like 100000000000000000000 for algorithm one, while it is already something like 93326215443944152681699238856266700490715968264381621468592963895217599993229915608941463976156518286253697920827223758251185210916864000000000000000000000000 for algorithm two.

Pushing frontiers even further with n=1000 we end up with: algorithm one at 1000000000000000000000000000000 while our second algorithm will take something like 402387260077093773543702433923003985719374864210714632543799910429938512398629020592044208486969404800479988610197196058631666872994808558901323829669944590997424504087073759918823627727188732519779505950995276120874975462497043601418278094646496291056393887437886487337119181045825783647849977012476632889835955735432513185323958463075557409114262417474349347553428646576611667797396668820291207379143853719588249808126867838374559731746136085379534524221586593201928090878297308431392844403281231558611036976801357304216168747609675871348312025478589320767169132448426236131412508780208000261683151027341827977704784635868170164365024153691398281264810213092761244896359928705114964975419909342221566832572080821333186116811553615836546984046708975602900950537616475847728421889679646244945160765353408198901385442487984959953319101723355556602139450399736280750137837615307127761926849034352625200015888535147331611702103968175921510907788019393178114194545257223865541461062892187960223838971476088506276862967146674697562911234082439208160153780889893964518263243671616762179168909779911903754031274622289988005195444414282012187361745992642956581746628302955570299024324153181617210465832036786906117260158783520751516284225540265170483304226143974286933061690897968482590125458327168226458066526769958652682272807075781391858178889652208164348344825993266043367660176999612831860788386150279465955131156552036093988180612138558600301435694527224206344631797460594682573103790084024432438465657245014402821885252470935190620929023136493273497565513958720559654228749774011413346962715422845862377387538230483865688976461927383814900140767310446640259899490222221765904339901886018566526485061799702356193897017860040811889729918311021171229845901641921068884387121855646124960798722908519296819372388642614839657382291123125024186649353143970137428531926649875337218940694281434118520158014123344828015051399694290153483077644569099073152433278288269864602789864321139083506217095002597389863554277196742822248757586765752344220207573630569498825087968928162753848863396909959826280956121450994871701244516461260379029309120889086942028510640182154399457156805941872748998094254742173582401063677404595741785160829230135358081840096996372524230560855903700624271243416909004153690105933983835777939410970027753472000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000.

If you don't believe it, just calculate it yourself. The bc manual even contains an example of how to implement the factorial function.
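If you'd rather stay in C, a quick-and-dirty big-number routine is already enough to verify the value of 100! quoted above (a sketch; the digit buffer is simply sized generously for this one case):

    #include <stdio.h>

    /* Compute and print 100! using a base-10 digit array, least
       significant digit first; the 158 digits fit easily in 200. */
    int main(void)
    {
        int digits[200] = { 1 };  /* start with the value 1 */
        int len = 1;

        for (int n = 2; n <= 100; n++) {
            int carry = 0;
            for (int i = 0; i < len; i++) {
                int v = digits[i] * n + carry;
                digits[i] = v % 10;
                carry = v / 10;
            }
            while (carry) {       /* append whatever carry is left */
                digits[len++] = carry % 10;
                carry /= 10;
            }
        }

        for (int i = len - 1; i >= 0; i--)
            putchar('0' + digits[i]);
        putchar('\n');
        return 0;
    }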

But don't get dizzy while counting digits... It might be interesting to know how many zeroes we would have to append to 10 to get the factor by which to multiply the age of our universe to reach such a big span of time, even if we measured in units of Planck time. Unfortunately I don't know.

What's interesting about all that is the fact that, to this day, there isn't any known algorithm that can perform factorization in polynomial time.

Not only is it an interesting field of research in itself; the practical impossibility of factoring large integers also plays an important role in the RSA public key encryption algorithm, which is in widespread use today. So, almost naturally, there has already been a lot of research in this area, discovering algorithms that (without breaking the barrier already mentioned) run considerably faster than the algorithm you proposed.

As "Jim Balter" alredy correctly annotated in his comment you might want to take a look at the referenced article (see: http://en.wikipedia.org/wiki/Integer_factorization#General-purpose) to take a look upon the methods others already came up with.

This question, also mentioned by "Jim", might be another interesting resource: (see: What is the fastest factorization algorithm?)

Another interesting link to take a look at might be the list of RSA Factoring Challenge winners of the past years, to get an idea of where the border between feasible and almost impossible lies today. (http://en.wikipedia.org/wiki/RSA_Factoring_Challenge)

Comments

  • Biswajit
    Biswajit about 4 years

    I have written this block of code but it is consuming lots of time to calculate... Can you help me find a more efficient way to do it?

    int tag;
    int* factors(int n)
    {
        int a[1000000];
        for(int i=1;i<=n/2;i++)
            if(n%i==0)
                a[++tag]=i;
        a[++tag]=n;
        return(a);
    }
    

    This brute force method is very hefty in terms of complexity... Is there any better feasible solution to this problem?

  • Biswajit
    Biswajit about 11 years
    Okay... that's not a big problem... I just forgot to make the storage location global...
  • Jonathan Leffler
    Jonathan Leffler about 11 years
    You can improve on the algorithm shown by only trying to divide by prime numbers — there are far fewer primes than there are composite numbers.
  • Jim Balter
    Jim Balter about 11 years
    "Until now no one has come up with a much faster algorithm." -- Utter nonsense. a) This code divides by every integer, including even ones. b) It divides by values up to n/2 rather than sqrt(n). c) It doesn't divide out a factor once it finds one.
  • Jim Balter
    Jim Balter about 11 years
    And beyond that there is en.wikipedia.org/wiki/Integer_factorization#General-purpose ... So there are far faster algorithms than the OP's code.
  • mikyra
    mikyra about 11 years
    Okay, maybe I should have stressed the "much" a little more. At least regarding large numbers, the usual notion in terms of time complexity is that the border line between anyhow feasible and practically almost impossible is P; I was merely pointing to the fact that there isn't any algorithm running in polynomial time. I'll try to clarify that.
  • Jim Balter
    Jim Balter about 11 years
    Look, I know about all that but the OP doesn't. The fact is that your answer is very poor ... the OP can get much faster results for numbers in a reasonable range.
  • Jim Balter
    Jim Balter about 11 years
    BTW, if you look to the right under "Related", you will see a slew of far better and more sensible answers.
  • mikyra
    mikyra about 11 years
    I strongly doubt your assumption really holds, as the range of numbers addressed by OP is small enough to process the values just the way he intended. Even allowing a broader range than the one he addresses by passing UINT_MAX - taking a closer look you'll see he just passes an int! The runtime including process startup time got reported as this: real 0m0.013s user 0m0.008s sys 0m0.004s, which seems reasonably fast to me.
  • Jim Balter
    Jim Balter about 11 years
    "that in complexity theory there even won't be made any difference at all and both algorithms will be said to have (almost) the same time complexity." -- You don't really understand complexity theory or its significance. One can take a program with an O(n) algorithm and make it take a trillion times as long for all inputs and it's still O(n) -- that's a far cry from "there even won't be made any difference at all". There's only no difference at the limit of n -> infinity. And in any case, the OP's algoritm is O(n) but there are considerably faster algorithms with lower O.
  • Jim Balter
    Jim Balter about 11 years
    Egads, you actually said this explicitly: "Even if it was possible to get a constant gain of hundreds of a billion times the runtime of the original algorithm there still wouldn't be made any difference between both of them at all" -- that's downright incoherent.
  • mikyra
    mikyra about 11 years
    shrug don't see the incoherency here - it's just the same thing expressed in other words. I'm almost sure you didn't get it right: ...that's a far cry from "there even won't be made any difference at all". There's only no difference at the limit of n -> infinity... essentially what I was trying to explain is that it's only the limit of n -> infinity that counts in complexity theory at all. (see: en.wikipedia.org/wiki/Computational_complexity_theory)
  • mikyra
    mikyra about 11 years
    BTW: while at it, OP's algorithm can never never never ever be O(n). That would be a mere dream. In fact until now no one knows any O(n^k) method with any fixed k. The n to be taken into account here is the bit length of the input, not its magnitude. That's exactly why it's so hard to solve - the number of candidates to test doubles with every additional bit. Hence OP's algorithm is O(2^n).
  • Jim Balter
    Jim Balter about 11 years
    "don't see the incoherency here" -- right, there's no difference at all between two programs, one of which takes hundreds of billions of times longer to run than another. Because -- theory! Sorry, but it's theory misunderstood and misapplied. "The n to be taken into account here is the bitlength of the input not its magnitude. " -- The n is whatever we want it to be. Over and out.
  • Jim Balter
    Jim Balter about 11 years
    "it's only the limit of n -> infinity that counts in complexity_theory at all. " -- Yes, but complexity theory isn't the only thing that counts; that's what you seem unable to understand. Taking hundreds of billions of times longer to execute does count. All the OP cares about is how long it takes the code to run for the n that the OP is using ... the question was not about complexity theory. Now I'm really done.
  • mikyra
    mikyra about 11 years
    Why not? We're just littering the whole page if we continue like that. I know what you are talking about - but see, it's the number of bits that gets counted as input, not the value encoded by the bit pattern. What you are talking about is called pseudo-polynomial complexity, and there you are definitely right: OP's algorithm is for sure pseudo-polynomial with complexity O(n). What's usually taken into account is the number of bits. That's exactly why RSA keys are labeled according to the number of bits. Adding one bit doubles the number of candidates. That's what leads to exponential explosion.
  • mikyra
    mikyra about 11 years
    shrug don't know - OP stated: "This brute force method is very hefty in terms of complexity... Is there any better feasible solution to this problem?" Thought he might be interested in complexity (theory) then.
  • Jim Balter
    Jim Balter about 11 years
    Point taken about that quote. Sorry, my bad.