Why is quicksort better than mergesort?

Solution 1

Quicksort has O(n^2) worst-case runtime and O(n log n) average-case runtime. However, it's superior to merge sort in many scenarios, because many factors influence an algorithm's runtime, and, when taking them all together, quicksort wins out.

In particular, the often-quoted runtime of sorting algorithms refers to the number of comparisons or the number of swaps needed to sort the data. This is indeed a good measure of performance, especially since it's independent of the underlying hardware design. However, other things – such as locality of reference (i.e. do we read lots of elements which are probably in cache?) – also play an important role on current hardware. Quicksort in particular requires little additional space and exhibits good cache locality, and this makes it faster than merge sort in many cases.

In addition, it's very easy to avoid quicksort's worst-case run time of O(n^2) almost entirely by using an appropriate choice of the pivot – such as picking it at random (this is an excellent strategy).
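
To make this concrete, here is a minimal sketch of quicksort with a random pivot. The function name quicksort_random and the use of std::mt19937 are this sketch's own choices, not taken from any particular library:

    #include <random>
    #include <utility>
    #include <vector>

    // Sketch: quicksort over a[lo, hi) with a randomly chosen pivot. A random
    // pivot makes the O(n^2) worst case vanishingly unlikely for any fixed
    // input, because no particular input is consistently "bad".
    void quicksort_random(std::vector<int>& a, int lo, int hi, std::mt19937& rng) {
        if (hi - lo < 2) return;                         // 0 or 1 elements: done
        std::uniform_int_distribution<int> pick(lo, hi - 1);
        std::swap(a[lo], a[pick(rng)]);                  // move random pivot to front
        const int pivot = a[lo];
        int i = lo + 1;
        for (int j = lo + 1; j < hi; ++j)                // Lomuto-style partition
            if (a[j] < pivot) std::swap(a[i++], a[j]);
        std::swap(a[lo], a[i - 1]);                      // pivot into its final place
        quicksort_random(a, lo, i - 1, rng);             // elements < pivot
        quicksort_random(a, i, hi, rng);                 // elements >= pivot
    }

Calling quicksort_random(v, 0, (int)v.size(), rng) with a seeded generator sorts the whole vector; recursing into the shorter side first (not shown) additionally bounds the stack depth to O(log n).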

In practice, many modern implementations of quicksort (in particular libstdc++'s std::sort) are actually introsort, whose theoretical worst case is O(n log n), same as merge sort. It achieves this by limiting the recursion depth and switching to a different algorithm (heapsort) once the depth exceeds a threshold proportional to log n.
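
A sketch of that depth-limiting mechanism follows. This is an illustration of the idea, not libstdc++'s actual code, and the 2 * log2(n) threshold is one common choice rather than a mandated constant:

    #include <algorithm>
    #include <cmath>
    #include <utility>
    #include <vector>

    // Introsort sketch: recurse like quicksort, but once the recursion gets
    // suspiciously deep, heapsort the remaining range so that the overall
    // worst case stays O(n log n).
    void introsort_impl(std::vector<int>& a, int lo, int hi, int depth_limit) {
        if (hi - lo < 2) return;
        if (depth_limit == 0) {                          // too deep: switch to heapsort
            std::make_heap(a.begin() + lo, a.begin() + hi);
            std::sort_heap(a.begin() + lo, a.begin() + hi);
            return;
        }
        std::swap(a[lo], a[lo + (hi - lo) / 2]);         // middle element as pivot
        const int pivot = a[lo];
        int i = lo + 1;
        for (int j = lo + 1; j < hi; ++j)                // same partition as the sketch above
            if (a[j] < pivot) std::swap(a[i++], a[j]);
        std::swap(a[lo], a[i - 1]);
        introsort_impl(a, lo, i - 1, depth_limit - 1);
        introsort_impl(a, i, hi, depth_limit - 1);
    }

    void introsort(std::vector<int>& a) {
        int limit = a.empty() ? 0 : 2 * (int)std::log2((double)a.size()) + 1;
        introsort_impl(a, 0, (int)a.size(), limit);
    }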

Solution 2

As many people have noted, the average-case performance of quicksort is better than that of mergesort. But this is only true if you assume constant time to access any piece of memory on demand.

In RAM this assumption is generally not too bad (it is not always true because of caches, but it is not too bad). However, if your data structure is big enough to live on disk, then quicksort gets killed by the fact that your average disk does something like 200 random seeks per second, while that same disk has no trouble reading or writing megabytes of data per second sequentially – which is exactly what mergesort does.

Therefore, if data has to be sorted on disk, you really, really want to use some variation on mergesort. (Generally you quicksort sublists that fit in memory, then start merging them together above some size threshold; a sketch of this pattern follows.)
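
As an illustration of that pattern, here is the same idea in miniature, with in-memory vectors standing in for on-disk runs. The name external_sort and the chunking scheme are this sketch's assumptions; a real external sort would stream the runs to and from files:

    #include <algorithm>
    #include <cstddef>
    #include <functional>
    #include <queue>
    #include <tuple>
    #include <vector>

    // Sketch of the external-sort pattern: sort chunks ("runs") that fit in
    // memory, then k-way merge the sorted runs using only sequential reads
    // and writes. Assumes chunk_size > 0.
    std::vector<int> external_sort(const std::vector<int>& data, std::size_t chunk_size) {
        std::vector<std::vector<int>> runs;
        for (std::size_t i = 0; i < data.size(); i += chunk_size) {
            std::size_t end = std::min(i + chunk_size, data.size());
            std::vector<int> run(data.begin() + i, data.begin() + end);
            std::sort(run.begin(), run.end());           // fast in-memory sort per run
            runs.push_back(std::move(run));
        }
        // Min-heap of (value, run index, position in run) drives the merge.
        using Item = std::tuple<int, std::size_t, std::size_t>;
        std::priority_queue<Item, std::vector<Item>, std::greater<Item>> heap;
        for (std::size_t r = 0; r < runs.size(); ++r)
            heap.emplace(runs[r][0], r, 0);              // every run is non-empty
        std::vector<int> out;
        out.reserve(data.size());
        while (!heap.empty()) {
            auto [v, r, p] = heap.top();                 // smallest head among all runs
            heap.pop();
            out.push_back(v);                            // strictly sequential "write"
            if (p + 1 < runs[r].size())
                heap.emplace(runs[r][p + 1], r, p + 1);  // sequential "read" from run r
        }
        return out;
    }

The point to notice is the access pattern: each run is read front to back and the output is written strictly in order, which is what keeps a disk-based version seek-free.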

Furthermore, if you have to do anything with datasets of that size, think hard about how to avoid seeks to disk. For instance, this is why it is standard advice that you drop indexes before doing large data loads in databases, and then rebuild the index later. Maintaining the index during the load means constantly seeking to disk. By contrast, if you drop the indexes, then the database can rebuild the index by first sorting the information to be dealt with (using a mergesort of course!) and then loading it into a B-tree data structure for the index. (B-trees are naturally kept in order, so you can load one from a sorted dataset with few seeks to disk.)

There have been a number of occasions where understanding how to avoid disk seeks has let me make data processing jobs take hours rather than days or weeks.

Solution 3

Actually, QuickSort is O(n^2). Its average-case running time is O(n log n), but its worst case is O(n^2), which occurs when you run it on a list that contains few unique items. Randomization takes O(n). Of course, this doesn't change its worst case, it just prevents a malicious user from making your sort take a long time.

QuickSort is more popular because it:

  1. Is in-place (MergeSort requires extra memory linear in the number of elements to be sorted; see the merge sketch below).
  2. Has a small hidden constant.
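
To see where mergesort's extra memory goes, here is a textbook merge step (a sketch, using the usual copy-the-left-half formulation); the temporary buffer on its first line is exactly the linear extra space point 1 refers to, while quicksort's partition only shuffles elements within the array itself:

    #include <cstddef>
    #include <vector>

    // Sketch: merge the sorted halves a[lo, mid) and a[mid, hi) back into
    // a[lo, hi). The left half is copied into a temporary buffer first; that
    // buffer is the O(n) auxiliary memory which in-place quicksort avoids.
    void merge_with_buffer(std::vector<int>& a, int lo, int mid, int hi) {
        std::vector<int> left(a.begin() + lo, a.begin() + mid);  // the extra space
        std::size_t i = 0;
        int j = mid, k = lo;
        while (i < left.size() && j < hi)
            a[k++] = (left[i] <= a[j]) ? left[i++] : a[j++];     // <= keeps the sort stable
        while (i < left.size())
            a[k++] = left[i++];                                  // drain the buffer
    }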

Solution 4

"and yet most people use Quicksort instead of Mergesort. Why is that?"

One psychological reason that has not been given is simply that Quicksort is more cleverly named, i.e. good marketing.

Yes, Quicksort with triple partitioning is probably one of the best general-purpose sort algorithms, but there's no getting over the fact that "Quick" sort sounds much more powerful than "Merge" sort.

Solution 5

As others have noted, the worst case of Quicksort is O(n^2), while mergesort and heapsort stay at O(n log n). In the average case, however, all three are O(n log n); so for the vast majority of cases they're comparable.

What makes Quicksort better on average is that its inner loop compares several values against a single one (the pivot), while in the other two algorithms both operands change for each comparison. In other words, Quicksort does half as many reads as the other two algorithms. On modern CPUs performance is heavily dominated by access times, so in the end Quicksort ends up being a great first choice. The two inner loops below illustrate the difference.
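
The following loops are illustrative sketches, not code from any particular library. In the partition loop, one operand of every comparison is the loop-invariant pivot; in the merge loop, both operands advance:

    #include <cstddef>
    #include <utility>
    #include <vector>

    // Quicksort's inner loop (Lomuto partition of a[lo, hi)): every comparison
    // is against the same loop-invariant 'pivot', which can sit in a register,
    // so each iteration needs only one fresh read from the array.
    int partition(std::vector<int>& a, int lo, int hi) {
        const int pivot = a[lo];
        int i = lo + 1;
        for (int j = lo + 1; j < hi; ++j)
            if (a[j] < pivot)                            // one fresh read per comparison
                std::swap(a[i++], a[j]);
        std::swap(a[lo], a[i - 1]);
        return i - 1;                                    // pivot's final position
    }

    // Mergesort's inner loop: both comparison operands advance, so each
    // iteration reads two fresh elements (one from each input run).
    void merge(const std::vector<int>& l, const std::vector<int>& r,
               std::vector<int>& out) {
        std::size_t i = 0, j = 0;
        while (i < l.size() && j < r.size())             // two fresh reads per comparison
            out.push_back(l[i] <= r[j] ? l[i++] : r[j++]);
        while (i < l.size()) out.push_back(l[i++]);      // drain whichever run remains
        while (j < r.size()) out.push_back(r[j++]);
    }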

Comments

  • jfs
    jfs over 15 years
    Actually, there are implementation of QuickSort which are O(n*log(n)), not O(n^2) in the worst case.
  • Cristian Ciupitu
    Cristian Ciupitu over 15 years
    It also depends on the computer architecture. Quicksort benefits from the cache, while MergeSort doesn't.
  • Jonathan Leffler
    Jonathan Leffler over 15 years
    Your second sentence says "...mergesort is potentially far slower than ... mergesort". The first reference should presumably be to quicksort.
  • KriptSkitty
    KriptSkitty over 15 years
    Actually, if it is the qsort() library function you are talking about, it may or may not be implemented as quicksort.
  • CesarB
    CesarB over 15 years
    @J.F. Sebastian: These are most probably introsort implementations, not quicksort (introsort starts as quicksort and switches to heapsort if it is about to stop being n*log(n)).
  • Konrad Rudolph
    Konrad Rudolph over 15 years
    I've seen my share of radix sorts. But it's pretty hard to use because if analyzed correctly, its runtime is not O(n) as it depends on more than the number of input elements. In general, it's very hard to make that kind of strong predictions that radix sort needs to be efficient about the input.
  • Marcin
    Marcin over 15 years
    You can implement a mergesort in place.
  • Anders Eurenius
    Anders Eurenius over 15 years
    It is O(n), where n is the total input size, that is, including the size of the elements. It's true that you can implement it so you have to pad with a lot of zeroes, but it's nonsense to use a poor implementation for comparison. (That said, implementation can be hard, ymmv.)
  • KriptSkitty
    KriptSkitty over 15 years
    Konrad, sorry to be a bit anal about this, but where do you find that guarantee? I can't find it in the ISO C standard, or in the C++ standard.
  • Ash
    Ash over 14 years
    @Dark Shikari, I have made some corrections by editing your answer directly. See this page for a clear demonstration of the clarifications: sorting-algorithms.com/quick-sort. In summary, default quicksort is worst on a list with few unique items, not a reverse-sorted list. Therefore this cannot be avoided by ensuring the list is randomly ordered.
  • Jason Orendorff
    Jason Orendorff over 14 years
    Note that if you're using GNU libc, qsort is a merge sort.
  • Jason Orendorff
    Jason Orendorff over 14 years
    GNU libc's qsort is a merge sort unless the number of elements is truly gigantic or the temporary memory can't be allocated. cvs.savannah.gnu.org/viewvc/libc/stdlib/…
  • Jason Orendorff
    Jason Orendorff over 14 years
    Er, to be precise, it's a merge sort unless the necessary temporary memory can't be allocated. cvs.savannah.gnu.org/viewvc/libc/stdlib/…
  • Sev
    Sev over 13 years
    The Wikipedia article states it switches to heapsort, not mergesort...just FYI.
  • Konrad Rudolph
    Konrad Rudolph over 13 years
    @Sev: … as does the original paper. Thanks for pointing out the mistake. – Not that it really matters, since their asymptotic running time is the same.
  • codeObserver
    codeObserver about 13 years
    why is this selected as the correct answer? All it explains is how quicksort's problems can be patched. It still doesn't tell why quicksort is used more than the others. Is the answer "quicksort is used more than others because after one depth you can switch to heapsort"? .. why not use heapsort in the first place then? .. just trying to understand ...
  • Konrad Rudolph
    Konrad Rudolph about 13 years
    @p1 Good question. The real answer is that on average, for average data, quicksort is faster than merge sort (and heap sort, for that matter), and even though the worst case of quicksort is slower than merge sort’s, this worst case can be mitigated very easily (hence my answer).
  • user1520427
    user1520427 over 11 years
    @KonradRudolph What do you mean when you say "quicksort is faster than mergesort"? Are you talking about the theoretical analysis, or the implementation in practice? It was my understanding that mergesort is better in terms of analysis, but due to caching quicksort is often preferred (disregarding all other factors).
  • Konrad Rudolph
    Konrad Rudolph over 11 years
    @user1520427 I was talking about performance in practice. I haven’t done a rigorous analysis (i.e. not merely in terms of big-O) of the number of comparisons necessary in mergesort – I suspect it might even be lower than that in quicksort on average.
  • CaTalyst.X
    CaTalyst.X about 11 years
    How does Quicksort have a better average case complexity? They are both O(nlgn). I would argue that an attacker won't be providing input to any sorting algorithm... but in the interest of not assuming security by obscurity, let's assume he could. While n^2 running time is worse than nlgn, it is not sufficiently worse that a web server would crash based on a single attack. In fact, the DOS argument is pretty much null, because any web server is vulnerable to a DDOS attack, and it is more likely for an attacker to use a distributed network of hosts, all TCP SYN flooding.
  • ThE uSeFuL
    ThE uSeFuL about 11 years
    @Dark Shikari, Why do you say that merge sort requires more memory compared to the quick sort?
  • lanoxx
    lanoxx almost 11 years
    Mergesort can be implemented in-place, such that it does not need extra space. For example with a doubly linked list: stackoverflow.com/questions/2938495/…
  • Stan R.
    Stan R. almost 11 years
    I wouldn't say merge sort uses tons of extra memory, it uses O(n) space...since it uses an auxiliary array.
  • chutsu
    chutsu about 10 years
    Very nice, didn't think about the assumptions made for accessing the data structure. Good insight :)
  • Shashwat
    Shashwat almost 10 years
    Quicksort is better in terms of memory as well.
  • Konrad Rudolph
    Konrad Rudolph almost 10 years
    @Shashwat That is indeed true when compared to merge sort, but my answer applies more generally in comparison to other sorting methods as well, and there the memory aspect is no longer true. For instance, heapsort uses less memory than quicksort (O(1) vs O(log n)).
  • Clearer
    Clearer over 9 years
    Merge sort is only stable if the merge algorithm is stable; this is not guaranteed.
  • Clearer
    Clearer over 9 years
    Merge sort may be implemented in a way that only requires O(1) extra storage, but most of those implementations suffer greatly in terms of performance.
  • Aidiakapi
    Aidiakapi about 9 years
    @CristianCiupitu Shouldn't that be the other way around? Merge sort accesses data sequentially, causing (both target and destination) cache lines to stay in memory. Quicksort can on the other hand be all over the place. Not to say that mergesort is better, but I don't think this is one of its downsides.
  • Aidiakapi
    Aidiakapi almost 9 years
    @CristianCiupitu I'm aware that Quicksort utilizes the cache, what I'm disagreeing with is your assertion that merge sort doesn't. Merge sort will generally keep both arrays in the cache, and it pretty much exclusively accesses data sequentially, which is the best-case for the cache. Quicksort has an edge over merge sort because of many factors, such as not requiring a secondary array, double pivot scenarios. But cache locality is a strong point for both algorithms.
  • James Wierzba
    James Wierzba almost 9 years
    Can you explain what you mean by "seek to disk"? Does it mean searching for some single value when the data is stored on disk?
  • david_adler
    david_adler over 8 years
    @Marcin Implementing in-place mergesort is notoriously complicated and often leads to many more swaps which overpower the efficiency gains of reduced memory usage see penguin.ewu.edu/cscd300/Topic/AdvSorting/MergeSorts/…
  • nclark
    nclark almost 8 years
    @JamesWierzba I take it from context that he means "seeking to a location on disk". "Seeking" on a rotating disk device means, picking up the read head and moving it to a new absolute address, which is a notoriously slow operation. When you access data in the order it was stored, the disk hardware doesn't have to seek, it just plows along at high speed, reading items sequentially.
  • paul23
    paul23 over 7 years
    Worst case of quicksort is O(n), mergesort O(n log n) - so there's a big difference there.
  • paul23
    paul23 over 7 years
    worst case quicksort is O(n^2) - can't edit my previous comment and made a typo
  • SmallChess
    SmallChess over 7 years
    Not exactly true. Quicksort scans the inputs linearly and allows more disk access caching than Mergesort. This answer says otherwise.
  • sam
    sam almost 7 years
    Can some explain this a bit more? This is how I am seeing it: Quicksort: If we are going with random pivot, the call stack has fragments of the array partitioned in a random way. This requires random access. However, for each call in the stack, both left and right pointers move sequentially. I am assuming these would be kept in the cache. The swaps are operations again on information that's in cache (and eventually written to Disk). (continued in my next comment)
  • sam
    sam almost 7 years
    MergeSort: The call stack builds by dividing the array logarithmically, going depth-first. And, merging from bottom (left-most of the array) to up. Dividing the array part can be done with just indices. So, no need to jump randomly across the array. However, when merging the additional/result array would be built/dumped-to in sequential write. Is this correct?
  • piotrek
    piotrek over 6 years
    quick-sort is NOT in-place. in place means O(1) additional memory. quick-sort needs recursion or a stack and therefore uses O(log n) additional memory
  • Winter Melon
    Winter Melon over 6 years
    quicksort benefits from randomness of pivot selection. The random pivot would naturally tend toward 50:50 partition and is unlikely to be consistently towards one of the extremes. The constant factor of nlogn is fairly low till average partitioning is 60-40 or even till 70-30.
  • ssd
    ssd about 6 years
    Just a contribution avoiding the costly disk read / write overhead: When sorting very large data that needs disk access, it's advantageous to switch the direction of sort for each pass. That is, at the very top level of the loop, once you go from 0 towards n and the next time you go from n towards 0. This brings the advantage of retreating (sorting) the data blocks that are already available in the memory (cache) and attacking twice for only one disk access. I think most DBMS's use this optimization technique.
  • Jim Balter
    Jim Balter over 5 years
    This is complete nonsense. quicksort is used because of its performance, not "philosophy" ... and the claim that "order is bound to be lost" is simply false.
  • Jim Balter
    Jim Balter over 5 years
    No, QuickSort's worst case does not happen when the array is already sorted, unless you use the first or last item as the pivot, but no one does that.
  • Jim Balter
    Jim Balter over 5 years
    Pivots have nothing to do with why QS has fewer recursive calls ... it's because half of QS's recursion is tail recursion, which can be eliminated.
  • Jim Balter
    Jim Balter over 5 years
    That the pivots aren't part of the sorts at the next level isn't why QS is more performant. See the other answers for additional insight.
  • Jim Balter
    Jim Balter over 5 years
    "Actually, QuickSort is O(n2)" -- Actually, this is false.
  • Jim Balter
    Jim Balter over 5 years
    @piotrek, yes it is in place, because the extra memory is for recursion, not data. And on a 64 bit machine, QS never requires more than 64 words of additional memory, which is preallocated. (More than 64 won't be used even for a bad dataset as long as the shorter subsequence is sorted first, which is what all practical implementations do.)
  • Jim Balter
    Jim Balter over 5 years
    "the existing great answers" -- which are those? I can't locate them.
  • Jim Balter
    Jim Balter over 5 years
    @Clearer It's guaranteed if <= is used for comparisons rather than <, and there's no reason not to.
  • Jim Balter
    Jim Balter over 5 years
    O(2n) == O(n). The correct statement is that Mergesort needs O(n) additional memory (more specifically, it needs n/2 auxiliary memory). And this isn't true for linked lists.
  • Jim Balter
    Jim Balter over 5 years
    "This can be mitigated by random shuffle before sorting is started. " - er, no, that would be expensive. Instead, use random pivots.
  • Jim Balter
    Jim Balter over 5 years
    @anujpradhan "This is what a book can't teach" -- Oh, really? Is that a law of physics? Because I learned it from books.
  • Jim Balter
    Jim Balter over 5 years
    "Quicksort has a better average case complexity" -- no it doesn't.
  • Jim Balter
    Jim Balter over 5 years
    @paul23 comments can be deleted. Also, the answer already addressed your point: "in most real-world data it is possible to make design choices which minimize the probability of requiring quadratic time"
  • Clearer
    Clearer over 5 years
    @JimBalter I could easily come up with an unstable merge algorithm (quicksort for example, would serve that role). The reason why quick sort is faster than merge sort in many cases is not because of reduced overhead but because of how quicksort accesses data, which is a lot more cache friendly than a standard mergesort.
  • Jim Balter
    Jim Balter over 5 years
    @Clearer quicksort is not a merge sort ... your Dec 21 '14 statement that I responded to was strictly about merge sort and whether it is stable. quicksort and which is faster is not at all relevant to your comment or my response. End of discussion for me ... over and out.
  • Clearer
    Clearer over 5 years
    quick sort can easily be used to merge two arrays into one (copy the arrays into an array and sort it -- done). This is a very poor way of merging though, but does illustrate that it is possible to make a merge algorithm that is not stable (or efficient). My comment about why quick sort may be faster than merge sort was aimed at the original answer.
  • Nick Gallimore
    Nick Gallimore over 5 years
    What is the worst case runtime for quicksort when using median 3 to select the pivot? Would it still be O(n^2)? I see suggestions about versions of pure quicksort being O(nlogn). I am guessing this would only be possible by picking a correct pivot every time.
  • Nick Gallimore
    Nick Gallimore over 5 years
    See here for in place mergesort that runs in O(n^2) stackoverflow.com/questions/2571049/…
  • Soner from The Ottoman Empire
    Soner from The Ottoman Empire over 5 years
    @JimBalter Sir, would you mind sharing your brilliant and worthwhile ideas with us about their performances as an answer to the question? Thanks in advance.
  • Jim Balter
    Jim Balter over 5 years
    in place mergesort that runs in O(n^2) -- I think you misread or mistyped that. In-place merge sort can be done in O(n * (logn)^2), and there are a couple of examples on that page.
  • Jim Balter
    Jim Balter over 5 years
    @nfgallimore when using median 3 to select the pivot? Would it still be O(n^2)? -- Yes, certainly ... it should be clear that which element is chosen as the pivot can't change the worst case. versions of pure quicksort being O(nlogn) -- there's no such animal. But impure versions, like IntroSort and pdqsort (github.com/orlp/pdqsort) do have O(nlogn) worst case.
  • Jason
    Jason over 5 years
    At least two people here have mentioned that quicksort is better than mergesort for caches. I feel that this is wrong. The last call of the partition method in quicksort can exchange elements in the array that are nowhere near each other, causing cache misses.
  • Nick Gallimore
    Nick Gallimore about 5 years
    @jimbalter can you give an example data set where median 3 is in O(nlogn)?
  • Nick Gallimore
    Nick Gallimore about 5 years
    Does not answer the question of which is better. The name of the algorithm is irrelevant in determining which is better.
  • Jim Balter
    Jim Balter about 5 years
    @nfgallimore quicksort with median 3 pivot on an ordered list is of course O(nlogn). Perhaps you meant isn't O(nlogn), or is O(n^2) ... I won't give an example because it takes effort to create one. Examples aren't relevant ... logic is.
  • Nick Gallimore
    Nick Gallimore about 5 years
    Yes I meant isn't in O(nlogn)
  • eonil
    eonil almost 5 years
    Does qsort beat mergesort even for sorted inputs if comparison is cheap?
  • RvPr
    RvPr almost 5 years
    @JimBalter Which "other answers" are you referring to? The top answer just says that QS "requires little additional space and exhibits good cache locality" but gives no explanation as to why that is, nor does it provide any citations. The 2nd answer simply says that merge-sort is better for larger data-sets
  • Jim Balter
    Jim Balter almost 5 years
    You're moving the goalposts, from why QS is more performant to explaining basic facts about how it operates. Answers to other questions do that: stackoverflow.com/questions/9444714/… ... I hope that's enough for you; I won't respond further.
  • Admin
    Admin over 4 years
    @JimBalter "yes it is in place, because the extra memory is for recursion, not data" - that's a false dichotomy. Recursion is used to store data (function parameters). If you couldn't do recursion you would still need an explicit stack. Same amount of memory is consumed.
  • Jim Balter
    Jim Balter over 4 years
    ^ This is tiresome. As every honest person knows, quicksort is "in-place" because it writes the sorted result into the original buffer. The absurd claim by piotek that it's not in-place because it uses additional memory is wrong both semantically and in practice. As I said, "on a 64 bit machine, QS never requires more [than] 64 words of additional memory, which is preallocated". It's a non-recursive function that uses 64 words of stack space. The stack space is already allocated ... no additional memory is consumed and there is no allocation code; the stack pointer is simply moved by 64 words.
  • supercat
    supercat about 4 years
    Do any variations of Quick Sort notify the comparison function about partitions, in such a way that would allow it to exploit situations where a substantial portion of the key will be the same for all items in a partition?
  • Midhunraj R Pillai
    Midhunraj R Pillai over 3 years
    Will anyone elaborate on the term "dropping the indexes"?
  • Null_Space
    Null_Space over 3 years
    Randomization takes O(n)? O(n log n) in expectation.
  • Julian
    Julian about 3 years
    On one point I disagree: randomized pivot selection is not an excellent strategy. Firstly, because picking any single element as the pivot will not produce very good pivots on average (you'll get better behavior statistically by selecting the median of three elements) and secondly, because generating a random number is expensive. With introsort, you don't need randomization in order to protect against median-of-three killer sequences.
  • Konrad Rudolph
    Konrad Rudolph about 3 years
    @Julian A real statistical analysis is a bit more complex and I forgot the details, but unless you have a good reference I don’t believe that median of three is better than random pivot (the probability of getting super-O(n log n) runtime can be proved to be exponentially low). In fact, the random pivot performs excellent in practice. Its major drawback (and why it’s rarely used in standard libraries) is that it modifies the global random state. Efficiency of the RNG is an issue, true, but there are very efficient PRNGs.
  • Julian
    Julian about 3 years
    Here's your reference: citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.14.8162. To be honest, I think it straightforward that the median of three elements is more likely to be close to the middle of the range than just any single element.
  • Konrad Rudolph
    Konrad Rudolph about 3 years
    @Julian It’s straightforward only if you pick those three elements at random! And your reference (which I know very well) does not mention the expected runtime of randomised quicksort. In fact, it does not discuss randomised quicksort at all, besides noting the same caveat mentioned in my previous comment.
  • Julian
    Julian about 3 years
    It does, see "Choosing a partitioning element" on page 1254. And the data are randomized to start with (otherwise there would be no need to sort them), so whether you pick the first three elements, three random elements or first-middle-last, you'll get the same stochastic behavior. First-middle-last however better copes with data that are already somewhat sorted or reverse-sorted.
  • Konrad Rudolph
    Konrad Rudolph about 3 years
    @Julian “unsorted” ≠ “random”. The section you’re quoting doesn’t compare the runtime, it compares the number of comparisons. Obviously we can do better than random here — just picking the true median every time does better, but is prohibitively expensive (as the same section explains). Calculating the median of three random elements — in the general case — absolutely doesn’t have the same stochastic properties as choosing three fixed elements (although I’ll admit that the difference is, in practice, very slight).