Time complexity of memory allocation

Solution 1

One of the things you have to realise when dealing with O notation is that it is often very important to understand what n is. If n is something you can control (e.g., the number of elements in a list you want sorted), then it makes sense to look hard at it.

In most heap implementations, your n is the number of contiguous chunks of memory the manager is handling. This is decidedly not something typically under client control. The only n the client really has control over is the size of the chunk of memory she wants. Often this bears no relation to the amount of time the allocator takes. A large n could be allocated as quickly as a small n, or it might take much longer, or it might even be unserviceable. All this could change for the same n depending on how previous allocations and deallocations from other clients came in. So really, unless you are implementing a heap, the proper answer is that it is non-deterministic.

This is why hard realtime programmers try to avoid dynamic allocation (after startup).

Solution 2

The time complexity for a heap allocator can be different on different systems, depending on what they might be optimizing for.

On desktop systems, the heap allocator probably uses a mixture of different strategies including caching recent allocations, lookaside lists for common allocation sizes, bins of memory chunks with certain size characteristics, etc. to try and keep allocation time down but also keep fragmentation manageable. See the notes for Doug Lea's malloc implementation for an overview of the various techniques that are used: http://g.oswego.edu/dl/html/malloc.html

For simpler systems, a strategy of first fit or best fit might be used, with the free blocks stored on a linked list (which would give an O(N) time, with N being the number of free blocks). But a more sophisticated storage structure such as an AVL tree could be used to locate free blocks in O(log N) time (http://www.oocities.org/wkaras/heapmm/heapmm.html).
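
To make the O(N) case concrete, here is a minimal first-fit sketch in C, assuming the free blocks are kept on a singly linked list as described above. All names are hypothetical, and splitting, coalescing, alignment and header bookkeeping are omitted for brevity:

    /* Hypothetical first-fit sketch: the free blocks live on a singly linked
     * list, so allocation is O(N) in the number of free blocks. */
    #include <stddef.h>

    typedef struct free_block {
        size_t size;               /* usable bytes in this block */
        struct free_block *next;   /* next free block, or NULL   */
    } free_block;

    static free_block *free_list;  /* head of the free list */

    /* Return the first block large enough for `size`, unlinking it from the
     * list; return NULL if no free block fits. */
    static void *first_fit_alloc(size_t size)
    {
        free_block **link = &free_list;
        for (free_block *b = free_list; b != NULL; link = &b->next, b = b->next) {
            if (b->size >= size) {   /* first fit: take the first block that fits */
                *link = b->next;     /* unlink it from the free list */
                return b;
            }
        }
        return NULL;                 /* no suitable free block */
    }

A best-fit allocator would scan the whole list instead of stopping at the first match, which is still O(N) but with a larger constant.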

A realtime system might use a heap allocator like TLSF (Two-Level Segregate Fit), which has an O(1) allocation cost: http://www.gii.upv.es/tlsf/
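
To give an idea of how TLSF avoids searching, here is a rough sketch of its two-level index computation. This is not the real library's code or API: SL_BITS is an assumed configuration constant, the GCC/Clang builtin is an assumption of this sketch, and the special handling of small sizes is omitted:

    /* Rough sketch of the TLSF indexing idea: a request size maps in O(1) to
     * a first-level index (the position of its most significant bit) and a
     * second-level index (the next SL_BITS bits), selecting one of many
     * segregated free lists. A bitmap of non-empty lists per level plus a
     * find-first-set instruction then yields a suitable list without any
     * searching loop. */
    #include <stddef.h>

    #define SL_BITS 4   /* 2^4 = 16 second-level lists per first-level range */

    typedef struct { unsigned fl, sl; } tlsf_index;

    /* Assumes size >= (1 << SL_BITS); the real allocator treats smaller
     * sizes separately. Uses a GCC/Clang builtin for count-leading-zeros. */
    static tlsf_index tlsf_mapping(size_t size)
    {
        tlsf_index idx;
        unsigned long s = (unsigned long)size;
        /* first level: index of the most significant set bit (floor(log2(size))) */
        idx.fl = (unsigned)(8 * sizeof s - 1) - (unsigned)__builtin_clzl(s);
        /* second level: the SL_BITS bits just below the MSB subdivide that range */
        idx.sl = (unsigned)((s >> (idx.fl - SL_BITS)) & ((1ul << SL_BITS) - 1));
        return idx;
    }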

Solution 3

Just two remarks:

  • TLSF is O(1) in the sense that it does not contain a single loop, and it manages up to 2 GB. Although it is really hard to believe, just check the code.

  • It is not true that the "best fit" policy (finding the tightest block) is the most suitable for achieving small fragmentation. It is far from trivial to demonstrate this claim; in fact it has not been formally proven, but there is a lot of evidence pointing in that direction. (A nice research topic.)

Solution 4

I would think it would generally be O(n) where n is the number of available memory blocks (since you have to scan the available memory blocks to find a suitable one).

Having said that, I've seen optimizations that can make it faster, specifically maintaining multiple lists of available blocks depending on their size ranges (so blocks less than 1k are in one list, blocks from 1k to 10k are in another list and so on).

This is still O(n) however, just with a smaller n.

I'd be interested in seeing your source that there's a heap allocation that's unbounded (if, by that, you mean it could take forever).

Solution 5

Just check out how typical allocators work.

A bump-the-pointer allocator works in O(1), and it's a small '1' at that.
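
As a concrete illustration of that "small '1'", here is a minimal bump-the-pointer (arena) allocator sketch in C. The names and the alignment constant are illustrative, and it assumes a pre-reserved region of memory:

    /* Minimal bump-the-pointer allocator sketch: allocation is one pointer
     * increment plus a bounds check, hence O(1) with a very small constant.
     * Individual blocks cannot be freed; the whole arena is reset at once. */
    #include <stddef.h>
    #include <stdint.h>

    typedef struct {
        uint8_t *next;   /* next free byte in the arena   */
        uint8_t *end;    /* one past the end of the arena */
    } arena;

    static void *arena_alloc(arena *a, size_t size)
    {
        size = (size + 15) & ~(size_t)15;        /* round up to 16-byte alignment */
        if ((size_t)(a->end - a->next) < size)
            return NULL;                         /* arena exhausted */
        void *p = a->next;
        a->next += size;                         /* "bump the pointer" */
        return p;
    }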

For a segregated-storage allocator, allocation of k bytes means "return the first block from List(n)", where List(n) is the list of blocks of n bytes with n >= k. It might find that List(n) is empty, in which case a block from the next list (List(2n)) has to be split, with both resulting blocks of n bytes being put onto List(n). This effect might ripple through all available sizes, making for a complexity of O(ns), where ns is the number of different sizes available, and ns = log(N), where N is the largest block size available, so even that would be small. In most cases, especially after a number of blocks have been allocated and deallocated, the complexity is O(1).
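
Here is a rough sketch of that scheme in C, with hypothetical names and power-of-two size classes only. The lists would have to be seeded with memory from one or more large initial blocks, and freeing/coalescing are left out:

    /* One free list per power-of-two block size. If the matching list is
     * empty, a block from the next size up is split and both halves are put
     * onto the smaller list; the ripple is bounded by the number of size
     * classes (the log of the largest block size). */
    #include <stddef.h>

    #define MIN_SHIFT 4                     /* smallest block: 2^4 = 16 bytes */
    #define NUM_SIZES 20                    /* sizes 2^4 .. 2^23 bytes        */

    typedef struct block { struct block *next; } block;
    static block *lists[NUM_SIZES];         /* lists[i]: blocks of 2^(i+MIN_SHIFT) bytes */

    static void   push(int i, block *b) { b->next = lists[i]; lists[i] = b; }
    static block *pop (int i) { block *b = lists[i]; if (b) lists[i] = b->next; return b; }

    /* Make lists[i] non-empty by splitting a larger block if necessary. */
    static int refill(int i)
    {
        if (lists[i]) return 1;
        if (i + 1 >= NUM_SIZES || !refill(i + 1)) return 0;   /* ripple upward */
        block *big  = pop(i + 1);
        size_t half = (size_t)1 << (i + MIN_SHIFT);
        push(i, big);                                /* first half  */
        push(i, (block *)((char *)big + half));      /* second half */
        return 1;
    }

    static void *seg_alloc(size_t k)
    {
        int i = 0;                                   /* smallest class with 2^(i+MIN_SHIFT) >= k */
        while (i < NUM_SIZES && ((size_t)1 << (i + MIN_SHIFT)) < k) i++;
        if (i == NUM_SIZES || !refill(i)) return NULL;
        return pop(i);                               /* O(1) when the list is already non-empty */
    }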

Comments

  • dsimcha
    dsimcha almost 2 years

    What is the time complexity of dynamic memory allocation using new, malloc, etc.? I know very little about how memory allocators are implemented, but I assume the answer is that it depends on the implementation. Therefore, please answer for some of the more common cases/implementations.

    Edit: I vaguely remember hearing that heap allocation is unbounded in the worst case, but I'm really interested in the average/typical case.

  • Daniel Spiewak
    Daniel Spiewak over 15 years
    There could be a really bad implementation of malloc which tried to move things around and assign an optimal memory block when the heap is nearly full (an NP-complete task). It should still terminate, though, in finite memory.
  • doppelfish
    doppelfish over 15 years
    Dynamic memory is typically needed when the amount of data to be processed cannot be determined before run time. Memory allocated typically translates into processing time. So it's not so much about the run time of allocation, but about whether the need for heap memory arises in the first place.
  • T.E.D.
    T.E.D. about 14 years
    Well, it's only really needed when the upper limit of the amount cannot be reasonably determined before runtime. If you can limit the amount at compile time, and you have enough RAM, you can just preallocate the max.
  • Daniel Stutzbach
    Daniel Stutzbach about 14 years
    You mean "the proper answer is that it is not well defined". "Non-deterministic" means something different. See en.wikipedia.org/wiki/Nondeterministic_algorithm
  • T.E.D.
    T.E.D. about 14 years
    @Daniel - I'm using the term as it is commonly used in realtime programming circles. For example, my RTOS docs contain a table of "non-deterministic C RTL calls", and there's an entire page on "Deterministic Memory" (as opposed to Windows' non-deterministic memory). As a proud holder of an MS in CS, I do know where you are coming from, and I am sorry.
  • Number47
    Number47 over 10 years
    @T.E.D., isn't your last sentence opposing your conclusion that we shouldn't be interested in the complexity? I'm in a situation where I can't reasonably bound the amount of needed space, so I'm thinking about using some array approach with eventual lazy copying. Without any clue about the performance of the compiler's algorithm, I can't decide whether my solution will be better or worse.
  • user207421
    user207421 about 9 years
    Quite so. It has always seemed to me that best-fit is the worst policy, and worst-fit the best policy.
  • Eric Canton
    Eric Canton over 4 years
    Not that it's hard to find, but the URL to Doug Lea's malloc has changed slightly: gee.cs.oswego.edu/dl/html/malloc.html
  • Pavel Kirienko
    Pavel Kirienko about 4 years
    The statement "you have to scan the available memory blocks to find a suitable one" is not correct in general because there are well-known memory allocation algorithms that are constant-time (e.g., buddy allocator, half-fit, TLSF). Embedded software engineers sometimes appear to have a somewhat distorted view of memory allocation algorithms and their properties.
  • Pavel Kirienko
    Pavel Kirienko about 4 years
    Given that this is the accepted answer, I think it would benefit from mentioning that O(1) memory allocation algorithms exist and are leveraged in a certain class of hard real-time applications (e.g., buddy allocator, half-fit, TLSF). Worst-case fragmentation is also well understood and one can compute an upper bound for the amount of memory required to ensure that catastrophic fragmentation does not occur in a given application. Embedded developers sometimes appear to hold misconceptions about dynamic memory management algorithms and their properties.
  • T.E.D.
    T.E.D. about 4 years
    @PavelKirienko - I've used TLSF myself. However, very few (none that I know of actually) OS's use something like it for their standard OS memory allocations that you'd get when you do a C malloc or a C++ new.
  • Pavel Kirienko
    Pavel Kirienko over 3 years
    This should be the accepted answer. In the interest of displacing the common misconception that memory allocation is inherently non-time-deterministic, here is an O(1) allocator for embedded systems I made: github.com/pavel-kirienko/o1heap
  • Pavel Kirienko
    Pavel Kirienko over 3 years
    @T.E.D. Yes, indeed. As far as I see, open-source real-time OS are commonly shipped with traditional allocators borrowed from conventional (non-RT) systems, where the core assumptions and trade-offs are incompatible with RT. A typical O(n) best-fit allocator may perform better on average (both time- and fragmentation-wise) but an embedded system would typically be concerned about the worst case, where traditional approaches perform poorly. I think you should mention this aspect in your answer, considering its popularity.
  • porton
    porton about 3 years
    gii.upv.es/tlsf claims constant-time memory allocation. But does that solution scale for systems of infinite memory (and infinite word size)?
  • Alexis Wilke
    Alexis Wilke over 2 years
    @DanielSpiewak That won't work unless you can somehow update all the pointers you ever returned with malloc(). Moving things around is something the kernel does. On older computer systems that didn't have an MMU, that was something they would sometimes do. The old Mac OS could do that with code on 68k processors when your block was small enough (under 64 kB of code) because it could use the PC to access all the functions.