Why does C# memory stream reserve so much memory?


Solution 1

Because this is the algorithm it uses to expand its capacity:

public override void Write(byte[] buffer, int offset, int count) {

    //... Removed Error checking for example

    int i = _position + count;
    // Check for overflow
    if (i < 0)
        throw new IOException(Environment.GetResourceString("IO.IO_StreamTooLong"));

    if (i > _length) {
        bool mustZero = _position > _length;
        if (i > _capacity) {
            bool allocatedNewArray = EnsureCapacity(i);
            if (allocatedNewArray)
                mustZero = false;
        }
        if (mustZero)
            Array.Clear(_buffer, _length, i - _length);
        _length = i;
    }

    //... 
}

private bool EnsureCapacity(int value) {
    // Check for overflow
    if (value < 0)
        throw new IOException(Environment.GetResourceString("IO.IO_StreamTooLong"));
    if (value > _capacity) {
        int newCapacity = value;
        if (newCapacity < 256)
            newCapacity = 256;
        if (newCapacity < _capacity * 2)
            newCapacity = _capacity * 2;
        Capacity = newCapacity;
        return true;
    }
    return false;
}

public virtual int Capacity 
{
    //...

    set {
         //...

        // MemoryStream has this invariant: _origin > 0 => !expandable (see ctors)
        if (_expandable && value != _capacity) {
            if (value > 0) {
                byte[] newBuffer = new byte[value];
                if (_length > 0) Buffer.InternalBlockCopy(_buffer, 0, newBuffer, 0, _length);
                _buffer = newBuffer;
            }
            else {
                _buffer = null;
            }
            _capacity = value;
        }
    }
}

So every time you hit the capacity limit, it doubles the capacity. It does this because the Buffer.InternalBlockCopy operation is slow for large arrays; if the stream had to resize on every Write call, performance would drop significantly.

To improve performance in your case, you could set the initial capacity to at least the size of your compressed array, and then grow the capacity by a factor smaller than 2.0 to reduce the amount of memory you are using:

const double ResizeFactor = 1.25;

private byte[] Decompress(byte[] data)
{
    using (MemoryStream compressedStream = new MemoryStream(data))
    using (GZipStream zipStream = new GZipStream(compressedStream, CompressionMode.Decompress))
    using (MemoryStream resultStream = new MemoryStream((int)(data.Length * ResizeFactor))) //Set the initial size to be the same as the compressed size + 25%.
    {
        byte[] buffer = new byte[4096];
        int iCount = 0;

        while ((iCount = zipStream.Read(buffer, 0, buffer.Length)) > 0)
        {
            if (resultStream.Capacity < resultStream.Length + iCount)
               resultStream.Capacity = Math.Max((int)(resultStream.Capacity * ResizeFactor), //Resize to 125% instead of 200%...
                                                (int)(resultStream.Length + iCount));        //...but never to less than this write needs.

            resultStream.Write(buffer, 0, iCount);
        }
        return resultStream.ToArray();
    }
}

If you wanted to, you could use an even fancier algorithm, such as resizing based on the current compression ratio:

const double MinResizeFactor = 1.05;

private byte[] Decompress(byte[] data)
{
    using (MemoryStream compressedStream = new MemoryStream(data))
    using (GZipStream zipStream = new GZipStream(compressedStream, CompressionMode.Decompress))
    using (MemoryStream resultStream = new MemoryStream((int)(data.Length * MinResizeFactor))) //Set the initial size to be the same as the compressed size + the minimum resize factor.
    {
        byte[] buffer = new byte[4096];
        int iCount = 0;

        while ((iCount = zipStream.Read(buffer, 0, buffer.Length)) > 0)
        {
            if (resultStream.Capacity < resultStream.Length + iCount)
            {
               double sizeRatio = ((double)resultStream.Position + iCount) / (compressedStream.Position + 1); //The +1 is to prevent divide by 0 errors; it may not be necessary in practice.

               //Resize to the minimum resize factor of the current capacity or the
               // compressed stream length times the compression ratio + min resize
               // factor, whichever is larger.
               resultStream.Capacity = (int)Math.Max(resultStream.Capacity * MinResizeFactor,
                                                     (sizeRatio + (MinResizeFactor - 1)) * compressedStream.Length);
            }

            resultStream.Write(buffer, 0, iCount);
        }
        return resultStream.ToArray();
    }
}

Solution 2

MemoryStream doubles its internal buffer when it runs out of space. This can lead to up to 2x waste. I cannot tell why you are seeing more than that, but this basic behavior is expected.

If you don't like this behavior, write your own stream that stores its data in smaller chunks (e.g. a List<byte[]> of 64 KB blocks). Such a scheme bounds the wasted memory to one 64 KB chunk; a sketch follows below.
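
For illustration, here is a minimal sketch of that idea. The class name ChunkedBuffer and the 64 KB chunk size are just illustrative choices, and it only implements the Write/ToArray parts needed for this scenario, not the full Stream contract:

using System;
using System.Collections.Generic;

class ChunkedBuffer
{
    private const int ChunkSize = 64 * 1024;
    private readonly List<byte[]> _chunks = new List<byte[]>();
    private int _posInCurrentChunk = ChunkSize; // forces allocation of the first chunk
    private long _length;

    public void Write(byte[] buffer, int offset, int count)
    {
        while (count > 0)
        {
            if (_posInCurrentChunk == ChunkSize)
            {
                _chunks.Add(new byte[ChunkSize]); // waste is bounded by one partially filled chunk
                _posInCurrentChunk = 0;
            }
            int toCopy = Math.Min(count, ChunkSize - _posInCurrentChunk);
            Buffer.BlockCopy(buffer, offset, _chunks[_chunks.Count - 1], _posInCurrentChunk, toCopy);
            _posInCurrentChunk += toCopy;
            offset += toCopy;
            count -= toCopy;
            _length += toCopy;
        }
    }

    public byte[] ToArray()
    {
        // Only one copy into a contiguous array, at the very end.
        byte[] result = new byte[_length];
        int written = 0;
        foreach (byte[] chunk in _chunks)
        {
            int toCopy = (int)Math.Min(ChunkSize, _length - written);
            Buffer.BlockCopy(chunk, 0, result, written, toCopy);
            written += toCopy;
        }
        return result;
    }
}

Because existing chunks are never reallocated, nothing is ever copied during writes; the only full copy happens once in ToArray (and you could skip even that if the caller can consume the chunks directly).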

Solution 3

It looks like you are looking at the total amount of allocated memory, not the last call. Since MemoryStream doubles its size on reallocation, it grows roughly twofold each time - so the total allocated memory is approximately a sum of powers of 2:

Sum(i = 0..k) 2^i = 2^(k+1) - 1

(where k is the number of re-allocations, roughly k = 1 + log2(StreamSize)).

Which is about what you see.
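
As a rough sanity check against the numbers in the question - a sketch assuming the buffer starts at MemoryStream's 256-byte minimum and simply doubles until it can hold the ~23 MB result:

using System;

class GrowthCheck
{
    static void Main()
    {
        long totalAllocated = 0;
        int capacity = 256;            // MemoryStream's minimum capacity once it starts growing
        int finalSize = 23050718;      // decompressed size from the question

        while (capacity < finalSize)
        {
            totalAllocated += capacity; // each doubling allocates a brand-new buffer of this size
            capacity *= 2;
        }
        totalAllocated += capacity;     // the final buffer that actually holds the data

        Console.WriteLine(totalAllocated); // 67108608 - close to the 67,104,936 bytes the profiler reported
    }
}

That total is about 2.9 times the ~23 MB result, which matches the factor reported in the question.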

Solution 4

Well, increasing the capacity of the stream means creating a whole new array with the new capacity and copying the old one over. That's very expensive, and if you did it for each Write, your performance would suffer a lot. So instead, the MemoryStream expands more than necessary. If you want to improve that behaviour and you know the total capacity required, simply use the MemoryStream constructor with the capacity parameter :) You can then use MemoryStream.GetBuffer instead of ToArray too.
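
A minimal sketch of that, assuming the decompressed size (the made-up parameter knownSize here) is available up front, e.g. stored alongside the compressed data, and using the same namespaces as the question's snippet (System.IO, System.IO.Compression):

private byte[] DecompressPreSized(byte[] data, int knownSize)
{
    using (MemoryStream compressedStream = new MemoryStream(data))
    using (GZipStream zipStream = new GZipStream(compressedStream, CompressionMode.Decompress))
    using (MemoryStream resultStream = new MemoryStream(knownSize)) // pre-sized: no reallocation ever happens
    {
        byte[] buffer = new byte[4096];
        int iCount;

        while ((iCount = zipStream.Read(buffer, 0, buffer.Length)) > 0)
        {
            resultStream.Write(buffer, 0, iCount);
        }

        // GetBuffer returns the internal array without copying; because the stream
        // never grew, that array is exactly knownSize bytes long.
        return resultStream.GetBuffer();
    }
}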

You're also seeing the discarded old buffers in the memory profiler (e.g. the 8 MiB buffer that was thrown away when the stream grew to 16 MiB, and so on).

Of course, you don't care about having a single contiguous array, so it might be a better idea for you to simply have a memory stream of your own that uses multiple arrays created as needed, in as big chunks as necessary, and then copy it all at once into the output byte[] (if you even need the byte[] at all; quite likely, needing it is itself a design problem).


Comments

  • Tim Meyer
    Tim Meyer almost 2 years

    Our software is decompressing certain byte data through a GZipStream, which reads data from a MemoryStream. These data are decompressed in blocks of 4KB and written into another MemoryStream.

    We've realized that the memory the process allocates is much higher than the actual decompressed data.

    Example: A compressed byte array with 2,425,536 bytes gets decompressed to 23,050,718 bytes. The memory profiler we use shows that the Method MemoryStream.set_Capacity(Int32 value) allocated 67,104,936 bytes. That's a factor of 2.9 between reserved and actually written memory.

    Note: MemoryStream.set_Capacity is called from MemoryStream.EnsureCapacity which is itself called from MemoryStream.Write in our function.

    Why does the MemoryStream reserve so much capacity, even though it only appends blocks of 4KB?

    Here is the code snippet which decompresses data:

    private byte[] Decompress(byte[] data)
    {
        using (MemoryStream compressedStream = new MemoryStream(data))
        using (GZipStream zipStream = new GZipStream(compressedStream, CompressionMode.Decompress))
        using (MemoryStream resultStream = new MemoryStream())
        {
            byte[] buffer = new byte[4096];
            int iCount = 0;
    
            while ((iCount = zipStream.Read(buffer, 0, buffer.Length)) > 0)
            {
                resultStream.Write(buffer, 0, iCount);
            }
            return resultStream.ToArray();
        }
    }
    

    Note: If relevant, this is the system configuration:

    • Windows XP 32bit,
    • .NET 3.5
    • Compiled with Visual Studio 2008
    • Adam Houldsworth
      Adam Houldsworth almost 10 years
      If you can store the compression ratio along with the compressed stream, you can best-guess + margin for error on the final size and allocate it once as byte[] and avoid using MemoryStream entirely, then trim the array or cut your losses on the wasted space at the end.
    • Aron
      Aron almost 10 years
      I would caution against ANY decompression using MemoryStream. The GC in .NET uses the Large Object Heap for any objects larger than 85 kB, such as a byte[]. This will quickly fragment your memory and will actually lead to bigger problems than the one you are currently facing.
    • Tim Meyer
      Tim Meyer almost 10 years
      @AdamHouldsworth @Aron Thanks for the hint guys, the data was compressed using gzip and thus contains the size of the original data in the last four bytes. I extracted this size and simply allocated a byte[] with the exact size which is required. I'm no longer writing into a MemoryStream now.
  • Alexei Levenkov
    Alexei Levenkov almost 10 years
    +1: very good note about using a custom chunked stream. Note that even using a temporary file stream may be faster than using MemoryStream for large enough operations.
  • Tim Meyer
    Tim Meyer almost 10 years
    Thanks for the math. Just add a definition of what exactly k is, please ;-)
  • Alexei Levenkov
    Alexei Levenkov almost 10 years
    @TimMeyer - added (and fixed sum...)
  • Matthew
    Matthew almost 10 years
    The reason it doubles your allocation is that reallocation is typically slow: you need to allocate new memory, then copy the old memory over to it. The developer that implemented it must have decided speed was more important than efficient use of memory. I agree with this, because MemoryStreams are typically short-lived, so the memory is reclaimed shortly after it's allocated.
  • usr
    usr almost 10 years
    @Matthew using a chain of buffers means that no copying at all is necessary. Probably even faster.
  • Scott Chamberlain
    Scott Chamberlain almost 10 years
    @usr Look at the implementation of List<T>; it has the same capacity problem. However, you would now be copying references to 64 KB byte[] chunks instead of the bytes themselves, so the copy operations would be much more infrequent and smaller. You could remove the copy cost entirely, trading it for slower seeking, by using a LinkedList of those chunks.
  • Tim Meyer
    Tim Meyer almost 10 years
    Accepted this answer as it provides both a detailed answer to my question and additional recommendations on how to save memory. I eventually removed the MemoryStream resultStream completely by extracting the uncompressed data size from the last four bytes of the GZip data (which originates from a GZip file) and creating a byte array with that exact size as the destination; I would have gone with the approach in this answer if that hadn't worked. (A sketch of the footer approach follows these comments.)
  • Mankarse
    Mankarse almost 10 years
    It's worth noting that any resize factor smaller than the golden ratio will lead to the system eventually having enough leftover memory to allocate the new block in the memory used by the previous blocks (assuming the old blocks were in contiguous memory), while any resize factor larger than the golden ratio will ensure that the resize operation will never be able to reuse the memory previously allocated for the buffer. (Assuming no other part of the system is using memory). See this blog post.
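
For reference, a minimal sketch of the approach Tim Meyer describes in the comments above. It assumes the input is a complete gzip stream, whose last four bytes (the ISIZE footer) hold the uncompressed size modulo 2^32 in little-endian order, that the result is smaller than 2 GiB, and that the code runs on a little-endian platform; DecompressKnownSize is a made-up name, and the usual System, System.IO and System.IO.Compression namespaces are assumed:

private static byte[] DecompressKnownSize(byte[] data)
{
    // The last 4 bytes of a gzip stream store the uncompressed size modulo 2^32 (little-endian).
    int size = BitConverter.ToInt32(data, data.Length - 4);

    byte[] result = new byte[size];
    using (MemoryStream compressedStream = new MemoryStream(data))
    using (GZipStream zipStream = new GZipStream(compressedStream, CompressionMode.Decompress))
    {
        int offset = 0;
        int read;
        while (offset < size && (read = zipStream.Read(result, offset, size - offset)) > 0)
        {
            offset += read; // decompress directly into the pre-sized destination array
        }
    }
    return result;
}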