Direct buffer memory


Solution 1

The actual memory buffers managed by DirectByteBuffer are not allocated in the heap. They are allocated using Unsafe.allocateMemory, which allocates "native memory". So increasing or decreasing the heap size won't help.

When the GC detects that a DirectByteBuffer is no longer referenced, a Cleaner is used to free the native memory. However, this happens in the post-collection phase, so if the demand for / turnover of direct buffers is too great, it is possible that the collector won't be able to keep up. If that happens, you get an OOME.
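To see why the heap size is irrelevant, here is a minimal sketch (not taken from the question) that triggers this exact error no matter how large -Xmx is, because the buffers stay reachable and their Cleaners never get a chance to run:

    import java.nio.ByteBuffer;
    import java.util.ArrayList;
    import java.util.List;

    // Minimal sketch: direct buffers live outside the Java heap, so this loop
    // eventually throws "OutOfMemoryError: Direct buffer memory" even with a
    // very large -Xmx, as long as the buffers remain reachable.
    public class DirectBufferDemo {
        public static void main(String[] args) {
            List<ByteBuffer> buffers = new ArrayList<>();
            while (true) {
                // 64 MB of native memory per iteration; this counts against
                // -XX:MaxDirectMemorySize, not against the heap.
                buffers.add(ByteBuffer.allocateDirect(64 * 1024 * 1024));
            }
        }
    }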


What can you do about it?

AFAIK, the only thing you can do is to force more frequent garbage collections. But that can have performance implications. And I don't think it is a guaranteed solution.

The real solution is to take a different approach.

It seems that you are serving up lots of very large files from a webserver, and the stack trace shows that you are using Files::readAllBytes to load them into memory and then (presumably) send them with a single write. Presumably you are doing this to get the fastest possible download times. This is a mistake:

  • You are tying down a lot of memory (multiples of the file size) and stressing the garbage collector. This leads to more GC runs and occasional OOMEs. It is also potentially affecting other applications on your server in various ways.

  • The bottleneck for transferring the file is probably not the process of reading data from disk. (The real bottleneck is typically sending the data via a TCP stream over the network, or writing it to the file system on the client end.)

  • If you are reading a large file sequentially, a modern Linux OS will typically read ahead a number of disk blocks and hold them in the (OS) buffer cache. This reduces the latency of the read syscalls made by your application.

So, for files of this size, a better idea is to stream the file. Either allocate a large (a few megabytes) ByteBuffer and read / write in a loop, or copy the file using Files::copy(...) (javadoc), which takes care of the buffering for you.
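Here is a rough sketch of the streaming approach. It assumes you have access to the response OutputStream (for example from a servlet or a streaming response body); the names path and out are placeholders, not taken from the question:

    import java.io.IOException;
    import java.io.InputStream;
    import java.io.OutputStream;
    import java.nio.file.Files;
    import java.nio.file.Path;

    public class FileStreaming {

        // Option 1: let the JDK handle the buffering.
        static void copyWhole(Path path, OutputStream out) throws IOException {
            Files.copy(path, out);
        }

        // Option 2: an explicit read/write loop with a fixed-size buffer,
        // so only a few megabytes are held in memory at any time.
        static void copyLoop(Path path, OutputStream out) throws IOException {
            try (InputStream in = Files.newInputStream(path)) {
                byte[] buffer = new byte[4 * 1024 * 1024];
                int n;
                while ((n = in.read(buffer)) > 0) {
                    out.write(buffer, 0, n);
                }
            }
        }
    }

Either way, the per-request memory footprint stays at roughly the buffer size rather than the full 670mb file.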

(There is also the option of using something that maps to the Linux sendfile syscall, such as FileChannel::transferTo. This copies data from one file descriptor to another without passing it through a user-space buffer.)
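A hedged sketch of that zero-copy approach using FileChannel::transferTo; the target channel is assumed to wrap the response (e.g. via Channels.newChannel(out)) and is not from the question. Note that the JDK only uses sendfile when the target is a suitable channel (such as a socket channel); otherwise it falls back to an internal copy:

    import java.io.IOException;
    import java.nio.channels.FileChannel;
    import java.nio.channels.WritableByteChannel;
    import java.nio.file.Path;
    import java.nio.file.StandardOpenOption;

    public class FileTransfer {
        static void transferFile(Path path, WritableByteChannel target) throws IOException {
            try (FileChannel source = FileChannel.open(path, StandardOpenOption.READ)) {
                long position = 0;
                long size = source.size();
                // transferTo may transfer fewer bytes than requested, so loop.
                while (position < size) {
                    position += source.transferTo(position, size - position, target);
                }
            }
        }
    }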

Solution 2

You could also try raising the limit on direct buffer memory with the JVM option -XX:MaxDirectMemorySize. The Java docs are not very detailed about this parameter, but according to this page it defaults to 64MB unless you have specified the -Xmx flag. So if you haven't set that flag, the permitted amount of direct memory may be too small. Or, if you have a very large file and have set -Xmx, the derived 2GB limit may still be too small and you could benefit from setting a larger limit manually.
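For example, a hypothetical invocation (the jar name and sizes are placeholders) might look like this:

    # Raise the direct-memory limit alongside the existing heap setting.
    java -Xmx4096m -XX:MaxDirectMemorySize=1g -jar your-app.jar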

All in all, the better approach is probably to stream the file as suggested by Stephen C.


Comments

  • Stephan Stahlmann, almost 2 years ago

    I need to return a rather large file from a web request. The file is around 670mb in size. For the most part this works fine, but after some time the following error is thrown:

    java.lang.OutOfMemoryError: Direct buffer memory
        at java.nio.Bits.reserveMemory(Bits.java:694) ~[na:1.8.0_162]
        at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123) ~[na:1.8.0_162]
        at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311) ~[na:1.8.0_162]
        at sun.nio.ch.Util.getTemporaryDirectBuffer(Util.java:241) ~[na:1.8.0_162]
        at sun.nio.ch.IOUtil.read(IOUtil.java:195) ~[na:1.8.0_162]
        at sun.nio.ch.FileChannelImpl.read(FileChannelImpl.java:159) ~[na:1.8.0_162]
        at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:65) ~[na:1.8.0_162]
        at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:109) ~[na:1.8.0_162]
        at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:103) ~[na:1.8.0_162]
        at java.nio.file.Files.read(Files.java:3105) ~[na:1.8.0_162]
        at java.nio.file.Files.readAllBytes(Files.java:3158) ~[na:1.8.0_162]
    

    I have set the heap size to 4096mb, which I think should be large enough to handle these kinds of files. Furthermore, when this error occurred I took a heap dump with jmap to analyze the current state. I found two rather large byte[], which should be the file I want to return. But the heap is only around 1.6gb in size, nowhere near the configured 4gb it could grow to.

    According to some other answer (https://stackoverflow.com/a/39984276/5126654) in a similar question I tried running manual gc before returning this file. The problem still occured but now only spardic. The problem occured after some time, but then when I tired running the same request again it seems like the garbage collection took care of whatever caused the problem, but this is not sufficient since the problem apparently still can occur. Is there some other way to avoid this memory problem?