Why does pickle eat memory?

Pickle consumes a lot of RAM; see the explanation here: http://www.shocksolution.com/2010/01/storing-large-numpy-arrays-on-disk-python-pickle-vs-hdf5/

Why does Pickle consume so much more memory? The reason is that HDF is a binary data pipe, while Pickle is an object serialization protocol. Pickle actually consists of a simple virtual machine (VM) that translates an object into a series of opcodes and writes them to disk. To unpickle something, the VM reads and interprets the opcodes and reconstructs an object. The downside of this approach is that the VM has to construct a complete copy of the object in memory before it writes it to disk.
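To see that opcode stream concretely, you can disassemble a pickle with the standard-library pickletools module (the exact opcodes printed depend on your Python version and pickle protocol):

    import pickle
    import pickletools

    data = {'a': [1, 2, 3]}
    payload = pickle.dumps(data)

    # Prints one line per opcode that the unpickling VM will interpret
    # to rebuild the object.
    pickletools.dis(payload)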

Pickle is great for small use cases or testing because in most cases the memory consumption doesn't matter much.
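For example, a typical small-scale round trip looks like this (the file name is illustrative):

    import pickle

    obj = {'weights': [0.1, 0.2, 0.3]}

    # Dump to disk and load back; fine when the objects are small.
    with open('obj.pkl', 'wb') as f:
        pickle.dump(obj, f)

    with open('obj.pkl', 'rb') as f:
        restored = pickle.load(f)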

For intensive work where you have to dump and load many files and/or big files, you should consider another way to store your data (e.g. HDF, writing your own serialize/deserialize methods for your objects, ...), as sketched below.
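A minimal sketch of the HDF route, assuming h5py and NumPy are installed (the file and dataset names are illustrative):

    import numpy as np
    import h5py

    arr = np.random.rand(1000, 1000)

    # HDF5 streams the raw binary buffer to disk; no serialization VM
    # has to build a full in-memory copy of the data first.
    with h5py.File('data.h5', 'w') as f:
        f.create_dataset('mydataset', data=arr)

    with h5py.File('data.h5', 'r') as f:
        restored = f['mydataset'][:]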

Author: Gill Bates

Updated on June 25, 2022

Comments

  • Gill Bates, about 2 years ago

    I'm trying to write a huge amount of pickled data to disk in small pieces. Here is the example code:

    # Python 2; the @profile decorator is provided by the third-party
    # memory_profiler module
    from cPickle import dumps
    from gc import collect

    PATH = r'd:\test.dat'

    @profile
    def func(item):
        for e in item:
            f = open(PATH, 'ab', 0)  # append, binary (pickle data is binary), unbuffered
            f.write(dumps(e))
            f.flush()
            f.close()
            del f
            collect()  # force a full garbage collection on every iteration

    if __name__ == '__main__':
        k = range(9999)  # a plain list of ints in Python 2
        func(k)
    

    open() and close() are placed inside the loop to rule out possible causes of data accumulating in memory.

    To illustrate the problem, I attach memory-profiling results obtained with the third-party Python module memory_profiler:

       Line #    Mem usage  Increment   Line Contents
    ==============================================
        14                           @profile
        15      9.02 MB    0.00 MB   def func(item):
        16      9.02 MB    0.00 MB       path= r'd:\test.dat'
        17
        18     10.88 MB    1.86 MB       for e in item:
        19     10.88 MB    0.00 MB           f = open(path, 'a', 0)
        20     10.88 MB    0.00 MB           f.write(dumps(e))
        21     10.88 MB    0.00 MB           f.flush()
        22     10.88 MB    0.00 MB           f.close()
        23     10.88 MB    0.00 MB           del f
        24                                   collect()
    

    During execution of the loop, strange memory usage growth occurs. How can it be eliminated? Any thoughts?

    As the amount of input data increases, the volume of this extra data can grow much larger than the input itself (upd: in my real task I get 300+ MB).

    And a broader question: what are the proper ways to work with large amounts of I/O data in Python?

    upd: I rewrote the code, leaving only the loop body, to see exactly when the growth happens. Here are the results:

    Line #    Mem usage  Increment   Line Contents
    ==============================================
        14                           @profile
        15      9.00 MB    0.00 MB   def func(item):
        16      9.00 MB    0.00 MB       path= r'd:\test.dat'
        17
        18                               #for e in item:
        19      9.02 MB    0.02 MB       f = open(path, 'a', 0)
        20      9.23 MB    0.21 MB       d = dumps(item)
        21      9.23 MB    0.00 MB       f.write(d)
        22      9.23 MB    0.00 MB       f.flush()
        23      9.23 MB    0.00 MB       f.close()
        24      9.23 MB    0.00 MB       del f
        25      9.23 MB    0.00 MB       collect()
    

    It seems like dumps() is what eats the memory. (I actually expected it to be write().)
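
    Pickling directly to the file object avoids building the intermediate string that dumps() creates; a sketch of that approach (whether it removes the growth here would need re-profiling):

    from cPickle import Pickler

    def func(item, path):
        f = open(path, 'ab')  # append in binary mode
        p = Pickler(f, 2)     # protocol 2; opcodes go straight to the file
        for e in item:
            p.dump(e)
            p.clear_memo()    # drop the pickler's memo of already-seen objects
        f.close()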

  • Tushar Seth, over 4 years ago
    Does it load the data into CPU memory or GPU memory? Will it release the memory on its own immediately after it's dumped to the file? What I have seen is that it fills up the GPU memory and doesn't release it even after the dump.
  • A Merii, over 4 years ago
    @TusharSeth I think I am facing the same problem, as highlighted in the question I asked today. Did you manage to find a solution?