How to know the size in bytes of Python objects like arrays and dictionaries? - The simple way
Solution 1
There's:
>>> import sys
>>> sys.getsizeof([1,2, 3])
96
>>> a = []
>>> sys.getsizeof(a)
72
>>> a = [1]
>>> sys.getsizeof(a)
80
But I wouldn't say it's that reliable, as Python has overhead for each object, and there are objects that contain nothing but references to other objects, so it's not quite the same as in C and other languages.
Have a read of the docs on sys.getsizeof and go from there I guess.
Solution 2
None of the answers here are truly generic.
The following solution will work with any type of object recursively, without the need for an expensive recursive implementation:
import gc
import sys

def get_obj_size(obj):
    marked = {id(obj)}
    obj_q = [obj]
    sz = 0

    while obj_q:
        sz += sum(map(sys.getsizeof, obj_q))

        # Look up all the objects referred to by the objects in obj_q.
        # See: https://docs.python.org/3.7/library/gc.html#gc.get_referents
        all_refr = ((id(o), o) for o in gc.get_referents(*obj_q))

        # Filter out objects that are already marked.
        # Using dict notation will prevent repeated objects.
        new_refr = {o_id: o for o_id, o in all_refr
                    if o_id not in marked and not isinstance(o, type)}

        # The new obj_q will be the objects that were not marked,
        # and we will update marked with their ids so we will
        # not traverse them again.
        obj_q = new_refr.values()
        marked.update(new_refr.keys())

    return sz
For example:
>>> import numpy as np
>>> x = np.random.rand(1024).astype(np.float64)
>>> y = np.random.rand(1024).astype(np.float64)
>>> a = {'x': x, 'y': y}
>>> get_obj_size(a)
16816
See my repository for more information, or simply install my package (objsize):
$ pip install objsize
Then:
>>> from objsize import get_deep_size
>>> get_deep_size(a)
16816
Solution 3
A bit late to the party, but an easy way to get the size of a dict is to pickle it first.
Using sys.getsizeof on a Python object (including a dictionary) may not be exact, since it does not count referenced objects.
The way to handle it is to serialize the object into a string and use sys.getsizeof on the string. The result will be much closer to what you want.

import cPickle
import sys

mydict = {'key1': 'some long string', 'key2': [1, 2, 3], 'key3': {'other': 'data'}}

# Doing sys.getsizeof(mydict) is not exact, so pickle it first.
mydict_as_string = cPickle.dumps(mydict)

# Now we can see how much space the serialized form takes.
print sys.getsizeof(mydict_as_string)
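The snippet above is Python 2 (cPickle, print statement). On Python 3 the same idea works with the built-in pickle module; a minimal sketch, with illustrative values in place of the answer's placeholders:

```python
import pickle
import sys

mydict = {'key1': 'some long string', 'key2': [1, 2, 3], 'key3': {'more': 'data'}}

# Shallow size: the dict structure only, not the strings/lists it references.
shallow = sys.getsizeof(mydict)

# Serialized size: one flat bytes object that covers all the contents
# (plus pickle's own framing overhead, so it remains an approximation).
serialized = sys.getsizeof(pickle.dumps(mydict))

print(shallow, serialized)
```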
Solution 4
Use this recipe, taken from here:
http://code.activestate.com/recipes/577504-compute-memory-footprint-of-an-object-and-its-cont/
from __future__ import print_function
from sys import getsizeof, stderr
from itertools import chain
from collections import deque
try:
    from reprlib import repr
except ImportError:
    pass

def total_size(o, handlers={}, verbose=False):
    """ Returns the approximate memory footprint of an object and all of its contents.

    Automatically finds the contents of the following builtin containers and
    their subclasses: tuple, list, deque, dict, set and frozenset.
    To search other containers, add handlers to iterate over their contents:

        handlers = {SomeContainerClass: iter,
                    OtherContainerClass: OtherContainerClass.get_elements}

    """
    dict_handler = lambda d: chain.from_iterable(d.items())
    all_handlers = {tuple: iter,
                    list: iter,
                    deque: iter,
                    dict: dict_handler,
                    set: iter,
                    frozenset: iter,
                   }
    all_handlers.update(handlers)   # user handlers take precedence
    seen = set()                    # track which object id's have already been seen
    default_size = getsizeof(0)     # estimate sizeof object without __sizeof__

    def sizeof(o):
        if id(o) in seen:           # do not double count the same object
            return 0
        seen.add(id(o))
        s = getsizeof(o, default_size)

        if verbose:
            print(s, type(o), repr(o), file=stderr)

        for typ, handler in all_handlers.items():
            if isinstance(o, typ):
                s += sum(map(sizeof, handler(o)))
                break
        return s

    return sizeof(o)


##### Example call #####

if __name__ == '__main__':
    d = dict(a=1, b=2, c=3, d=[4,5,6,7], e='a string of chars')
    print(total_size(d, verbose=True))
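A comment below asks about user-defined classes. With this recipe you would pass a handler that yields an instance's contents; one plausible handler (the Point class here is purely illustrative) walks the instance __dict__:

```python
class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y

# A handler maps a class to a function yielding its contents, e.g.:
#     total_size(p, handlers={Point: lambda o: vars(o).values()})
# vars(o) is the instance __dict__, so its values are what to recurse into.
p = Point(1.5, 2.5)
print(list(vars(p).values()))  # [1.5, 2.5]
```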
Solution 5
In case you want to measure the size of the body that you will send via e.g. HTTP as JSON, you could convert it to a str first and then count its length. After all, you will send it as text. So:
>>> import json
>>> import sys
>>> my_dict = {"var1": 12345, "var2": "abcde", "var3": 23.43232, "var4": True, "var5": None}
>>> a = json.dumps(my_dict)
>>> len(a)
78
>>> sys.getsizeof(my_dict)
232
>>> sys.getsizeof(a)
127
The total number of characters in the converted object is 78, so on systems where 1 character = 1 byte, 78 bytes would be a reasonable answer, and it seems more accurate than using sys.getsizeof.
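One caveat worth noting: len counts characters, not bytes, and the two diverge as soon as the JSON contains non-ASCII text. Encoding to UTF-8 first gives the actual on-the-wire size:

```python
import json

my_dict = {"var1": 12345, "var2": "abcde", "var3": 23.43232, "var4": True, "var5": None}
payload = json.dumps(my_dict)
print(len(payload))                   # 78 characters
print(len(payload.encode("utf-8")))   # 78 bytes, since everything here is ASCII

# With non-ASCII data, character count and byte count differ:
unicode_payload = json.dumps({"s": "é"}, ensure_ascii=False)
print(len(unicode_payload))                   # 10 characters
print(len(unicode_payload.encode("utf-8")))   # 11 bytes ("é" is 2 bytes in UTF-8)
```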
crandrades
Updated on September 30, 2021

Comments
-
crandrades over 2 years
I was looking for an easy way to know the size in bytes of arrays and dictionaries, like
[ [1,2,3], [4,5,6] ] or { 1: {2: 2} }
Many topics say to use pylab, for example:

from pylab import *
A = array([ [1,2,3], [4,5,6] ])
A.nbytes
24

But what about dictionaries? I saw lots of answers proposing to use pysize or heapy. An easy answer is given by Torsten Marek in this link: Which Python memory profiler is recommended?, but I don't have a clear interpretation of the output because the number of bytes didn't match.
Pysize seems to be more complicated, and I don't have a clear idea of how to use it yet.
Given the simplicity of the size calculation that I want to perform (no classes nor complex structures), any idea about an easy way to get an approximate estimation of the memory usage of this kind of objects?
Kind regards.
-
crandrades over 11 years
I tried that way, but when you try to get the size of a list of lists, you get only the parent list size and not the total with the nested lists. I don't know if, by writing code to do the recursion, I'll get the real memory usage.
-
Jon Clements over 11 years
@user1847706 at the end of the entry I linked you to in the docs, there's "See recursive sizeof recipe for an example of using getsizeof() recursively to find the size of containers and all their contents".
-
crandrades over 11 years
Thanks for your answer. Now, I'm trying to add a handler to calculate memory usage for a user-defined class.
-
jbg over 10 years
This won't tell you the size of the dict; it will tell you the size of the pickle representation of the dict, which will be larger (potentially by a considerable amount) than the in-memory size of the dict.
-
Denis Kanygin over 10 years
@JasperBryant-Greene that's the point. Using sys.getsizeof on a Python object (including a dictionary) may not be exact, since it does not count referenced objects. Serializing it and then getting the size is not exact either, but it will be closer to what you want. Think of it as an approximation.
-
jbg over 10 years
Sure, but the question asks for "a approximate estimation of memory usage of this kind of objects". I think this doesn't even qualify as an approximate estimation of memory usage -- the pickled size will typically be much larger.
-
CoatedMoose about 10 years
This can be a very rough approximation, since it almost completely ignores the overhead of the structure. For example, the size of an empty dict is 280 on my machine, while the size of the dict pickled to a string is 43. The less bulky the data stored, the rougher the approximation is.
-
MarkHu about 8 years
This doesn't seem any better than print len(json.dumps(my_dict))
-
std''OrgnlDave over 7 years
What about pickling and then de-pickling?
-
technomage over 5 years
I found that tuples/namedtuples are much smaller in memory than pickled, with the reverse true for dicts.
-
vangheem over 4 years
This is the best answer if you just need an estimate. json is slow, as are the other options. pickle is about 7x faster than any of the other ways of getting the size of an object in my tests.
-
Qinsheng Zhang almost 4 years
This answer definitely needs more attention! A clean way to calculate the memory usage. Thanks.
-
Liran Funaro almost 4 years
Unfortunately, this answer is wrong. It only accounts for the root object's size. If the list has internal objects (as in the OP's example), it will report the wrong in-memory size.
-
Liran Funaro almost 4 years
Unfortunately, this answer is wrong. It calculates the serialized size of the objects. This has nothing to do with the in-memory representation size of the object. In most cases, the serialized size would be significantly larger due to the robust encoding of the pickle mechanism.
-
0 _ about 3 years
The method numpy.ndarray.nbytes returns the number of bytes consumed by the elements of an array, without the memory consumed by the other attributes of the numpy array. For this reason, the value of the attribute nbytes is slightly smaller than the value returned by sys.getsizeof.
-
Jamie almost 3 years
Just a warning to those who try this with PyTorch: you will need to access the storage attribute on each tensor to get the correct size. See: stackoverflow.com/questions/54361763/…
-
Liran Funaro almost 3 years
@Jamie Did you verify that this solution does not work in the linked example? I don't see why this solution wouldn't recurse to access the storage of the PyTorch object.
-
Jamie almost 3 years
Yes, confirmed. get_obj_size(torch.rand(200, 200)) == get_obj_size(torch.rand(200)) is True. (Both return 64 bytes.)