HDF5 taking more space than CSV?
Copy of my answer from the issue: https://github.com/pydata/pandas/issues/3651
Your sample is really too small. HDF5 has a fair amount of overhead at really small sizes (even 300k entries is on the smaller side). The following is with no compression on either side. Floats are simply more efficiently represented in binary than as a text representation.
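To make the binary-versus-text point concrete, here is a quick check (a minimal sketch; the exact text length depends on the value and on formatting options):

import struct

x = 0.123456789012345
print(len(struct.pack('d', x)))  # 8 bytes as a binary float64
print(len(repr(x)))              # 17 characters as decimal text, before any delimiter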
In addition, HDF5 is row based. You get MUCH better efficiency with tables that are not too wide but fairly long. (Hence your example is not very efficient in HDF5 at all; store it transposed in this case.)
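As a minimal sketch of the transposed-storage idea (the file and key names here are illustrative, not from the original answer):

import numpy as np
import pandas as pd

# A wide, short frame like the one in the question: 100 rows x 3000 columns.
wide = pd.DataFrame(np.random.random((100, 3000)))

# Store the transpose so HDF5 sees a long, narrow table (3000 x 100) ...
wide.T.to_hdf('wide_t.h5', 'wide_t', mode='w')

# ... and transpose back after reading.
restored = pd.read_hdf('wide_t.h5', 'wide_t').T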
I routinely have tables that are 10M+ rows, and query times can be in the ms range. Even the example below is small. Having 10+ GB files is quite common (not to mention the astronomy folks, for whom 10 GB+ is a few seconds of data!)
-rw-rw-r-- 1 jreback users 203200986 May 19 20:58 test.csv
-rw-rw-r-- 1 jreback users 88007312 May 19 20:59 test.h5
In [1]: import pandas as pd

In [2]: from numpy.random import randn

In [3]: df = pd.DataFrame(randn(1000000, 10))
In [9]: df
Out[9]:
<class 'pandas.core.frame.DataFrame'>
Int64Index: 1000000 entries, 0 to 999999
Data columns (total 10 columns):
0 1000000 non-null values
1 1000000 non-null values
2 1000000 non-null values
3 1000000 non-null values
4 1000000 non-null values
5 1000000 non-null values
6 1000000 non-null values
7 1000000 non-null values
8 1000000 non-null values
9 1000000 non-null values
dtypes: float64(10)
In [5]: %timeit df.to_csv('test.csv',mode='w')
1 loops, best of 3: 12.7 s per loop
In [6]: %timeit df.to_hdf('test.h5','df',mode='w')
1 loops, best of 3: 825 ms per loop
In [7]: %timeit pd.read_csv('test.csv',index_col=0)
1 loops, best of 3: 2.35 s per loop
In [8]: %timeit pd.read_hdf('test.h5','df')
10 loops, best of 3: 38 ms per loop
I really wouldn't worry about the size (I suspect you are not, but are merely interested, which is fine). The point of HDF5 is that disk is cheap and CPU is cheap, but you can't have everything in memory at once, so we optimize by using chunking.
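As a sketch of what chunked, query-driven access looks like (this uses the table format and the where/chunksize options of HDFStore; the file, key, and column names are illustrative, and format='table' is the modern spelling of what pandas 0.11 called table=True):

import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.randn(1000000, 10), columns=list('abcdefghij'))

# 'table' format is slower to write than the default 'fixed' format,
# but it is appendable and queryable on disk.
df.to_hdf('big.h5', 'df', mode='w', format='table', data_columns=['a'])

# Read back only the rows that match, without loading the whole file.
subset = pd.read_hdf('big.h5', 'df', where='a > 3')

# Or stream the file in pieces instead of reading it all at once.
for chunk in pd.read_hdf('big.h5', 'df', chunksize=100000):
    pass  # process each 100k-row piece here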
Updated on July 25, 2022

Comments

Amelio Vazquez-Reina:
Consider the following example:
Prepare the data:
import string
import random
import numpy as np
import pandas as pd

matrix = np.random.random((100, 3000))
my_cols = [random.choice(string.ascii_uppercase) for x in range(matrix.shape[1])]
mydf = pd.DataFrame(matrix, columns=my_cols)
mydf['something'] = 'hello_world'
Set the highest compression possible for HDF5:
store = pd.HDFStore('myfile.h5', complevel=9, complib='bzip2')
store['mydf'] = mydf
store.close()
Also save it to CSV:
mydf.to_csv('myfile.csv', sep=':')
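One way to compare the resulting sizes programmatically (a small sketch using only the standard library, with the filenames from above):

import os

for path in ('myfile.csv', 'myfile.h5'):
    print(path, os.path.getsize(path) / 1e6, 'MB')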
The result is:
- myfile.csv is 5.6 MB
- myfile.h5 is 11 MB
The difference grows bigger as the datasets get larger.
I have tried other compression methods and levels. Is this a bug? (I am using Pandas 0.11 and the latest stable versions of HDF5 and Python.)