Pandas ParserError EOF character when reading multiple csv files to HDF5
Solution 1
I had a similar problem. The line reported in the 'EOF inside string' error contained a string with a single quote mark inside it. Adding the option quoting=csv.QUOTE_NONE fixed my problem.
For example:
import csv
df = pd.read_csv(csvfile, header=None, delimiter="\t", quoting=csv.QUOTE_NONE, encoding='utf-8')
Solution 2
I had the same problem, and after adding these two parameters to my code the problem was gone.
pd.read_csv(...,
            quoting=3,
            error_bad_lines=False)
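For context, quoting=3 is the numeric value of csv.QUOTE_NONE. A fuller sketch of the same call follows; the file name is hypothetical, and note that on pandas 1.3+ error_bad_lines has been replaced by on_bad_lines.
import csv
import pandas as pd

# quoting=3 is the same as csv.QUOTE_NONE: never treat the quote character specially,
# and skip (rather than fail on) lines that cannot be tokenized.
df = pd.read_csv('data.csv', quoting=csv.QUOTE_NONE, error_bad_lines=False)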
Solution 3
I realize this is an old question, but I wanted to share some more details on the root cause of this error and why the solution from @Selah works.
From the csv.py docstring:
* quoting - controls when quotes should be generated by the writer.
It can take on any of the following module constants:
csv.QUOTE_MINIMAL means only when required, for example, when a
field contains either the quotechar or the delimiter
csv.QUOTE_ALL means that quotes are always placed around fields.
csv.QUOTE_NONNUMERIC means that quotes are always placed around
fields which do not parse as integers or floating point
numbers.
csv.QUOTE_NONE means that quotes are never placed around fields.
csv.QUOTE_MINIMAL is the default value and " is the default quotechar. If somewhere in your csv file you have a quotechar, it will be parsed as a string until another occurrence of the quotechar. If your file has an odd number of quotechars, the last one will not be closed before reaching the EOF (end of file). Also be aware that anything between the quotechars is parsed as a single string. Even if there are many line breaks (expected to be parsed as separate rows), it all goes into a single field of the table. So the line number that you get in the error can be misleading. To illustrate with an example, consider this:
In[4]: import pandas as pd
...: from io import StringIO
...: test_csv = '''a,b,c
...: "d,e,f
...: g,h,i
...: "m,n,o
...: p,q,r
...: s,t,u
...: '''
...:
In[5]: test = StringIO(test_csv)
In[6]: pd.read_csv(test)
Out[6]:
a b c
0 d,e,f\ng,h,i\nm n o
1 p q r
2 s t u
In[7]: test_csv_2 = '''a,b,c
...: "d,e,f
...: g,h,i
...: "m,n,o
...: "p,q,r
...: s,t,u
...: '''
...: test_2 = StringIO(test_csv_2)
...:
In[8]: pd.read_csv(test_2)
Traceback (most recent call last):
...
...
pandas.errors.ParserError: Error tokenizing data. C error: EOF inside string starting at line 2
The first string has 2 (even) quotechars. So each quotechar is closed and the csv is parsed without an error, although probably not as we expected. The other string has 3 (odd) quotechars. The last one is not closed and the EOF is reached, hence the error. But line 2 that we get in the error message is misleading. We would expect 4, but since everything between the first and second quotechar is parsed as a string, our "p,q,r line is actually the second.
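If you need to locate the offending quote programmatically rather than by eye, a minimal sketch along these lines can help. The function name and file name are hypothetical, and escaped or doubled quotes are not handled.
def find_unclosed_quote(path, quotechar='"'):
    # Track the running parity of quote characters per physical line.
    # If the count is still odd at EOF, return the line where the
    # unterminated quote was opened; otherwise return None.
    count = 0
    opened_at = None
    with open(path, encoding='utf-8') as fh:
        for lineno, line in enumerate(fh, start=1):
            count += line.count(quotechar)
            if count % 2 == 1:
                if opened_at is None:
                    opened_at = lineno
            else:
                opened_at = None
    return opened_at

print(find_unclosed_quote('2013-06-01.csv'))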
Solution 4
Making your inner loop like this will allow you to detect the 'bad' file (and investigate further):
import pandas as pd
from pandas.io import parser

def to_hdf():
    .....
    # Reading csv files from list_files function
    for f in list_files():
        # Creating reader in chunks -- reduces memory load
        try:
            reader = pd.read_csv(f, chunksize=50000)
            # Looping over chunks and storing them in store file, node name 'ta_data'
            for chunk in reader:
                chunk.to_hdf(store, 'ta_data', table=True)
        except parser.CParserError as detail:
            print(f, detail)
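On recent pandas versions the exception class lives in pandas.errors (CParserError is an older alias of ParserError), and table=True has been superseded by format='table'. An equivalent sketch, assuming list_files() and store (a pd.HDFStore) exist as in the question, would be:
import pandas as pd
from pandas.errors import ParserError

for f in list_files():
    try:
        for chunk in pd.read_csv(f, chunksize=50000):
            # format='table' with append=True accumulates chunks under one node
            chunk.to_hdf(store, 'ta_data', format='table', append=True)
    except ParserError as detail:
        print(f, detail)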
Solution 5
The solution is to pass the parameter engine='python' to the read_csv function. The pandas CSV parser can use two different "engines" to parse a CSV file: Python or C (the default).
pandas.read_csv(filepath, sep=',', delimiter=None,
header='infer', names=None,
index_col=None, usecols=None, squeeze=False,
..., engine=None, ...)
The Python engine is described as "slower, but is more feature complete" in the pandas documentation:
engine : {'c', 'python'}
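A minimal sketch (the file name is hypothetical):
import pandas as pd

# The Python engine is slower than the default C engine, but it tolerates some
# input that makes the C tokenizer raise "EOF inside string".
df = pd.read_csv('data.csv', engine='python')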
Matthijs
Updated on February 18, 2022
Comments
-
Matthijs about 2 years
Using Python3, Pandas 0.12
I'm trying to write multiple csv files (total size is 7.9 GB) to a HDF5 store to process later on. The csv files contain around a million rows each and 15 columns; data types are mostly strings, but some floats. However, when I try to read the csv files I get the following error:
Traceback (most recent call last):
  File "filter-1.py", line 38, in <module>
    to_hdf()
  File "filter-1.py", line 31, in to_hdf
    for chunk in reader:
  File "C:\Python33\lib\site-packages\pandas\io\parsers.py", line 578, in __iter__
    yield self.read(self.chunksize)
  File "C:\Python33\lib\site-packages\pandas\io\parsers.py", line 608, in read
    ret = self._engine.read(nrows)
  File "C:\Python33\lib\site-packages\pandas\io\parsers.py", line 1028, in read
    data = self._reader.read(nrows)
  File "parser.pyx", line 706, in pandas.parser.TextReader.read (pandas\parser.c:6745)
  File "parser.pyx", line 740, in pandas.parser.TextReader._read_low_memory (pandas\parser.c:7146)
  File "parser.pyx", line 781, in pandas.parser.TextReader._read_rows (pandas\parser.c:7568)
  File "parser.pyx", line 768, in pandas.parser.TextReader._tokenize_rows (pandas\parser.c:7451)
  File "parser.pyx", line 1661, in pandas.parser.raise_parser_error (pandas\parser.c:18744)
pandas.parser.CParserError: Error tokenizing data. C error: EOF inside string starting at line 754991
Closing remaining open files: ta_store.h5... done
Edit:
I managed to find a file that produces this problem. I think it's reading an EOF character. However, I have no clue how to overcome this problem. Given the large size of the combined files I think it's too cumbersome to check each single character in each string. (Even then I would still not be sure what to do.) As far as I checked, there are no strange characters in the csv files that could raise the error. I also tried passing error_bad_lines=False to pd.read_csv(), but the error persists. My code is the following:
# -*- coding: utf-8 -*-
import pandas as pd
import os
from glob import glob

def list_files(path=os.getcwd()):
    ''' List all files in specified path '''
    list_of_files = [f for f in glob('2013-06*.csv')]
    return list_of_files

def to_hdf():
    """ Function that reads multiple csv files to HDF5 Store """
    # Defining path name
    path = 'ta_store.h5'
    # If path exists delete it such that a new instance can be created
    if os.path.exists(path):
        os.remove(path)
    # Creating HDF5 Store
    store = pd.HDFStore(path)
    # Reading csv files from list_files function
    for f in list_files():
        # Creating reader in chunks -- reduces memory load
        reader = pd.read_csv(f, chunksize=50000)
        # Looping over chunks and storing them in store file, node name 'ta_data'
        for chunk in reader:
            chunk.to_hdf(store, 'ta_data', mode='w', table=True)
    # Return store
    return store.select('ta_data')
    return 'Finished reading to HDF5 Store, continuing processing data.'

to_hdf()
Edit
If I go into the CSV file that raises the CParserError EOF... and manually delete all rows after the line that is causing the problem, the csv file is read properly. However, all I'm deleting are blank rows anyway. The weird thing is that when I manually correct the erroneous csv files, they are loaded fine into the store individually. But when I again use a list of multiple files, the 'bad' files still return errors.
-
Matthijs almost 11 years: Hi Jeff, thanks! It works and I did find out which files/lines are causing the problem. Now I can try to 'correct' those files manually, but I would rather have a more programmatic solution. So I need to understand what the error I'm getting actually means and what kind of code I should write to automatically take care of the problem.
-
Jeff almost 11 years: you could try specifying a lineterminator (which is essentially \n on linux, or \n\r on windows I think). At worst you get a bad line (as the invalid terminator is put in the next line)... but need to see what's wrong in the first place: pandas.pydata.org/pandas-docs/dev/io.html#csv-text-files
-
Matthijs almost 11 years: The weird thing is that when I manually correct the erroneous csv files, they are loaded fine into the store individually. But when I again use glob to read a bunch of files, these files still return errors.
-
Jeff almost 11 years: that is weird about glob; I personally use something like for f in os.listdir(dir): if is_ok(f): process_file(f), where is_ok is a function to accept/reject the filename (it could also be other criteria, or a re.search).
-
Matthijs almost 11 years: Thanks Jeff, I will try something like that. Also, I could not specify the lineterminator to be \n\r; I received an error message that only 1-length lineterminators can be used (as also raised here: github.com/pydata/pandas/issues/3501).
-
Jeff almost 11 years: that's by definition; you could simply substitute the line ending in your file with some other character; your file is corrupt somehow.
-
Yulong over 9 years: on a side note, I think the first line of code should be from pandas import parser instead of from pandas.io import parser? The latter does not work with my pandas 0.15.0.
-
DACW almost 7 years: this is an optimal solution.
-
Ayush Vatsyayan about 6 years: This doesn't work; even after upgrading to pandas-0.22.0 I'm getting the same error.
-
Ayush Vatsyayan about 6 years: This works like a charm. There was an error in one line. After executing with the above option I got the following message:
Skipping line 192: expected 5 fields, saw 74
-
Vaibhav Magon over 2 years: Awesome! This works perfectly.
-
Vikranth about 2 years: This worked for me, but it would be great if anyone could explain why it works.
-
Vikranth about 2 years: This one made me skip too many rows, while engine="python", error_bad_lines=False made me skip only one.