Read and Write CSV files including unicode with Python 2.7

Solution 1

Another alternative:

Use the code from the unicodecsv package ...

https://pypi.python.org/pypi/unicodecsv/

>>> import unicodecsv as csv
>>> from io import BytesIO
>>> f = BytesIO()
>>> w = csv.writer(f, encoding='utf-8')
>>> _ = w.writerow((u'é', u'ñ'))
>>> _ = f.seek(0)
>>> r = csv.reader(f, encoding='utf-8')
>>> next(r) == [u'é', u'ñ']
True

This module is API-compatible with the stdlib csv module.
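
For files on disk, usage is the same as the stdlib module except that the file is opened in binary mode and an encoding is passed to the reader/writer. A minimal sketch (the file name is just a placeholder):

import unicodecsv as csv

# write unicode rows to a UTF-8 encoded file (binary mode under Python 2)
with open('example.csv', 'wb') as f:
    writer = csv.writer(f, encoding='utf-8')
    writer.writerow([u'German', u'Straße'])
    writer.writerow([u'Chinese', u'中國的'])

# read the rows back; cells come out as unicode strings
with open('example.csv', 'rb') as f:
    reader = csv.reader(f, encoding='utf-8')
    for row in reader:
        print u', '.join(row)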

Solution 2

Make sure you encode and decode as appropriate.

This example round-trips some sample UTF-8 text to a CSV file and back again to demonstrate:

# -*- coding: utf-8 -*-
import csv

tests={'German': [u'Straße',u'auslösen',u'zerstören'], 
       'French': [u'français',u'américaine',u'épais'], 
       'Chinese': [u'中國的',u'英語',u'美國人']}

with open('/tmp/utf.csv', 'w') as fout:
    writer = csv.writer(fout)
    # header row: the language names
    writer.writerow(tests.keys())
    for row in zip(*tests.values()):
        # encode each unicode cell to UTF-8 bytes before writing
        writer.writerow([s.encode('utf-8') for s in row])

with open('/tmp/utf.csv', 'r') as fin:
    reader = csv.reader(fin)
    for row in reader:
        fmt = u'{:<15}' * len(row)
        # decode each UTF-8 byte string back to unicode before printing
        print fmt.format(*[s.decode('utf-8') for s in row])

Prints:

German         Chinese        French         
Straße         中國的            français       
auslösen       英語             américaine     
zerstören      美國人            épais  

Solution 3

There is an example at the end of the csv module documentation that demonstrates how to deal with Unicode. Below is copied directly from that example. Note that the strings read or written will be Unicode strings. Don't pass a byte string to UnicodeWriter.writerows, for example.

import csv, codecs, cStringIO

class UTF8Recoder:
    """Iterator that reads an encoded stream and re-encodes the input to UTF-8."""
    def __init__(self, f, encoding):
        self.reader = codecs.getreader(encoding)(f)
    def __iter__(self):
        return self
    def next(self):
        return self.reader.next().encode("utf-8")

class UnicodeReader:
    def __init__(self, f, dialect=csv.excel, encoding="utf-8-sig", **kwds):
        f = UTF8Recoder(f, encoding)
        self.reader = csv.reader(f, dialect=dialect, **kwds)
    def next(self):
        '''next() -> unicode
        This function reads and returns the next line as a Unicode string.
        '''
        row = self.reader.next()
        return [unicode(s, "utf-8") for s in row]
    def __iter__(self):
        return self

class UnicodeWriter:
    def __init__(self, f, dialect=csv.excel, encoding="utf-8-sig", **kwds):
        self.queue = cStringIO.StringIO()
        self.writer = csv.writer(self.queue, dialect=dialect, **kwds)
        self.stream = f
        self.encoder = codecs.getincrementalencoder(encoding)()
    def writerow(self, row):
        '''writerow(unicode) -> None
        This function takes a Unicode string and encodes it to the output.
        '''
        self.writer.writerow([s.encode("utf-8") for s in row])
        data = self.queue.getvalue()
        data = data.decode("utf-8")
        data = self.encoder.encode(data)
        self.stream.write(data)
        self.queue.truncate(0)

    def writerows(self, rows):
        for row in rows:
            self.writerow(row)

with open('xxx.csv','rb') as fin, open('lll.csv','wb') as fout:
    reader = UnicodeReader(fin)
    writer = UnicodeWriter(fout,quoting=csv.QUOTE_ALL)
    for line in reader:
        writer.writerow(line)

Input (UTF-8 encoded):

American,美国人
French,法国人
German,德国人

Output:

"American","美国人"
"French","法国人"
"German","德国人"

Solution 4

In Python 2, str is actually bytes, so if you want to write unicode to a CSV file, you must encode the unicode to str using the UTF-8 codec.

def py2_unicode_to_str(u):
    # the unicode type only exists in Python 2
    assert isinstance(u, unicode)
    return u.encode('utf-8')
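
To see why the encoding step is needed: Python 2's csv module only handles byte strings, so handing it a non-ASCII unicode string makes it fall back to the default ascii codec and fail. A rough interactive sketch of the failure and the fix:

>>> import csv
>>> from io import BytesIO
>>> f = BytesIO()
>>> w = csv.writer(f)
>>> _ = w.writerow([u'Straße'])                  # non-ASCII unicode -> error
Traceback (most recent call last):
  ...
UnicodeEncodeError: 'ascii' codec can't encode character u'\xdf' ...
>>> _ = w.writerow([u'Straße'.encode('utf-8')])  # encode to UTF-8 bytes first
>>> f.getvalue()
'Stra\xc3\x9fe\r\n'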

Use class csv.DictWriter(csvfile, fieldnames, restval='', extrasaction='raise', dialect='excel', *args, **kwds):

  • Python 2
    • The csvfile: open(fp, 'w')
    • Pass keys and values as bytes encoded with UTF-8
      • writer.writerow({py2_unicode_to_str(k): py2_unicode_to_str(v) for k, v in row.items()})
  • Python 3
    • The csvfile: open(fp, 'w')
    • Pass a normal dict containing str values as the row to writer.writerow(row)

Final code:

import csv
import sys

is_py2 = sys.version_info[0] == 2

def py2_unicode_to_str(u):
    # the unicode type only exists in Python 2
    assert isinstance(u, unicode)
    return u.encode('utf-8')

with open('file.csv', 'w') as f:
    if is_py2:
        data = [{u'Python中国': u'Python中国', u'Python中国2': u'Python中国2'}]

        # just one more step: encode every key and value to UTF-8 bytes
        data = [{py2_unicode_to_str(k): py2_unicode_to_str(v) for k, v in row.items()}
                for row in data]

        fields = list(data[0])
        writer = csv.DictWriter(f, fieldnames=fields)

        for row in data:
            writer.writerow(row)
    else:
        data = [{'Python中国': 'Python中国', 'Python中国2': 'Python中国2'}]

        fields = list(data[0])
        writer = csv.DictWriter(f, fieldnames=fields)

        for row in data:
            writer.writerow(row)

Conclusion

In Python 3, just use str, which is already Unicode.

In Python 2, use unicode to handle text and convert to str (bytes) whenever I/O occurs.
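
A minimal Python 2 sketch of that pattern, decoding at the read boundary and encoding again at the write boundary (the file names and the upper-casing step are just placeholders):

# -*- coding: utf-8 -*-
import csv

with open('in.csv', 'rb') as fin, open('out.csv', 'wb') as fout:
    reader = csv.reader(fin)
    writer = csv.writer(fout)
    for row in reader:
        # bytes -> unicode for any text handling
        cells = [cell.decode('utf-8') for cell in row]
        cells = [cell.upper() for cell in cells]
        # unicode -> bytes again when writing
        writer.writerow([cell.encode('utf-8') for cell in cells])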

Solution 5

I couldn't respond to Mark above, but I made one modification that fixes the error raised when cell data is not unicode, e.g. float or int data. In UnicodeWriter.writerow I replaced the encoding line with "self.writer.writerow([s.encode("utf-8") if type(s)==types.UnicodeType else s for s in row])", so that the class became:

class UnicodeWriter:
    def __init__(self, f, dialect=csv.excel, encoding="utf-8-sig", **kwds):
        self.queue = cStringIO.StringIO()
        self.writer = csv.writer(self.queue, dialect=dialect, **kwds)
        self.stream = f
        self.encoder = codecs.getincrementalencoder(encoding)()
    def writerow(self, row):
        '''writerow(unicode) -> None
        This function takes a Unicode string and encodes it to the output.
        '''
        self.writer.writerow([s.encode("utf-8") if type(s)==types.UnicodeType else s for s in row])
        data = self.queue.getvalue()
        data = data.decode("utf-8")
        data = self.encoder.encode(data)
        self.stream.write(data)
        self.queue.truncate(0)

    def writerows(self, rows):
        for row in rows:
            self.writerow(row)

You will also need to "import types".
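
With that change, rows can mix unicode text with numbers; a small usage sketch (assuming the imports from Solution 3 plus import types):

with open('mixed.csv', 'wb') as fout:
    writer = UnicodeWriter(fout, quoting=csv.QUOTE_ALL)
    # ints and floats pass through unchanged; unicode cells are encoded
    writer.writerow([u'Straße', 42, 3.14])
    writer.writerow([u'美国人', 7, 0.5])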

Author: Ruxuan Ouyang

Updated on July 05, 2022

Comments

  • Ruxuan Ouyang
    Ruxuan Ouyang almost 2 years

    I am new to Python, and I have a question about how to use Python to read and write CSV files. My file contains text in languages like German, French, etc. According to my code, the file can be read correctly in Python, but when I write it into a new CSV file, the unicode turns into strange characters.

    The data is like:
    [screenshot of the input CSV data]

    And my code is:

    import csv

    f = open('xxx.csv', 'rb')
    reader = csv.reader(f)

    wt = open('lll.csv', 'wb')
    writer = csv.writer(wt, quoting=csv.QUOTE_ALL)

    # copy every row from the input file to the output file
    for row in reader:
        writer.writerow(row)

    wt.close()
    f.close()


    And the result is like:
    [screenshot of the garbled output]

    What should I do to solve the problem?

  • Ahsan
    Ahsan over 9 years
    I am still getting UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 1: ordinal not in range(128) on the line self.writer.writerow([s.encode("utf-8") for s in row]). Any suggestions?
  • Mark Tolonen
    Mark Tolonen over 9 years
    @Ahsan, that row is encoding but the error is UnicodeDecodeError. It implies that s was not Unicode to begin with, so Python 2.X is decoding it to Unicode using the default ascii codec. Make sure you are passing Unicode strings to UnicodeWriter.
  • Ahsan
    Ahsan over 9 years
    Yep, this was exactly the reason. I managed to solve it using this link: stackoverflow.com/a/22734072/534790. Thanks! Can you please update the answer in case someone else faces the same issue?
  • keybits
    keybits over 9 years
    Thanks! This is the simple way to do it.
  • Subir
    Subir almost 9 years
    Basically, as long as everything is encoded as Unicode, it works just fine. Thanks for driving the point home without a huge wall of code!
  • doncherry
    doncherry over 7 years
    Thank you so much, this is really helpful! Let me see if I understood the way it works: Even if you store your strings in Python like u'Straße', they’re still (escaped as) ASCII internally (u'Stra\xdfe'), so that you have to translate/encode everything into UTF-8 (escaped strings) ('Stra\xc3\x9fe') before writing them to a UTF-8 encoded file?
  • dawg
    dawg over 7 years
    @doncherry: No, the strings are internally represented as they are encoded. If you see them as escaped ascii, that is the representation at the time or the way you need to input them.
  • RandomEli
    RandomEli over 5 years
    This lib is amazing.