PySpark — UnicodeEncodeError: 'ascii' codec can't encode character
Solution 1
https://issues.apache.org/jira/browse/SPARK-11772 discusses this issue and gives a solution: run
export PYTHONIOENCODING=utf8
before starting pyspark.
I wondered why this works, since sys.getdefaultencoding() already returned utf-8 for me without it. The reason is that these are two independent settings: sys.getdefaultencoding() governs implicit string conversions, while PYTHONIOENCODING sets the encoding of sys.stdout — and sys.stdout.encoding is what print() (and therefore df.show()) actually uses.
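A minimal sketch of that distinction: the default encoding is fixed at utf-8 on Python 3, but the stdout encoding is a per-process setting that PYTHONIOENCODING overrides. Spawning a child interpreter with the variable set shows the effect without touching the current process:

```python
import os
import subprocess
import sys

# The default (implicit-conversion) encoding: always 'utf-8' on Python 3,
# regardless of the environment.
print(sys.getdefaultencoding())

# sys.stdout.encoding is a separate setting. Start a child interpreter
# with PYTHONIOENCODING=utf8 and read back what its stdout reports.
env = dict(os.environ, PYTHONIOENCODING="utf8")
out = subprocess.run(
    [sys.executable, "-c", "import sys; print(sys.stdout.encoding)"],
    env=env, capture_output=True, text=True,
).stdout.strip()
print(out)  # some spelling of UTF-8, whatever the terminal's default was
```

On Python 2 (which the traceback below comes from), sys.stdout.encoding could fall back to ASCII when stdout was not a terminal, which is exactly when the export helps.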
How to set sys.stdout encoding in Python 3? also talks about this and gives the following solution for Python 3:
import sys
sys.stdout = open(sys.stdout.fileno(), mode='w', encoding='utf8', buffering=1)
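To see why that reassignment helps, here is a self-contained sketch using in-memory byte streams instead of the real stdout: writing non-ASCII text through an ASCII text layer raises the same UnicodeEncodeError, while rewrapping the underlying byte stream with UTF-8 (the same move the line above performs on sys.stdout) makes the write succeed.

```python
import io

# A text layer configured with ASCII, as a misconfigured stdout can be:
# writing 'åäö' through it fails with UnicodeEncodeError.
ascii_out = io.TextIOWrapper(io.BytesIO(), encoding="ascii")
try:
    ascii_out.write("åäö")
    ascii_out.flush()
    failed = False
except UnicodeEncodeError:
    failed = True
print("ascii wrapper failed:", failed)

# The same bytes sink wrapped with UTF-8 accepts the text.
raw = io.BytesIO()
utf8_out = io.TextIOWrapper(raw, encoding="utf8")
utf8_out.write("åäö")
utf8_out.flush()
print(raw.getvalue())  # b'\xc3\xa5\xc3\xa4\xc3\xb6'
```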
Solution 2
import sys
reload(sys)
sys.setdefaultencoding('utf-8')
This works for me: the encoding is set up front and remains in effect throughout the script.
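Note that Solution 2 is Python 2 only: there, reload() is a builtin and sys.setdefaultencoding() exists (site.py deletes it at startup, which is why the reload is needed). On Python 3 neither is available, and the default encoding is already UTF-8, so the trick is both impossible and unnecessary, as a quick check shows:

```python
import sys

# On Python 3, setdefaultencoding() no longer exists on the sys module...
print(hasattr(sys, "setdefaultencoding"))  # False

# ...and the default encoding is UTF-8 out of the box.
print(sys.getdefaultencoding())  # utf-8
```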
salient
Updated on July 28, 2022
Comments
-
salient almost 2 years
Loading a dataframe with foreign characters (åäö) into Spark using spark.read.csv with encoding='utf-8', then trying to do a simple show():
>>> df.show()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/spark/python/pyspark/sql/dataframe.py", line 287, in show
    print(self._jdf.showString(n, truncate))
UnicodeEncodeError: 'ascii' codec can't encode character u'\ufffd' in position 579: ordinal not in range(128)
I figure this is probably related to Python itself, but I cannot understand how any of the tricks mentioned here, for example, can be applied in the context of PySpark and the show() function.
-
Vicky over 5 years Yes, it works: export PYTHONIOENCODING=utf8 before spark-submit.
-
Hardik Gupta over 4 years This is no longer a valid solution.
-
Kardi Teknomo over 2 yearscheck this: stackoverflow.com/questions/3828723/…