How do I write a PySpark DataFrame to HDFS and then read it back into a DataFrame?
Writing a DataFrame to HDFS (Spark 1.6):

df.write.save('/target/path/', format='parquet', mode='append')  # df is an existing DataFrame object

Some of the format options are csv, parquet, json, etc.
Reading a DataFrame from HDFS (Spark 1.6):

from pyspark.sql import SQLContext

sqlContext = SQLContext(sc)
df = sqlContext.read.format('parquet').load('/path/to/file')

The format method takes arguments such as parquet, csv, json, etc.
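Put together, a write-then-read round trip can be sketched as below. This is a sketch for the Spark 1.6 API: `sc` is assumed to be an existing SparkContext, the target path is illustrative, and the pyspark import is done inside the function so the sketch stays self-contained.

```python
def round_trip(sc, df, path='/target/path/'):
    """Append `df` to HDFS as Parquet, then read the whole path back
    into a new DataFrame (Spark 1.6 API; `path` is illustrative)."""
    # Imported here so the sketch can be read without a Spark installation.
    from pyspark.sql import SQLContext

    # Write: append keeps any data already stored at `path`.
    df.write.save(path, format='parquet', mode='append')

    # Read: load everything back from the same location.
    sqlContext = SQLContext(sc)
    return sqlContext.read.format('parquet').load(path)
```

Because mode='append' is used, calling this repeatedly accumulates files under the path, and the read picks up all of them as one DataFrame.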
Author: Ajg
Updated on January 11, 2020

Comments
- Ajg over 4 years: I have a very big PySpark DataFrame, so I want to preprocess subsets of it and store them to HDFS. Later I want to read all of them back and merge them together. Thanks.
- Ajg almost 7 years: Hey, I get AttributeError: 'DataFrameWriter' object has no attribute 'csv'. Also, I need to read that DataFrame later, I think in a new Spark session.
- rogue-one almost 7 years: What is the version of your Spark installation?
- Ajg almost 7 years: Spark version 1.6.1.
- Ajg almost 7 years: Thanks a lot. One doubt: while reading, what if there are multiple files in that location? How do I specify which file I want to read? Thanks.
- rogue-one almost 7 years: If you want to read only one file among many, just specify the full file path. If you want to read all of the files, you can use glob patterns like * in the path.
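The glob suggestion above can be sketched as follows. The directory path is illustrative, and `sqlContext` is assumed to be an existing SQLContext (Spark 1.6 API):

```python
def glob_for_directory(directory):
    """Build a Hadoop glob that matches every file directly under `directory`.
    Hadoop globbing also supports ?, [abc], and {a,b} (hypothetical helper)."""
    return directory.rstrip('/') + '/*'

def load_all(sqlContext, directory='/target/path/'):
    """Read every file in `directory` into a single DataFrame
    (assumes `sqlContext` is a live SQLContext on Spark 1.6)."""
    return sqlContext.read.format('parquet').load(glob_for_directory(directory))
```

To read one specific file instead, pass its full path (e.g. a single part file) to load directly.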
- Ajg almost 7 years: Thanks. Will try that.
- Ajg almost 7 years: Sorry, one more question: can you please tell me how to delete those DataFrames from HDFS afterwards?
- rogue-one almost 7 years: To delete the data from HDFS you can use HDFS shell commands like hdfs dfs -rm -r <path>. You can execute this from Python using subprocess, e.g. subprocess.call(["hdfs", "dfs", "-rm", "-r", path]).
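The deletion step can be wrapped in a small helper. Note that the HDFS shell takes -r and -f as separate flags (there is no combined -rf as in GNU rm); the sketch below just builds and runs that command, and the function names are illustrative:

```python
import subprocess

def hdfs_rm_command(path, recursive=True, force=False):
    """Build the `hdfs dfs -rm` command for `path` as an argument list."""
    cmd = ["hdfs", "dfs", "-rm"]
    if recursive:
        cmd.append("-r")   # delete directories recursively
    if force:
        cmd.append("-f")   # don't fail if the path doesn't exist
    cmd.append(path)
    return cmd

def hdfs_delete(path):
    """Delete `path` from HDFS; returns the shell's exit code
    (requires the `hdfs` binary on the PATH)."""
    return subprocess.call(hdfs_rm_command(path, recursive=True, force=True))
```

Passing the command as a list (rather than a single shell string) avoids shell quoting issues with paths containing spaces or glob characters.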
- ERJAN almost 4 years: What is the target path? Where does HDFS actually live on my PC?