PySpark: How to fillna values in dataframe for specific columns?
Solution 1
df.fillna(0, subset=['a', 'b'])
fillna takes a subset parameter to choose which columns to fill, as long as your Spark version is 1.3.1 or later.
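For reference, a minimal runnable sketch of this approach (it assumes a modern PySpark session created with SparkSession, rather than the sqlContext used in the question):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Same sample data as in the question; None becomes null in the DataFrame.
df = spark.createDataFrame(
    [(1, 2, 4), (0, None, None), (None, 3, 4)],
    ["a", "b", "c"],
)

# Fill nulls with 0 only in columns "a" and "b"; the null in "c" is kept.
df.fillna(0, subset=["a", "b"]).show()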
Solution 2
Use a dictionary to fill values of certain columns:
df.fillna({'a': 0, 'b': 0})
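On the DataFrame built above, this gives the same result as the subset form, and it also lets each column take its own replacement value:

# Same effect as fillna(0, subset=['a', 'b']); per-column values such as
# {'a': 0, 'b': -1} work too.
df.fillna({'a': 0, 'b': 0}).show()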
Author
Rakesh Adhikesavan
I'm a science enthusiast, a technophile, a dog lover and an aspiring Data Scientist.
Updated on April 18, 2020
Comments
-
Rakesh Adhikesavan about 4 years
I have the following sample DataFrame:
a    | b    | c    |
1    | 2    | 4    |
0    | null | null |
null | 3    | 4    |
And I want to replace null values only in the first 2 columns - Column "a" and "b":
a | b | c    |
1 | 2 | 4    |
0 | 0 | null |
0 | 3 | 4    |
Here is the code to create sample dataframe:
rdd = sc.parallelize([(1, 2, 4), (0, None, None), (None, 3, 4)])
df2 = sqlContext.createDataFrame(rdd, ["a", "b", "c"])
I know how to replace all null values using:
df2 = df2.fillna(0)
And when I try this, I lose the third column:
df2 = df2.select(df2.columns[0:1]).fillna(0)
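That happens because select() returns a new DataFrame containing only the listed columns, so everything not selected is dropped before fillna runs. A short sketch against the df2 built above, contrasting that attempt with the subset form from Solution 1:

# df2.columns[0:1] is just ["a"], so this keeps a single column and
# drops "b" and "c" entirely.
only_a = df2.select(df2.columns[0:1]).fillna(0)

# The subset form fills "a" and "b" in place while keeping "c" untouched.
filled = df2.fillna(0, subset=["a", "b"])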