How to delete columns in a PySpark DataFrame


Solution 1

Reading the Spark documentation I found an easier solution.

Since Spark version 1.4 there is a drop(col) function, which can be used in PySpark on a DataFrame.

You can use it in two ways:

  1. df.drop('age')
  2. df.drop(df.age)

Pyspark Documentation - Drop
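
For illustration, here is a minimal runnable sketch of both forms (the SparkSession setup and the sample names/ages are assumptions added for the example):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# sample data, invented for the example
df = spark.createDataFrame([("Alice", 34), ("Bob", 45)], ["name", "age"])

# Both calls return a new DataFrame without the column; df itself is unchanged.
df.drop('age').show()   # drop by column name
df.drop(df.age).show()  # drop by Column object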

Solution 2

Adding to @Patrick's answer, you can use the following to drop multiple columns

columns_to_drop = ['id', 'id_copy']
df = df.drop(*columns_to_drop)
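
The * unpacks the list, so the call above is equivalent to df.drop('id', 'id_copy'). A self-contained sketch (the sample rows and the value column are invented for the example):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# sample data, invented for the example
df = spark.createDataFrame([(1, 1, "a")], ["id", "id_copy", "value"])

columns_to_drop = ['id', 'id_copy']
df = df.drop(*columns_to_drop)  # same as df.drop('id', 'id_copy')
df.show()  # only the 'value' column remains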

Solution 3

An easy way to do this is to use "select" and realize you can get a list of all columns for the DataFrame, df, with df.columns.

drop_list = ['a column', 'another column', ...]

df.select([column for column in df.columns if column not in drop_list])
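
Note that select returns a new DataFrame, so assign the result if you want to keep it. A minimal sketch of the pattern (the column names here are made up for the example):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# sample data, invented for the example
df = spark.createDataFrame([(1, "x", 2.0)], ["a column", "another column", "kept"])

drop_list = ['a column', 'another column']
# select returns a new DataFrame, so reassign the result to keep it
df = df.select([column for column in df.columns if column not in drop_list])
df.show()  # only 'kept' remains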

Solution 4

You can do it in two ways:

1: Keep only the necessary columns:

drop_column_list = ["drop_column"]
df = df.select([column for column in df.columns if column not in drop_column_list])  

2: This is the more elegant way.

df = df.drop("col_name")

You should avoid any collect() version, because it sends the complete dataset to the master and takes a big computing effort!

Solution 5

You could either explicitly name the columns you want to keep, like so:

keep = [a.id, a.julian_date, a.user_id, b.quan_created_money, b.quan_created_cnt]

Or, in a more general approach, you'd include all columns except a specific one via a list comprehension, for example like this (excluding the id column from b):

keep = [a[c] for c in a.columns] + [b[c] for c in b.columns if c != 'id']

Finally you make a selection on your join result:

d = a.join(b, a.id==b.id, 'outer').select(*keep)
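
Put together, a sketch of the whole flow (the sample rows are invented; a and b mirror the schemas from the question):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# sample rows invented; schemas mirror the question
a = spark.createDataFrame([(1, "2456", 10)], ["id", "julian_date", "user_id"])
b = spark.createDataFrame([(1, 100, 2)], ["id", "quan_created_money", "quan_created_cnt"])

# Column instances (a[c], b[c]) disambiguate the two 'id' columns,
# which plain string names could not after the join.
keep = [a[c] for c in a.columns] + [b[c] for c in b.columns if c != 'id']
d = a.join(b, a.id == b.id, 'outer').select(*keep)
d.show()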

Comments

  • xjx0524 about 2 years
    >>> a
    DataFrame[id: bigint, julian_date: string, user_id: bigint]
    >>> b
    DataFrame[id: bigint, quan_created_money: decimal(10,0), quan_created_cnt: bigint]
    >>> a.join(b, a.id==b.id, 'outer')
    DataFrame[id: bigint, julian_date: string, user_id: bigint, id: bigint, quan_created_money: decimal(10,0), quan_created_cnt: bigint]
    

    There are two id: bigint columns and I want to delete one. How can I do that?

  • deusxmach1na about 9 years
    I think I got the answer. Select needs to take a list of strings NOT a list of columns. So do this: keep = [c for c in a.columns] + [c for c in b.columns if c != 'id'] d = a.join(b, a.id==b.id, 'outer').select(*keep)
  • karlson about 9 years
    Well, that should do exactly the same thing as my answer, as I'm pretty sure that select accepts either strings OR columns (spark.apache.org/docs/latest/api/python/…). Btw, in your line keep = ... there's no need to use a list comprehension for a: a.columns + [c for c in b.columns if c != 'id'] should achieve the exact same thing, as a.columns is already a list of strings.
  • karlson almost 9 years
    @deusxmach1na Actually the column selection based on strings cannot work for the OP, because that would not solve the ambiguity of the id column. In that case you have to use the Column instances in select.
  • deusxmach1na almost 9 years
    All good points. I tried your solution in Spark 1.3 and got errors, so what I posted actually worked for me. And to resolve the id ambiguity I renamed my id column before the join then dropped it after the join using the keep list. HTH anyone else that was stuck like I was.
  • Shane Halloran over 6 years
    Thank you, this works great for me for removing duplicate columns with the same name as another column, where I use df.select([df.columns[column_num] for column_num in range(len(df.columns)) if column_num!=2]), where the column I want to remove has index 2.
  • mnis.p over 5 years
    When the data size is large, collect() might cause a heap space error. You can also create a new DataFrame dropping the extra field with ndf = df.drop('age').
  • DefiniteIntegral over 5 years
    I had to reassign the drop results back to the dataframe: df = df.drop(*columns_to_drop)
  • seufagner almost 5 years
    Spark 2.4 (and earlier versions) doesn't accept more than one column name.
  • Guido about 4 years
    Note that you will not get an error if the column does not exist
  • DataBach about 4 years
    Is it possible to drop columns by index?
  • frlzjosh almost 4 years
    I get an error saying TreeNodeException: Binding attribute, tree: _gen_alias_34#34 after I drop a column, and use .show()
  • Topde over 3 years
    @seufagner it does, just pass it as a list
  • Juan-Kabbali about 3 years
    What does the asterisk * mean in *columns_to_drop?
  • Clock Slave about 3 years
    The * is to unpack the list. (*[a,b,c]) becomes (a,b,c)
  • qwr over 2 years
    There is absolutely no reason to use collect for this operation so I removed it from this answer