How to divide or multiply every non-string column of a PySpark DataFrame by a float constant?


Solution 1

I don't know of any library function that could do this, but this snippet seems to do the job just fine:

from pyspark.sql.functions import col

CONSTANT = 10.0

# Rebind df once per numeric column; string columns are left untouched.
for field in df.schema.fields:
    if str(field.dataType) in ['DoubleType', 'FloatType', 'LongType', 'IntegerType', 'DecimalType']:
        name = str(field.name)
        df = df.withColumn(name, col(name) / CONSTANT)

df.show()

outputs:

+-----+----+----+
| name|High| Low|
+-----+----+----+
|Alice|0.43|null|
|  Bob| NaN|89.7|
+-----+----+----+
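
One caveat: str(field.dataType) renders a parameterized decimal as, for example, DecimalType(10,2), so the membership test above never matches decimal columns (see Rick Moritz's comment below). A minimal, sturdier sketch, assuming you'd rather test against PySpark's NumericType base class:

from pyspark.sql.functions import col
from pyspark.sql.types import NumericType

CONSTANT = 10.0

# isinstance matches every numeric type, including parameterized ones
# such as DecimalType(10, 2) that the string comparison misses.
for field in df.schema.fields:
    if isinstance(field.dataType, NumericType):
        df = df.withColumn(field.name, col(field.name) / CONSTANT)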

Solution 2

The code below should solve your problem in a time-efficient manner:

from pyspark.sql.functions import col

allowed_types = ['DoubleType', 'FloatType', 'LongType', 'IntegerType', 'DecimalType']

# One select builds the whole projection: numeric columns divided, the rest passed through.
df = df.select(*[(col(field.name) / 10).alias(field.name) if str(field.dataType) in allowed_types
                 else col(field.name) for field in df.schema.fields])

Using "withColumn" iteratively might not be a good idea when the number of columns is large.
This is because PySpark dataframes are immutable, so essentially we will be creating a new DataFrame for each column casted using withColumn, which will be a very slow process.

This is where the above code comes in handy.
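
To make that one-liner reusable, here's a small helper (the function name and its parameters are my own, hypothetical additions) that divides or multiplies every numeric column in a single select:

import operator
from pyspark.sql import DataFrame
from pyspark.sql.functions import col
from pyspark.sql.types import NumericType

def scale_numeric_columns(df: DataFrame, constant: float, divide: bool = True) -> DataFrame:
    """Divide (or multiply) every numeric column by `constant` in one projection."""
    op = operator.truediv if divide else operator.mul
    return df.select(*[
        op(col(f.name), constant).alias(f.name) if isinstance(f.dataType, NumericType)
        else col(f.name)
        for f in df.schema.fields
    ])

df = scale_numeric_columns(df, 10.0)                  # divide by 10.0
# df = scale_numeric_columns(df, 10.0, divide=False)  # multiply instead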

Author by GeorgeOfTheRF (Data Scientist)

Updated on June 15, 2022

Comments

  • GeorgeOfTheRF
    GeorgeOfTheRF almost 2 years

    My input dataframe looks like the below

    from pyspark.sql import SparkSession
    spark = SparkSession.builder.appName("Basics").getOrCreate()
    
    df=spark.createDataFrame(data=[('Alice',4.300,None),('Bob',float('nan'),897)],schema=['name','High','Low'])
    
    +-----+----+----+
    | name|High| Low|
    +-----+----+----+
    |Alice| 4.3|null|
    |  Bob| NaN| 897|
    +-----+----+----+
    

    Expected Output if divided by 10.0

    +-----+----+----+
    | name|High| Low|
    +-----+----+----+
    |Alice|0.43|null|
    |  Bob| NaN|89.7|
    +-----+----+----+
    
  • Rick Moritz
    Rick Moritz almost 7 years
    you're missing DecimalType.
  • Ladenkov Vladislav
    Ladenkov Vladislav almost 6 years
    this code throws NameError: name 'col' is not defined
  • seth127
    seth127 over 5 years
    @LadenkovVladislav col is a pyspark function. You have to import it with from pyspark.sql.functions import col. Or you can just do from pyspark.sql.functions import * and get all the helper functions, but some people don't believe in that.
  • Fabio Magarelli
    Fabio Magarelli almost 5 years
    Hi, what if you want the function to return an integer type? I tried df = df.withColumn(name, round(col(name)/CONSTANT)) but it returns a single decimal number
  • Fabio Magarelli
    Fabio Magarelli almost 5 years
    Sorry, I found the solution: after the division you just run df = df.withColumn(name, col(name).cast(IntegerType())) (see the sketch after this comment list).
  • Mysterious
    Mysterious about 4 years
    This code is not efficient if you have a long list of columns.
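
For reference, Fabio Magarelli's round-then-cast fix above needs imports from both pyspark.sql.functions and pyspark.sql.types; a minimal sketch (the column name is hypothetical, for illustration):

from pyspark.sql.functions import col, round as spark_round
from pyspark.sql.types import IntegerType

CONSTANT = 10.0
name = "Low"  # hypothetical column name, for illustration

# spark_round() still yields a double (e.g. 90.0); the cast converts it to int.
df = df.withColumn(name, spark_round(col(name) / CONSTANT).cast(IntegerType()))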