Fetching distinct values on a column using Spark DataFrame


Solution 1

Well, to obtain all the distinct values in a DataFrame you can use distinct. As you can see in the documentation, that method returns another DataFrame. After that, you can create a UDF to transform each record.

For example:

import sqlContext.implicits._                     // for toDF on an RDD
import org.apache.spark.sql.functions.{col, udf}

val df = sc.parallelize(Array((1, 2), (3, 4), (1, 6))).toDF("age", "salary")

// Obtain all the distinct values. If you show this DataFrame you should see only {1, 3}.
val distinctValuesDF = df.select(df("age")).distinct

// Define your UDF. Here it is a simple function, but it can get as complicated as you need.
// Note the explicit parameter type: Scala cannot infer it inside udf.
// Also note this is integer division.
val myTransformationUDF = udf((value: Int) => value / 10)

// Run that transformation "over" your DataFrame
val afterTransformationDF = distinctValuesDF.select(myTransformationUDF(col("age")))
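
You can then inspect the result:

afterTransformationDF.show()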

Solution 2

In PySpark, try this:

df.select('col_name').distinct().show()

Solution 3

This solution demonstrates how to transform data with Spark native functions, which are preferable to UDFs. It also demonstrates how dropDuplicates is more suitable than distinct for certain queries.

Suppose you have this DataFrame:

+-------+-------------+
|country|    continent|
+-------+-------------+
|  china|         asia|
| brazil|south america|
| france|       europe|
|  china|         asia|
+-------+-------------+

Here's how to take all the distinct countries and run a transformation:

df
  .select("country")
  .distinct
  .withColumn("country", concat(col("country"), lit(" is fun!")))
  .show()
+--------------+
|       country|
+--------------+
|brazil is fun!|
|france is fun!|
| china is fun!|
+--------------+

You can use dropDuplicates instead of distinct if you don't want to lose the continent information:

df
  .dropDuplicates("country")
  .withColumn("description", concat(col("country"), lit(" is a country in "), col("continent")))
  .show(false)
+-------+-------------+------------------------------------+
|country|continent    |description                         |
+-------+-------------+------------------------------------+
|brazil |south america|brazil is a country in south america|
|france |europe       |france is a country in europe       |
|china  |asia         |china is a country in asia          |
+-------+-------------+------------------------------------+

See the Spark documentation for more information about filtering DataFrames and about dropping duplicates.

Ultimately, you'll want to wrap your transformation logic in custom transformations that can be chained with the Dataset#transform method.
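
For example, here is a minimal sketch of such a custom transformation (the function name withFunDescription is hypothetical):

import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.{col, concat, lit}

// A reusable custom transformation: DataFrame in, DataFrame out
def withFunDescription(df: DataFrame): DataFrame =
  df.withColumn("description", concat(col("country"), lit(" is fun!")))

// Chain it with other operations via Dataset#transform
df.select("country").distinct.transform(withFunDescription).show()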

Solution 4

df = df.select("column1", "column2", ..., "columnN").distinct().collect()

You can chain additional calls before collect(), such as toJSON(), if you want the DataFrame in JSON format.
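
For instance, a sketch in Scala, assuming the country/continent DataFrame from Solution 3:

// Collect the distinct rows as JSON strings
val jsonRows = df.select("country", "continent").distinct.toJSON.collect()
jsonRows.foreach(println) // prints lines like {"country":"china","continent":"asia"}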


Comments

  • Kazhiyur
    Kazhiyur about 3 years

    Using Spark 1.6.1, I need to fetch the distinct values of a column and then perform some specific transformation on top of them. The column contains more than 50 million records and can grow larger.
    I understand that doing a distinct.collect() will bring the results back to the driver program. Currently I am performing this task as below; is there a better approach?

     import sqlContext.implicits._
     import org.apache.spark.storage.StorageLevel
     preProcessedData.persist(StorageLevel.MEMORY_AND_DISK_2)
    
     preProcessedData.select(ApplicationId).distinct.collect().foreach(x => {
       val applicationId = x.getAs[String](ApplicationId)
       val selectedApplicationData = preProcessedData.filter($"$ApplicationId" === applicationId)
       // DO SOME TASK PER applicationId
     })
    
     preProcessedData.unpersist()  
    
  • Nikunj Kakadiya
    Nikunj Kakadiya over 3 years
    dropDuplicates allows you to keep all of the columns in the DataFrame while performing the distinct operation only on the column(s) you pass to the dropDuplicates call.
  • Ravindra
    Ravindra about 2 years
    This works for Scala too.
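
    For reference, a Scala equivalent of the PySpark one-liner in Solution 2 (assuming a DataFrame df with a column col_name):

     df.select("col_name").distinct().show()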