spark - scala: not a member of org.apache.spark.sql.Row


Solution 1

When you convert a DataFrame to an RDD, you get an RDD[Row], so when you call map, your function receives a Row as its parameter. Therefore, you must use the Row methods to access its members (note that the index starts from 0):

import org.apache.spark.sql.Row

df.rdd.map { row: Row =>
  // build a (key, row) pair from the second and third columns
  (row.getString(1) + "_" + row.getString(2), row)
}.take(5)

You can view more examples and check all methods available for Row objects in the Spark scaladoc.
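For instance, here is a minimal sketch of the most common accessors (the three-field row below is hypothetical):

import org.apache.spark.sql.Row

val row = Row(1, "foo", "bar")

row.getString(1)      // typed getter: "foo"
row.getAs[String](2)  // generic typed getter: "bar"
row(0)                // apply returns Any: 1
row.get(0)            // same as apply: 1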

Edit: I don't know why you are doing this operation, but if you just want to concatenate String columns of a DataFrame, you may consider the following option:

import org.apache.spark.sql.functions._
val newDF = df.withColumn("concat", concat(df("col2"), lit("_"), df("col3")))
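If all you need is a separator between the columns, concat_ws (also in org.apache.spark.sql.functions) does the same thing in one call; a sketch with the same column names as above:

val newDF2 = df.withColumn("concat", concat_ws("_", df("col2"), df("col3")))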

Solution 2

You can access each element of a Row as if it were a List or an Array, that is, with (index); you can also use the get method.

For example:

df.rdd.map { t =>
  // _2 and _3 in the question correspond to Row indices 1 and 2
  (t(1).toString + "_" + t(2).toString, t)
}.take(5)
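If you prefer typed access over calling toString on Any, getAs[T] works here as well; a sketch assuming both columns are strings:

df.rdd.map { t =>
  (t.getAs[String](1) + "_" + t.getAs[String](2), t)
}.take(5)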

Comments

  • Edamame (almost 2 years ago)

    I am trying to convert a DataFrame to an RDD and then perform some operations to return tuples:

    df.rdd.map { t =>
      (t._2 + "_" + t._3, t)
    }.take(5)
    

    Then I got the error below. Anyone have any ideas? Thanks!

    <console>:37: error: value _2 is not a member of org.apache.spark.sql.Row
                   (t._2 + "_" + t._3 , t)
                      ^