How to create correct data frame for classification in Spark ML


Solution 1

You simply need to make sure that the "features" column in your data frame is of type VectorUDT. If it is not, you will get an error like the one below, where "age" (an IntegerType column) was merely renamed to "features":

scala> val df2 = dataFixed.withColumnRenamed("age", "features")
df2: org.apache.spark.sql.DataFrame = [features: int, hours_per_week: int, education: string, sex: string, salaryRange: string]

scala> val cmModel = cv.fit(df2) 
java.lang.IllegalArgumentException: requirement failed: Column features must be of type org.apache.spark.mllib.linalg.VectorUDT@1eef but was actually IntegerType.
    at scala.Predef$.require(Predef.scala:233)
    at org.apache.spark.ml.util.SchemaUtils$.checkColumnType(SchemaUtils.scala:37)
    at org.apache.spark.ml.PredictorParams$class.validateAndTransformSchema(Predictor.scala:50)
    at org.apache.spark.ml.Predictor.validateAndTransformSchema(Predictor.scala:71)
    at org.apache.spark.ml.Predictor.transformSchema(Predictor.scala:118)
    at org.apache.spark.ml.Pipeline$$anonfun$transformSchema$4.apply(Pipeline.scala:164)
    at org.apache.spark.ml.Pipeline$$anonfun$transformSchema$4.apply(Pipeline.scala:164)
    at scala.collection.IndexedSeqOptimized$class.foldl(IndexedSeqOptimized.scala:51)
    at scala.collection.IndexedSeqOptimized$class.foldLeft(IndexedSeqOptimized.scala:60)
    at scala.collection.mutable.ArrayOps$ofRef.foldLeft(ArrayOps.scala:108)
    at org.apache.spark.ml.Pipeline.transformSchema(Pipeline.scala:164)
    at org.apache.spark.ml.tuning.CrossValidator.transformSchema(CrossValidator.scala:142)
    at org.apache.spark.ml.PipelineStage.transformSchema(Pipeline.scala:59)
    at org.apache.spark.ml.tuning.CrossValidator.fit(CrossValidator.scala:107)
    ...

EDIT1

Essentially there need to be two fields in your data frame: "features" for the feature vector and "label" for the instance labels. The label must be of type Double.

To create the "features" field with Vector type, first create a udf as shown below:

val toVec4 = udf[Vector, Int, Int, String, String] { (a, b, c, d) =>
  // Map the categorical strings to numeric codes; any value outside
  // the listed categories will throw a MatchError at runtime.
  val e3 = c match {
    case "hs-grad"   => 0
    case "bachelors" => 1
    case "masters"   => 2
  }
  val e4 = d match { case "male" => 0 case "female" => 1 }
  Vectors.dense(a, b, e3, e4)
}

To encode the "label" field as well, create another udf as shown below:

// "A" -> 0.0, "B" -> 1.0; any other label value throws a MatchError
val encodeLabel = udf[Double, String]( _ match { case "A" => 0.0 case "B" => 1.0 } )

Now we transform the original data frame using these two udfs:

val df = dataFixed.withColumn(
  "features",
  toVec4(
    dataFixed("age"),
    dataFixed("hours_per_week"),
    dataFixed("education"),
    dataFixed("sex")
  )
).withColumn("label", encodeLabel(dataFixed("salaryRange"))).select("features", "label")

Note that there can be extra columns/fields present in the data frame, but in this case I have selected only features and label:

scala> df.show()
+-------------------+-----+
|           features|label|
+-------------------+-----+
|[38.0,40.0,0.0,0.0]|  0.0|
|[28.0,40.0,1.0,1.0]|  0.0|
|[52.0,45.0,0.0,0.0]|  1.0|
|[31.0,50.0,2.0,1.0]|  1.0|
|[42.0,40.0,1.0,0.0]|  1.0|
+-------------------+-----+

Now it is up to you to set the correct parameters for your learning algorithm to make it work.
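
For example, here is a minimal sketch (reusing the cv cross-validator that the question below sets up): since "features" and "label" are the default column names expected by RandomForestClassifier, fitting on the transformed data frame should now pass the schema check:

// df has the "features" and "label" columns built above
val cmModel = cv.fit(df)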

Solution 2

As of Spark 1.4, you can use the Transformer org.apache.spark.ml.feature.VectorAssembler. Just provide the column names you want to use as features.

val assembler = new VectorAssembler()
  .setInputCols(Array("col1", "col2", "col3"))
  .setOutputCol("features")

and add it to your pipeline. Note that VectorAssembler accepts only numeric input columns, so string columns such as education and sex must be indexed to numeric first, as shown in the sketch below.
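
For the sample data in the question, here is a minimal sketch of a complete pipeline (assuming the rf classifier from the question; the *_idx and label output column names are just names chosen for this sketch):

import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.feature.{StringIndexer, VectorAssembler}

// Index the categorical string columns into numeric columns first;
// VectorAssembler cannot assemble string inputs directly.
val educationIndexer = new StringIndexer()
  .setInputCol("education")
  .setOutputCol("education_idx")
val sexIndexer = new StringIndexer()
  .setInputCol("sex")
  .setOutputCol("sex_idx")

// Index the string label into the numeric "label" column that
// classifiers expect by default.
val labelIndexer = new StringIndexer()
  .setInputCol("salaryRange")
  .setOutputCol("label")

val assembler = new VectorAssembler()
  .setInputCols(Array("age", "hours_per_week", "education_idx", "sex_idx"))
  .setOutputCol("features")

val pipeline = new Pipeline()
  .setStages(Array(educationIndexer, sexIndexer, labelIndexer, assembler, rf))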


Comments

  • Dusan Grubjesic about 4 years

    I am trying to run random forest classification using the Spark ML API, but I am having issues creating the right data frame input for the pipeline.

    Here is sample data:

    age,hours_per_week,education,sex,salaryRange
    38,40,"hs-grad","male","A"
    28,40,"bachelors","female","A"
    52,45,"hs-grad","male","B"
    31,50,"masters","female","B"
    42,40,"bachelors","male","B"
    

    age and hours_per_week are integers, while the other features, including the label salaryRange, are categorical (String).

    Loading this csv file (let's call it sample.csv) can be done with the Spark CSV library like this:

    val data = sqlContext.csvFile("/home/dusan/sample.csv")
    

    By default all columns are imported as strings, so we need to change "age" and "hours_per_week" to Int:

    val toInt    = udf[Int, String]( _.toInt)
    val dataFixed = data.withColumn("age", toInt(data("age"))).withColumn("hours_per_week",toInt(data("hours_per_week")))
    

    Just to check how schema looks now:

    scala> dataFixed.printSchema
    root
     |-- age: integer (nullable = true)
     |-- hours_per_week: integer (nullable = true)
     |-- education: string (nullable = true)
     |-- sex: string (nullable = true)
     |-- salaryRange: string (nullable = true)
    

    Then let's set up the cross-validator and the pipeline:

    val rf = new RandomForestClassifier()
    val pipeline = new Pipeline().setStages(Array(rf)) 
    val cv = new CrossValidator().setNumFolds(10).setEstimator(pipeline).setEvaluator(new BinaryClassificationEvaluator)
    

    The error shows up when running this line:

    val cmModel = cv.fit(dataFixed)
    

    java.lang.IllegalArgumentException: Field "features" does not exist.

    It is possible to set the label column and the feature column in RandomForestClassifier; however, I have 4 columns as predictors (features), not only one.

    How should I organize my data frame so that it has the label and features columns organized correctly?

    For your convenience, here is the full code:

    import org.apache.spark.SparkConf
    import org.apache.spark.SparkContext
    import org.apache.spark.ml.classification.RandomForestClassifier
    import org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
    import org.apache.spark.ml.tuning.CrossValidator
    import org.apache.spark.ml.Pipeline
    import org.apache.spark.sql.DataFrame
    
    import org.apache.spark.sql.functions._
    import org.apache.spark.mllib.linalg.{Vector, Vectors}
    
    
    object SampleClassification {
    
      def main(args: Array[String]): Unit = {
    
        //set spark context
        val conf = new SparkConf().setAppName("Simple Application").setMaster("local");
        val sc = new SparkContext(conf)
        val sqlContext = new org.apache.spark.sql.SQLContext(sc)
    
        import sqlContext.implicits._
        import com.databricks.spark.csv._
    
        //load data by using databricks "Spark CSV Library" 
        val data = sqlContext.csvFile("/home/dusan/sample.csv")
    
        //by default all columns are imported as string so we need to change "age" and  "hours_per_week" to Int
        val toInt    = udf[Int, String]( _.toInt)
        val dataFixed = data.withColumn("age", toInt(data("age"))).withColumn("hours_per_week",toInt(data("hours_per_week")))
    
    
        val rf = new RandomForestClassifier()
    
        val pipeline = new Pipeline().setStages(Array(rf))
    
        val cv = new CrossValidator().setNumFolds(10).setEstimator(pipeline).setEvaluator(new BinaryClassificationEvaluator)
    
        // this fails with error
        //java.lang.IllegalArgumentException: Field "features" does not exist.
        val cmModel = cv.fit(dataFixed) 
      }
    
    }
    

    Thanks for the help!

  • Dusan Grubjesic almost 9 years
    There is an old API located in the mllib package, and the points should indeed be LabeledPoint. However, I am trying to use the new API located in the ml package because it supports pipelines, cross-validation, etc. This new API uses DataFrame as input. E.g. compare these two: RandomForestClassifier from ml, which uses DataFrame, and RandomForestModel (spark.apache.org/docs/1.4.0/api/scala/…) from mllib.
  • Dusan Grubjesic almost 9 years
    Any chance you can show how I can create a column named "features" of type VectorUDT from my data?
  • tuxdna almost 9 years
    @DusanGrubjesic: I have added code examples. Please check EDIT1
  • Dusan Grubjesic almost 9 years
    this is really great! I am just not sure how we can tell the classifier from ml that these e3 and e4 are now categorical features, not numerical ones. In the "low level" mllib API it was possible to pass categoricalFeaturesInfo with the indexes and numbers of categories of the categorical features. In the "high level" ml API, this should be extracted directly from the schema.
  • tuxdna almost 9 years
    In this case the resulting Vector of Double values (all numerical) constitutes your feature vector. You may want to do standardization, one-hot encoding, normalization... whatever you see fit for your algorithm, but the values in your feature vector all have to be Double. Which low-level API are you referring to? (For one way to mark features as categorical in ml, see the sketch after these comments.)
  • Dusan Grubjesic almost 9 years
    There are 2 packages in Spark for machine learning. One is mllib (this one I call the "low level API") and the other is ml (this one I call the "high level API"). Anyway, tuxdna, thanks for the help; I will select your answer as the best of all the others.
  • tuxdna almost 9 years
    @DusanGrubjesic: I am glad that it was helpful. And thanks for the distinction between mllib and ml :-)
  • Joshua Taylor over 8 years
    tuxdna's answer explained the details of the problem, and what the solution has to look like. This answer shows the nice way to accomplish it.
  • gstvolvr about 8 years
    This would not work since some of the features are of type String. Great solution for strictly numerical data.
  • max over 7 years
    @gstvolvr You'll need to use StringIndexer first to convert strings to numeric. Might be worth adding this step to the answer for clarity.
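
Regarding the categorical-features discussion above, here is a hedged sketch (not from the original thread) of one way to mark features as categorical in the ml API: org.apache.spark.ml.feature.VectorIndexer plays roughly the role of mllib's categoricalFeaturesInfo by flagging low-cardinality slots of an assembled feature vector.

import org.apache.spark.ml.feature.VectorIndexer

// Treat any feature with at most 4 distinct values as categorical;
// tree-based learners such as RandomForestClassifier can exploit this.
val featureIndexer = new VectorIndexer()
  .setInputCol("features")
  .setOutputCol("indexedFeatures")
  .setMaxCategories(4)

A downstream classifier would then be pointed at the new column via setFeaturesCol("indexedFeatures"); indexedFeatures is a name chosen only for this sketch.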