How to create a DataFrame from a text file in Spark

Solution 1

Update - as of Spark 2.0, you can simply use the built-in csv data source:

import org.apache.spark.sql.SparkSession

val spark: SparkSession = SparkSession.builder().getOrCreate() // create the Spark session
val df = spark.read.csv("file.txt")

You can also use various options to control the CSV parsing, e.g.:

val df = spark.read.option("header", "false").csv("file.txt")
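
If your file uses a different delimiter (the question below uses ;), or you want Spark to infer column types, the reader options cover both. A minimal sketch, assuming a semicolon-delimited file:

val df = spark.read
  .option("sep", ";")            // custom delimiter
  .option("inferSchema", "true") // costs an extra pass over the data
  .csv("file.txt")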

For Spark versions earlier than 2.0: the easiest way is to use spark-csv - include it in your dependencies and follow the README. It allows setting a custom delimiter (;), can read CSV headers (if you have them), and can infer the schema types (at the cost of an extra scan of the data).
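
A minimal sketch of spark-csv usage (option names taken from its README; the semicolon delimiter is an assumption based on the question):

val df = sqlContext.read
  .format("com.databricks.spark.csv")
  .option("header", "false")
  .option("delimiter", ";")
  .option("inferSchema", "true")
  .load("file.txt")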

Alternatively, if you know the schema, you can create a case class that represents it and map your RDD elements into instances of this class before transforming into a DataFrame, e.g.:

case class Record(id: Int, name: String)

import sqlContext.implicits._ // brings .toDF() into scope (spark.implicits._ on Spark 2.x)

val myFile1 = myFile.map(_.split(";")).map {
  case Array(id, name) => Record(id.toInt, name)
}

myFile1.toDF() // DataFrame will have columns "id" and "name"
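
Note that the pattern match above throws a MatchError on any line that does not split into exactly two fields. If that is a concern, collecting with a partial function silently drops such lines instead (a sketch, assuming malformed lines should simply be skipped):

val myFileSafe = myFile.map(_.split(";")).collect {
  case Array(id, name) => Record(id.toInt, name)
}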

Solution 2

Here are several different ways to create a DataFrame from a text file.

import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf().setAppName(appName).setMaster("local")
val sc = new SparkContext(conf)

raw text file

val file = sc.textFile("C:\\vikas\\spark\\Interview\\text.txt")
val fileToDf = file.map(_.split(","))
  .map { case Array(a, b, c) => (a, b.toInt, c) }
  .toDF("name", "age", "city") // .toDF needs import spark.implicits._ (automatic in spark-shell)
fileToDf.foreach(println(_))
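
Note that foreach(println(_)) runs on the executors, so on a real cluster the output ends up in the executor logs rather than on the driver. For a quick local preview:

fileToDf.show()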

spark session without schema

import org.apache.spark.sql.SparkSession

val sparkSess = SparkSession.builder()
  .appName("SparkSessionZipsExample")
  .config(conf)
  .getOrCreate()

val df = sparkSess.read.option("header", "false")
  .csv("C:\\vikas\\spark\\Interview\\text.txt")
df.show()
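
Without a schema, every column is read as a string and given a generated name (_c0, _c1, ...):

df.printSchema()
// root
//  |-- _c0: string (nullable = true)
//  |-- _c1: string (nullable = true)
//  |-- _c2: string (nullable = true)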

spark session with schema

import org.apache.spark.sql.types._

val schemaString = "name age city"
val fields = schemaString.split(" ")
  .map(fieldName => StructField(fieldName, StringType, nullable = true))
val schema = StructType(fields)

val dfWithSchema = sparkSess.read.option("header", "false")
  .schema(schema)
  .csv("C:\\vikas\\spark\\Interview\\text.txt")
dfWithSchema.show()
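
Here every field is a StringType. If you want typed columns, spell them out instead of deriving them all from a string (a sketch, assuming age should be numeric):

val typedSchema = StructType(Seq(
  StructField("name", StringType, nullable = true),
  StructField("age",  IntegerType, nullable = true),
  StructField("city", StringType, nullable = true)
))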

using sql context

import org.apache.spark.sql.{Row, SQLContext}

val sqlCtx = new SQLContext(sc) // deprecated in Spark 2.x in favour of SparkSession

val fileRdd = sc.textFile("C:\\vikas\\spark\\Interview\\text.txt")
  .map(_.split(","))
  .map(x => Row(x: _*))
val sqlDf = sqlCtx.createDataFrame(fileRdd, schema) // reuses the schema defined above
sqlDf.show()

Solution 3

If you want to use the toDF method, you have to convert your RDD of Array[String] into an RDD of a case class. For example:

case class Test(id: String, field2: String)

val myFile = sc.textFile("file.txt")
val df = myFile.map(x => x.split(";")).map(x => Test(x(0), x(1))).toDF()

Solution 4

You will not be able to convert it into a DataFrame until you import the implicit conversions:

import org.apache.spark.SparkContext
import org.apache.spark.sql.SQLContext

val sqlContext = new SQLContext(new SparkContext())

import sqlContext.implicits._

Only after this can you convert it to a DataFrame:

case class Test(id: String, field2: String)

val myFile = sc.textFile("file.txt")
val df = myFile.map(x => x.split(";")).map(x => Test(x(0), x(1))).toDF()

Solution 5

import spark.implicits._ // provides the Encoder needed by map (automatic in spark-shell)

val df = spark.read.textFile("abc.txt") // returns a Dataset[String], one element per line

case class Abc(amount: Int, types: String, id: Int) // columns and data types

val df2 = df.map { rec =>
  val fields = rec.split(",") // assuming comma-separated fields
  Abc(fields(0).toInt, fields(1), fields(2).toInt)
}
df2.printSchema

root
 |-- amount: integer (nullable = true)
 |-- types: string (nullable = true)
 |-- id: integer (nullable = true)

Comments

  • Rahul over 3 years

    I have a text file on HDFS and I want to convert it into a DataFrame in Spark.

    I am using the SparkContext to load the file and then try to generate individual columns from that file.

    val myFile = sc.textFile("file.txt")
    val myFile1 = myFile.map(x=>x.split(";"))
    

    After doing this, I am trying the following operation.

    myFile1.toDF()
    

    I am getting an issue since the elements in the myFile1 RDD are now of array type.

    How can I solve this issue?