How to build a SparkSession in Spark 2.0 using pyspark?


Solution 1

from pyspark.sql import SparkSession
spark = SparkSession.builder.appName('abc').getOrCreate()

Now, to load a .csv file, you can use:

df = spark.read.csv('filename.csv', header=True)
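
To sanity-check the load, inspect the schema and a few rows (a minimal sketch; note that read.csv returns all columns as strings unless you also pass inferSchema=True):

df.printSchema()  # column names and types
df.show(5)        # first five rows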

Solution 2

As the Scala example shows, SparkSession is part of the SQL module, and the same is true in Python; see the pyspark.sql module documentation:

class pyspark.sql.SparkSession(sparkContext, jsparkSession=None) The entry point to programming Spark with the Dataset and DataFrame API. A SparkSession can be used to create DataFrames, register DataFrames as tables, execute SQL over tables, cache tables, and read parquet files. To create a SparkSession, use the following builder pattern:

>>> spark = SparkSession.builder \
...     .master("local") \
...     .appName("Word Count") \
...     .config("spark.some.config.option", "some-value") \
...     .getOrCreate()
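
Once the session exists, the DataFrame and SQL entry points described above are available directly on it. A minimal sketch with made-up data:

>>> rows = [('alice', 34), ('bob', 19)]                  # toy rows for illustration
>>> df = spark.createDataFrame(rows, ['name', 'age'])
>>> df.createOrReplaceTempView('people')                 # register the DataFrame as a table
>>> spark.sql('SELECT name FROM people WHERE age > 21').show()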

Solution 3

From the Spark 2.0 documentation (http://spark.apache.org/docs/2.0.0/api/python/pyspark.sql.html), you can create a Spark session like this:

>>> from pyspark.sql import SparkSession
>>> from pyspark.conf import SparkConf
>>> c = SparkConf()
>>> spark = SparkSession.builder.config(conf=c).getOrCreate()
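
A SparkConf also lets you collect settings up front and hand the whole object to the builder; a short sketch with placeholder values:

>>> c = SparkConf().setAppName('my_app').setMaster('local[2]')  # placeholder app name and master
>>> spark = SparkSession.builder.config(conf=c).getOrCreate()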

Solution 4

from pyspark.sql import SparkSession

spark = SparkSession.builder \
    .master("local") \
    .enableHiveSupport() \
    .getOrCreate()

spark.conf.set("spark.executor.memory", '8g')
spark.conf.set('spark.executor.cores', '3')
spark.conf.set('spark.cores.max', '3')
spark.conf.set("spark.driver.memory",'8g')
sc = spark.sparkContext
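
Note that resource settings such as spark.executor.memory and spark.driver.memory are read when the session starts, so setting them via spark.conf.set afterwards generally has no effect. A safer sketch is to pass them to the builder before getOrCreate() (same placeholder values as above):

from pyspark.sql import SparkSession

spark = SparkSession.builder \
    .master("local") \
    .enableHiveSupport() \
    .config("spark.executor.memory", "8g") \
    .config("spark.executor.cores", "3") \
    .config("spark.cores.max", "3") \
    .config("spark.driver.memory", "8g") \
    .getOrCreate()
sc = spark.sparkContext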

Comments

  • haileyeve
    haileyeve over 4 years

    I just got access to Spark 2.0; until now I have been using Spark 1.6.1. Can someone please help me set up a SparkSession using pyspark (Python)? I know the Scala examples available online are similar (here), but I was hoping for a direct walkthrough in Python.

    My specific case: I am loading avro files from S3 in a Zeppelin Spark notebook, then building DataFrames and running various pyspark and SQL queries against them. All of my old queries use sqlContext. I know this is poor practice, but I started my notebook with

    sqlContext = SparkSession.builder.enableHiveSupport().getOrCreate()

    I can read in the avros with

    mydata = sqlContext.read.format("com.databricks.spark.avro").load("s3:...

    and build dataframes with no issues. But once I start querying the dataframes/temp tables, I keep getting the "java.lang.NullPointerException" error. I think that is indicative of a translational error (e.g. old queries worked in 1.6.1 but need to be tweaked for 2.0). The error occurs regardless of query type. So I am assuming

    1.) the sqlContext alias is a bad idea

    and

    2.) I need to properly set up a sparkSession.

    So if someone could show me how this is done, or explain the discrepancies between these versions of Spark, I would greatly appreciate it (a sketch for this workflow appears after the comments below). Please let me know if I need to elaborate on this question. I apologize if it is convoluted.

  • Kyle Kochis
    Kyle Kochis over 7 years
    This answer doesn't specifically address the hive support issue
  • xenocyon
    xenocyon over 7 years
    You need to append .enableHiveSupport() to the other methods called in SparkSession.builder, prior to .getOrCreate()
  • WestCoastProjects
    WestCoastProjects over 6 years
    @xenocyon your addition is important enough it might warrant a separate answer
  • NYCeyes
    NYCeyes over 5 years
    This is the best answer because the others use SparkSession without first showing how to get to it, as you've demonstrated here (via from pyspark.sql import SparkSession).
  • OneCricketeer
    OneCricketeer over 4 years
    I would suggest not using eval. You don't need to use String concatenation when you have a builder object
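
For the workflow described in the question, a minimal Spark 2.0 sketch (the S3 path and app name are placeholders, and the com.databricks:spark-avro package must be on the classpath):

from pyspark.sql import SparkSession

# Spark 2.0 entry point, with Hive support as in the old setup
spark = SparkSession.builder \
    .appName('avro-notebook') \
    .enableHiveSupport() \
    .getOrCreate()

# load the avro files through the spark-avro data source
mydata = spark.read.format('com.databricks.spark.avro').load('s3://bucket/path')

# register and query through the session instead of a sqlContext alias
mydata.createOrReplaceTempView('mydata')
spark.sql('SELECT COUNT(*) FROM mydata').show()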