Reading local parquet files in Spark 2.0


It looks like you're running into SPARK-15893. The Spark developers changed how file paths are resolved between 1.6.2 and 2.0.0. Per the comments on the JIRA ticket, open conf\spark-defaults.conf and add the line:

"spark.sql.warehouse.dir=file:///C:/Experiment/spark-2.0.0-bin-without-hadoop/spark-warehouse"

You should then be able to load a Parquet file with the Spark 2.0 API like so:

Dataset<Row> parquet = spark.read().parquet("C:/files/myfile.csv.parquet");
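
Alternatively, if you'd rather not edit the conf file, the same property can be set when building the SparkSession. A minimal sketch, assuming any writable local directory for the warehouse (C:/tmp/spark-warehouse below is just an example):

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

SparkSession spark = SparkSession.builder()
        .appName("Java Spark SQL Example")
        .master("local[*]")
        // Example warehouse location; the file:/// scheme is what avoids
        // the "Relative path in absolute URI" error on Windows.
        .config("spark.sql.warehouse.dir", "file:///C:/tmp/spark-warehouse")
        .getOrCreate();

Dataset<Row> parquet = spark.read().parquet("file:///C:/files/myfile.csv.parquet");
parquet.show(20);

Setting it programmatically keeps the fix inside the project itself, which is handy when running from IntelliJ rather than via spark-submit.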

Comments

  • Lili, almost 2 years ago

    In Spark 1.6.2 I am able to read local Parquet files with something very simple:

    SQLContext sqlContext = new SQLContext(new SparkContext("local[*]", "Java Spark SQL Example"));
    DataFrame parquet = sqlContext.read().parquet("file:///C:/files/myfile.csv.parquet");
    parquet.show(20);
    

    I'm trying to upgrade to Spark 2.0.0 and achieve the same by running:

    SparkSession spark = SparkSession.builder().appName("Java Spark SQL Example").master("local[*]").getOrCreate();
    Dataset<Row> parquet = spark.read().parquet("file:///C:/files/myfile.csv.parquet");
    parquet.show(20);
    

    This is running on Windows, from IntelliJ (a Java project), and I am not currently using a Hadoop cluster (that will come later; for now I'm just trying to get the processing logic right and get familiar with the APIs).

    Unfortunately, when run with Spark 2.0, the code throws an exception:

    Exception in thread "main" java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative path in absolute URI: file:C:/[my intellij project path]/spark-warehouse
    at org.apache.hadoop.fs.Path.initialize(Path.java:206)
    at org.apache.hadoop.fs.Path.<init>(Path.java:172)
    at org.apache.spark.sql.catalyst.catalog.SessionCatalog.makeQualifiedPath(SessionCatalog.scala:114)
    at org.apache.spark.sql.catalyst.catalog.SessionCatalog.createDatabase(SessionCatalog.scala:145)
    at org.apache.spark.sql.catalyst.catalog.SessionCatalog.<init>(SessionCatalog.scala:89)
    at org.apache.spark.sql.internal.SessionState.catalog$lzycompute(SessionState.scala:95)
    at org.apache.spark.sql.internal.SessionState.catalog(SessionState.scala:95)
    at org.apache.spark.sql.internal.SessionState$$anon$1.<init>(SessionState.scala:112)
    at org.apache.spark.sql.internal.SessionState.analyzer$lzycompute(SessionState.scala:112)
    at org.apache.spark.sql.internal.SessionState.analyzer(SessionState.scala:111)
    at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:49)
    at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:64)
    at org.apache.spark.sql.SparkSession.baseRelationToDataFrame(SparkSession.scala:382)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:143)
    at org.apache.spark.sql.DataFrameReader.parquet(DataFrameReader.scala:427)
    at org.apache.spark.sql.DataFrameReader.parquet(DataFrameReader.scala:411)
    at lili.spark.ParquetTest.main(ParquetTest.java:15)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at com.intellij.rt.execution.application.AppMain.main(AppMain.java:144)
    Caused by: java.net.URISyntaxException: Relative path in absolute URI: file:C:/[my intellij project path]/spark-warehouse
    at java.net.URI.checkPath(URI.java:1823)
    at java.net.URI.<init>(URI.java:745)
    at org.apache.hadoop.fs.Path.initialize(Path.java:203)
    ... 21 more
    

    I have no idea why it's trying to touch anything in my project's directory. Is there some bit of configuration I'm missing that was sensibly defaulted in Spark 1.6.2 but no longer is in 2.0? In other words, what's the easiest way to read a local Parquet file in Spark 2.0 on Windows?