How to access s3a:// files from Apache Spark?


Solution 1

Having experienced the difference between s3a and s3n first hand: 7.9 GB of data transferred over s3a took around 7 minutes, while the same 7.9 GB over s3n took 73 minutes (us-east-1 to us-west-1 in both cases, unfortunately; Redshift and Lambda were in us-east-1 at the time). This is a very important piece of the stack to get right, and it's worth the frustration.

Here are the key parts, as of December 2015:

  1. Your Spark cluster will need Hadoop version 2.x or greater. If you use the Spark EC2 setup scripts and may have missed it, the switch for using something other than Hadoop 1.0 is to specify --hadoop-major-version 2 (which uses CDH 4.2 as of this writing).

  2. You'll need to include what may at first seem to be an out-of-date AWS SDK library (built in 2014 as version 1.7.4) for Hadoop versions as late as 2.7.1 (stable): aws-java-sdk 1.7.4. As far as I can tell, using this along with the specific AWS SDK JARs for 1.10.8 hasn't broken anything.

  3. You'll also need the hadoop-aws 2.7.1 JAR on the classpath. This JAR contains the class org.apache.hadoop.fs.s3a.S3AFileSystem.

  4. In spark.properties you probably want some settings that look like this:

    spark.hadoop.fs.s3a.access.key=ACCESSKEY
    spark.hadoop.fs.s3a.secret.key=SECRETKEY

  5. If you are using Hadoop 2.7 with Spark, the AWS client uses V2 as the default auth signature, while all the newer AWS regions support only the V4 protocol. To use V4, pass these settings via spark-submit, and the endpoint (format: s3.<region>.amazonaws.com) must also be specified.

--conf "spark.executor.extraJavaOptions=-Dcom.amazonaws.services.s3.enableV4=true

--conf "spark.driver.extraJavaOptions=-Dcom.amazonaws.services.s3.enableV4=true

I've covered this list in greater detail in a post I wrote as I worked my way through the process. In addition, I've described the exception cases I hit along the way, what I believe to be the cause of each, and how to fix them.

Solution 2

I'm writing this answer for accessing files with s3a from Spark 2.0.1 on Hadoop 2.7.3.

Copy the AWS jars (hadoop-aws-2.7.3.jar and aws-java-sdk-1.7.4.jar) that ship with Hadoop by default

  • Hint: If you are unsure of the jar locations, running the find command as a privileged user can be helpful:

      find / -name "hadoop-aws*.jar"
      find / -name "aws-java-sdk*.jar"
    

into the Spark classpath, which holds all the Spark jars

  • Hint: I cannot point to the exact location directly (it must be set in the property file), as I want to keep this answer generic across distributions and Linux flavors. The Spark classpath can be identified by the find command below:

      find / -name "spark-core*.jar"
    

by declaring them in spark-defaults.conf

Hint: Mostly it will be located at /etc/spark/conf/spark-defaults.conf.

#make sure jars are added to CLASSPATH
spark.yarn.jars=file://{spark/home/dir}/jars/*.jar,file://{hadoop/install/dir}/share/hadoop/tools/lib/*.jar


spark.hadoop.fs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem  
spark.hadoop.fs.s3a.access.key={s3a.access.key} 
spark.hadoop.fs.s3a.secret.key={s3a.secret.key} 
#you can set the above 3 properties at the Hadoop level in core-site.xml as well, by removing the spark.hadoop. prefix.
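
For reference, the core-site.xml equivalent of those three properties (with placeholder key values) would look roughly like this:

<property>
  <name>fs.s3a.impl</name>
  <value>org.apache.hadoop.fs.s3a.S3AFileSystem</value>
</property>
<property>
  <name>fs.s3a.access.key</name>
  <value>ACCESSKEY</value>
</property>
<property>
  <name>fs.s3a.secret.key</name>
  <value>SECRETKEY</value>
</property>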

In spark-submit, include the jars (aws-java-sdk and hadoop-aws) on --driver-class-path if needed.

spark-submit --master yarn \
  --driver-class-path {spark/jars/home/dir}/aws-java-sdk-1.7.4.jar:{spark/jars/home/dir}/hadoop-aws-2.7.3.jar \
  other options

Note:

Make sure the Linux user has read privileges before running the find command, to prevent Permission denied errors.
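
Once the jars and properties are in place, a quick way to verify the setup is to read something back from a bucket in spark-shell. The bucket and object path below are placeholders, not part of the original answer:

// spark-shell (Spark 2.x); replace the bucket and path with your own
val lines = spark.read.textFile("s3a://your-bucket/some/prefix/file.txt")
lines.show(5, truncate = false)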

Solution 3

I got it working using the Spark 1.4.1 prebuilt binary with Hadoop 2.6. Make sure you set both spark.driver.extraClassPath and spark.executor.extraClassPath to point to the two jars (hadoop-aws and aws-java-sdk). If you run on a cluster, make sure your executors have access to the jar files on the cluster.
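
For example, in spark-defaults.conf this could look roughly like the following; the jar locations and exact versions are placeholders that depend on where you put the jars and which Hadoop build you use:

spark.driver.extraClassPath    /path/to/hadoop-aws-2.6.0.jar:/path/to/aws-java-sdk-1.7.4.jar
spark.executor.extraClassPath  /path/to/hadoop-aws-2.6.0.jar:/path/to/aws-java-sdk-1.7.4.jar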

Solution 4

We're using Spark 1.6.1 with Mesos, and we were getting lots of issues writing to S3 from Spark. I give credit to cfeduke for the answer. The slight change I made was adding Maven coordinates to the spark.jars.packages setting in the spark-defaults.conf file. I tried with hadoop-aws:2.7.2 but was still getting lots of errors, so we went back to 2.7.1. Below are the changes in spark-defaults.conf that are working for us:

spark.jars.packages             net.java.dev.jets3t:jets3t:0.9.0,com.google.guava:guava:16.0.1,com.amazonaws:aws-java-sdk:1.7.4,org.apache.hadoop:hadoop-aws:2.7.1
spark.hadoop.fs.s3a.access.key  <MY ACCESS KEY>
spark.hadoop.fs.s3a.secret.key  <MY SECRET KEY>
spark.hadoop.fs.s3a.fast.upload true
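
The same Maven coordinates can also be supplied per job on the command line with --packages instead of spark-defaults.conf, if that fits your deployment better:

spark-submit \
  --packages net.java.dev.jets3t:jets3t:0.9.0,com.google.guava:guava:16.0.1,com.amazonaws:aws-java-sdk:1.7.4,org.apache.hadoop:hadoop-aws:2.7.1 \
  other options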

Thank you cfeduke for taking the time to write up your post. It was very helpful.

Solution 5

Here are the details as of October 2016, as presented at Spark Summit EU: Apache Spark and Object Stores.

Key points

  • The direct output committer is gone from Spark 2.0 due to the risk (and experience) of data corruption.
  • There are some settings on the FileOutputCommitter to reduce renames, but they do not eliminate them.
  • I'm working with some colleagues on an O(1) committer, relying on Amazon DynamoDB to give us the consistency we need.
  • To use s3a, get your classpath right.
  • And be on Hadoop 2.7.z; 2.6.x had some problems which were addressed by HADOOP-11571.
  • There's a PR under SPARK-7481 to pull everything into a Spark distro you build yourself. Otherwise, ask whoever supplies the binaries to do the work.
  • Hadoop 2.8 is going to add major performance improvements (HADOOP-11694).

Product placement: the read-performance side of HADOOP-11694 is included in HDP 2.5; the Spark and S3 documentation there might be of interest, especially the tuning options.
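
As a rough illustration of the kind of s3a tuning options referred to above (these are not taken from the linked documentation; the property names are standard Hadoop s3a settings, but check availability and defaults for your Hadoop version):

spark.hadoop.fs.s3a.connection.maximum  100
spark.hadoop.fs.s3a.fast.upload         true
spark.hadoop.fs.s3a.multipart.size      104857600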


Comments

  • tribbloid (about 3 years ago):

    Hadoop 2.6 doesn't support s3a out of the box, so I've tried a series of solutions and fixes, including:

    deploy with hadoop-aws and aws-java-sdk => cannot read environment variable for credentials
    add hadoop-aws into maven => various transitive dependency conflicts

    Has anyone successfully made both work?