How do I use HDFS with EMR?


You can use HDFS input and output paths directly, e.g. hdfs:///input/.

Say you have a job added to a cluster as follows:

elastic-mapreduce -j $jobflow --jar s3://my-jar-location/myjar.jar --arg s3://input --arg s3://output

instead you can have it as follows if you need it to be on hdfs:

elastic-mapreduce -j $jobflow --jar s3://my-jar-location/myjar.jar --arg hdfs:///input --arg hdfs:///output
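
If you are using the newer AWS CLI rather than the old Ruby elastic-mapreduce client, a roughly equivalent step would be (the cluster ID here is a placeholder, and the jar location is the same hypothetical one as above):

aws emr add-steps --cluster-id j-XXXXXXXXXXXXX --steps Type=CUSTOM_JAR,Name=MyJob,Jar=s3://my-jar-location/myjar.jar,Args=[hdfs:///input,hdfs:///output]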

To interact with HDFS on an EMR cluster, SSH to the master node and run the usual HDFS commands. For example, to fetch an output file you might do:

hadoop fs -get hdfs:///output/part-r-00000 /home/ec2-user/firstPartOutputFile
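
The other usual hadoop fs commands work the same way from the master node, for instance (paths are just examples):

hadoop fs -ls /output                  # list the job's output files
hadoop fs -cat /output/part-r-00000    # print one part file to the terminal
hadoop fs -put localdata.txt /input/   # copy a local file into HDFS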

But if you are working with transient clusters, relying on in-cluster HDFS is discouraged: you will lose the data when the cluster is terminated.
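
If you do need to keep HDFS output from a transient cluster, copy it to S3 before the cluster terminates, for example with EMR's s3-dist-cp tool (the bucket name below is a placeholder):

s3-dist-cp --src hdfs:///output --dest s3://my-bucket/output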

Also, I have benchmarks showing that using S3 or HDFS doesn't make much of a performance difference. For a workload of ~200 GB:

  • The job finished in 22 seconds with S3 as the input source
  • The job finished in 20 seconds with HDFS as the input source

EMR is heavily optimized to read and write data from/to S3.

For intermediate steps' output, writing to HDFS is best. So if you have three steps in your pipeline, you might lay out input/output as follows (a sketch of the corresponding commands follows the list):

  • Step 1: Input from S3, Output in HDFS
  • Step 2: Input from HDFS, Output in HDFS
  • Step 3: Input from HDFS, Output in S3
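
As a rough sketch in the same elastic-mapreduce syntax used above (jar names and paths are placeholders), those three steps could be added like this:

elastic-mapreduce -j $jobflow --jar s3://my-jars/step1.jar --arg s3://my-bucket/input --arg hdfs:///step1-out
elastic-mapreduce -j $jobflow --jar s3://my-jars/step2.jar --arg hdfs:///step1-out --arg hdfs:///step2-out
elastic-mapreduce -j $jobflow --jar s3://my-jars/step3.jar --arg hdfs:///step2-out --arg s3://my-bucket/output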

Comments

  • Admin, almost 2 years ago

    I feel that connecting EMR to Amazon S3 is highly unreliable because of the dependency on network speed.

    I can only find links for describing an S3 location. I want to use EMR with HDFS - how do I do this?

  • member555, over 8 years ago
    After each round, I get a lot of output files; how can I cat them into one file before proceeding to the next step? Also, how can I change the S3/HDFS block size with Amazon EMR? I'm using the console; should I move to the Amazon CLI?