Issue Running Spark Job on YARN Cluster


Solution 1

This error can mean many things. In our case, we got a similar error message because of an unsupported Java class version, and we fixed the problem by deleting the referenced Java class from our project.
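The "unsupported class version" case can be checked directly, since every Java class file records the bytecode version it was compiled for in its header (magic number 0xCAFEBABE, then minor and major version as big-endian 16-bit integers). A minimal sketch of such a check (the function name is mine, not from any library):

```python
import struct

def class_major_version(header: bytes) -> int:
    """Return the JVM class-file major version from the first 8 bytes.

    Class files start with the magic 0xCAFEBABE, followed by the minor
    and major version as big-endian unsigned 16-bit integers.
    Major 52 = Java 8, 51 = Java 7, 50 = Java 6.
    """
    magic, minor, major = struct.unpack(">IHH", header[:8])
    if magic != 0xCAFEBABE:
        raise ValueError("not a Java class file")
    return major

# A synthetic header for illustration: 0x34 == 52, i.e. Java 8 bytecode.
print(class_major_version(bytes.fromhex("cafebabe00000034")))  # -> 52
```

If a class in your jar reports a higher major version than the JVM your YARN nodes run, the container will fail at class-loading time.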

Use this command to see the detailed error message:

yarn logs -applicationId application_1424284032717_0066

Solution 2

You should remove `.setMaster("local")` from the code; properties set directly on the SparkConf in the application take precedence over the `--master yarn-cluster` flag you pass to spark-submit, so the hard-coded value overrides it.

Solution 3

The command looks correct.

What I've come across is that exit code 15 normally indicates a TableNotFoundException. That usually means there's an error in the code you're submitting.

You can check this by visiting the tracking URL.
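The same information shown at the tracking URL is also exposed by the YARN ResourceManager REST API (`/ws/v1/cluster/apps/{appid}`). A minimal sketch for pulling the application's state and diagnostics string; the helper names are mine, and port 8088 is assumed from the tracking URL above (adjust it to your RM web port):

```python
import json
from urllib.request import urlopen

def app_report_url(rm_host: str, app_id: str) -> str:
    """Build the ResourceManager REST endpoint for one application."""
    return f"http://{rm_host}:8088/ws/v1/cluster/apps/{app_id}"

def app_diagnostics(rm_host: str, app_id: str) -> str:
    """Fetch state and diagnostics for an application (needs a live RM)."""
    with urlopen(app_report_url(rm_host, app_id)) as resp:
        app = json.load(resp)["app"]
        return f'{app["state"]}: {app.get("diagnostics", "")}'

print(app_report_url("myhostname", "application_1424284032717_0066"))
```

For the failure above, the diagnostics field would contain the same "exited with exitCode: 15" text as the console output.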

Solution 4

For me, the exit code issue was solved by placing hive-site.xml in Spark's conf directory.
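If the job reads Hive tables, that hive-site.xml needs at minimum the metastore location so Spark can resolve table names (which also explains the TableNotFoundException from Solution 3). A minimal sketch, where the thrift host and port are placeholders for your environment:

```xml
<configuration>
  <!-- Location of the Hive metastore service; host/port are placeholders. -->
  <property>
    <name>hive.metastore.uris</name>
    <value>thrift://metastore-host:9083</value>
  </property>
</configuration>
```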

Author: Sachin Singh (Engineer by profession, Java coder by passion)

Updated on July 05, 2022

Comments

  • Sachin Singh, almost 2 years ago

    I want to run my Spark job on a Hadoop YARN cluster in cluster mode, and I am using the following command:

    spark-submit --master yarn-cluster \
                 --driver-memory 1g \
                 --executor-memory 1g \
                 --executor-cores 1 \
                 --class com.dc.analysis.jobs.AggregationJob \
                 sparkanalitic.jar param1 param2 param3
    

    I am getting the error below. Kindly suggest what's going wrong, and whether the command is correct or not. I am using CDH 5.3.1.

    Diagnostics: Application application_1424284032717_0066 failed 2 times due 
    to AM Container for appattempt_1424284032717_0066_000002 exited with  
    exitCode: 15 due to: Exception from container-launch.
    
    Container id: container_1424284032717_0066_02_000001
    Exit code: 15
    Stack trace: ExitCodeException exitCode=15: 
        at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
        at org.apache.hadoop.util.Shell.run(Shell.java:455)
        at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:702)
        at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:197)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:299)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)  
    
    Container exited with a non-zero exit code 15
    .Failing this attempt.. Failing the application.
         ApplicationMaster host: N/A
         ApplicationMaster RPC port: -1
         queue: root.hdfs
         start time: 1424699723648
         final status: FAILED
         tracking URL: http://myhostname:8088/cluster/app/application_1424284032717_0066
         user: hdfs
    
    2015-02-23 19:26:04 DEBUG Client - stopping client from cache: org.apache.hadoop.ipc.Client@4085f1ac
    2015-02-23 19:26:04 DEBUG Utils - Shutdown hook called
    2015-02-23 19:26:05 DEBUG Utils - Shutdown hook called
    

    Any help would be greatly appreciated.