Hadoop is not showing my job in the job tracker even though it is running
Solution 1
The resolution was to configure the job with the extra configuration options for YARN. I had made the incorrect assumption that the Java hadoop-client API would pick up the configuration options from the configuration directory. I was able to diagnose the problem by turning on verbose logging via log4j.properties for my unit tests, which showed that the jobs were running locally rather than being submitted to the YARN ResourceManager. With a little trial and error I was able to configure the job so that it was submitted to the YARN ResourceManager.
Code
try {
    configuration.set("fs.defaultFS", "hdfs://127.0.0.1:9000");
    configuration.set("mapreduce.jobtracker.address", "localhost:54311");
    // This property is what makes the client submit to YARN instead of
    // running with the local job runner.
    configuration.set("mapreduce.framework.name", "yarn");
    configuration.set("yarn.resourcemanager.address", "localhost:8032");
    Job job = createJob(configuration);
    job.waitForCompletion(true);
} catch (Exception e) {
    logger.log(Level.SEVERE, "Unable to execute job", e);
}
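For reference, the verbose unit-test logging mentioned above can be enabled with a log4j.properties along these lines. This is only a sketch: the logger categories and pattern are assumptions, and you may want to narrow them once you see which classes log the local-vs-YARN decision.

```
# Root logger: send everything at DEBUG and above to the console
log4j.rootLogger=DEBUG, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{ISO8601} %-5p %c - %m%n

# These categories show whether the job runs locally or is submitted to YARN
log4j.logger.org.apache.hadoop.mapreduce=DEBUG
log4j.logger.org.apache.hadoop.yarn=DEBUG
```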
Solution 2
I see that you are using Hadoop 2.2.0. Are you using MRv1 or MRv2? The daemons are different for MRv2 (YARN). There is no JobTracker for MRv2, though you may see a placeholder page for the JobTracker UI.
The ResourceManager web UI should display your submitted jobs. The default web URL for the ResourceManager is http://&lt;ResourceManagerHost&gt;:8088
Replace ResourceManagerHost with the hostname or IP address of the node where the ResourceManager is running.
You can read more about the YARN architecture at Apache Hadoop YARN.
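Besides the web UI, the YARN command line can also confirm whether an application was ever submitted. This is a sketch that assumes the yarn binary is on your PATH and the cluster is up; the application ID shown is a hypothetical placeholder.

```
# List applications the ResourceManager knows about
yarn application -list

# Fetch the aggregated logs for one application after it finishes
yarn logs -applicationId <application-id>
```

If a job submitted through the hadoop-client API never appears in this list, it is almost certainly running with the local job runner rather than on YARN.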
Chris Hinshaw
Updated on July 25, 2022

Comments
-
Chris Hinshaw almost 2 years
Problem: When I submit a job to my Hadoop 2.2.0 cluster it doesn't show up in the job tracker, but the job completes successfully: I can see the output, and it prints correctly as the job runs.
I have tried multiple options, but the job tracker never sees the job. If I run a streaming job using Hadoop 2.2.0 it shows up in the task tracker, but when I submit a job via the hadoop-client API it does not show up in the job tracker. I am looking at the UI on port 8088 to verify the job.
Environment OSX Mavericks, Java 1.6, Hadoop 2.2.0 single node cluster, Tomcat 7.0.47
Code
try {
    configuration.set("fs.defaultFS", "hdfs://127.0.0.1:9000");
    configuration.set("mapred.jobtracker.address", "localhost:9001");
    Job job = createJob(configuration);
    job.waitForCompletion(true);
} catch (Exception e) {
    logger.log(Level.SEVERE, "Unable to execute job", e);
}
return null;
etc/hadoop/mapred-site.xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
</configuration>
etc/hadoop/core-site.xml
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/tmp/hadoop-${user.name}</value>
    <description>A base for other temporary directories.</description>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
-
Chris Hinshaw over 10 years I am using the MRv2 API. I am also checking the ResourceManager at localhost:8088/cluster/apps/SUBMITTED. That is where I do not see my job being submitted with the MRv2 API, but if I submit a simple streaming job it shows up in the submitted apps. I guess the question is: in MRv2, how do I view my jobs and job history? I am off to read the YARN docs. Thanks for your feedback.
-
amoe over 9 years I don't think that mapreduce.job.tracker is a real Hadoop property. -
AdrieanKhisbe over 9 years @amoe, it was in 1.x. It was replaced by mapreduce.jobtracker.address in 2.x. -
amoe over 9 years @AdrieanKhisbe, not trying to be pedantic, but mapreduce.job.tracker doesn't seem to exist at all. Try putting it into Google (in quotes). -
AdrieanKhisbe over 9 years My bad, it seems my brain dropped the "uce" while reading your comment. -
Chris Hinshaw over 9 years I fixed it; I am sure it was a typo on my part. It probably still worked because I was also reading configurations in from an XML config file.
-
Raghav over 7 years I have the same issue. Did you find a resolution? @ChrisHinshaw
-
Chris Hinshaw over 7 years @Raghav If you read my answer, it tells you exactly what happened. Check the answer that I posted.
-
marcramser about 5 years If you are using WebHDFS, also check out stackoverflow.com/questions/39637326/accessing-hdfs-remotedly