HDFS write resulting in "CreateSymbolicLink error (1314): A required privilege is not held by the client."
Solution 1

Win 8.1 + Hadoop 2.7.0 (built from sources)

- run Command Prompt in admin mode
- execute etc\hadoop\hadoop-env.cmd
- run sbin\start-dfs.cmd
- run sbin\start-yarn.cmd
- now try to run your job
Solution 2
I recently ran into exactly the same problem. I tried reformatting the namenode, but it didn't work, and I believe that cannot solve the problem permanently. With the reference from @aoetalks, I solved this problem on Windows Server 2012 R2 by looking into Local Group Policy.
In conclusion, try the following steps:
- open Local Group Policy (press Win+R to open "Run...", type gpedit.msc)
- expand "Computer Configuration" - "Windows Settings" - "Security Settings" - "Local Policies" - "User Rights Assignment"
- find "Create symbolic links" on the right, and check whether your user is included. If not, add your user to it.
- this takes effect the next time you log in, so log out and log back in.
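Before (or after) touching Group Policy, you can verify whether your account actually holds the privilege. A minimal Python sketch (the helper name is my own; it triggers the same error 1314 that YARN hits when localizing container resources):

```python
import os
import tempfile

def can_create_symlinks():
    """Try to create a symlink in a temp dir; report whether it worked.

    On Windows, os.symlink fails with WinError 1314 ("A required
    privilege is not held by the client") when the account lacks the
    "Create symbolic links" right -- the same error YARN reports.
    """
    with tempfile.TemporaryDirectory() as d:
        target = os.path.join(d, "target.txt")
        open(target, "w").close()
        try:
            os.symlink(target, os.path.join(d, "link.txt"))
            return True
        except OSError as e:
            if getattr(e, "winerror", None) == 1314:
                return False
            raise

print(can_create_symlinks())  # prints True once the account holds the privilege
```

Run this as the same user that launches the NodeManager; if it prints False, the Group Policy change above has not taken effect yet.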
If this still doesn't work, it may be because you are using an Administrator account. In that case you'll have to disable "User Account Control: Run all administrators in Admin Approval Mode", found nearby in Group Policy under "Local Policies" - "Security Options", then restart the computer for the change to take effect.
Reference: https://superuser.com/questions/104845/permission-to-make-symbolic-links-in-windows-7
Solution 3
I encountered the same problem as you. We solved it by checking the Java environment.
- check java version and javac version
- ensure that every computer in the cluster has the same Java environment
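The version check above can be scripted across nodes; a rough Python sketch (the function names are my own, and the banner formats in the examples are the common Oracle/OpenJDK ones):

```python
import re
import subprocess

def parse_version(banner):
    """Pull a dotted version like 1.8.0_291 or 11.0.2 out of a banner line."""
    m = re.search(r"(\d+(?:[._]\d+)+)", banner)
    return m.group(1) if m else None

def tool_version(tool):
    """Return the version reported by `<tool> -version`.

    `java -version` prints its banner on stderr, while `javac -version`
    uses stdout (stderr on some older JDKs), so scan both streams.
    """
    proc = subprocess.run([tool, "-version"], capture_output=True, text=True)
    return parse_version(proc.stdout + proc.stderr)

# Run on every node: tool_version("java") should equal tool_version("javac"),
# and the value should match across all hosts in the cluster.
```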
Admin
Updated on June 04, 2022

Comments
- Admin, almost 2 years ago:
Tried to execute a sample MapReduce program from Apache Hadoop. Got the exception below when the MapReduce job was running. Tried hdfs dfs -chmod 777 / but that didn't fix the issue.

15/03/10 13:13:10 WARN mapreduce.JobSubmitter: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
15/03/10 13:13:10 WARN mapreduce.JobSubmitter: No job jar file set. User classes may not be found. See Job or Job#setJar(String).
15/03/10 13:13:10 INFO input.FileInputFormat: Total input paths to process : 2
15/03/10 13:13:11 INFO mapreduce.JobSubmitter: number of splits:2
15/03/10 13:13:11 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1425973278169_0001
15/03/10 13:13:12 INFO mapred.YARNRunner: Job jar is not present. Not adding any jar to the list of resources.
15/03/10 13:13:12 INFO impl.YarnClientImpl: Submitted application application_1425973278169_0001
15/03/10 13:13:12 INFO mapreduce.Job: The url to track the job: http://B2ML10803:8088/proxy/application_1425973278169_0001/
15/03/10 13:13:12 INFO mapreduce.Job: Running job: job_1425973278169_0001
15/03/10 13:13:18 INFO mapreduce.Job: Job job_1425973278169_0001 running in uber mode : false
15/03/10 13:13:18 INFO mapreduce.Job: map 0% reduce 0%
15/03/10 13:13:18 INFO mapreduce.Job: Job job_1425973278169_0001 failed with state FAILED due to: Application application_1425973278169_0001 failed 2 times due to AM Container for appattempt_1425973278169_0001_000002 exited with exitCode: 1
For more detailed output, check application tracking page: http://B2ML10803:8088/proxy/application_1425973278169_0001/ Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_1425973278169_0001_02_000001
Exit code: 1
Exception message: CreateSymbolicLink error (1314): A required privilege is not held by the client.
Stack trace:
ExitCodeException exitCode=1: CreateSymbolicLink error (1314): A required privilege is not held by the client.
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
    at org.apache.hadoop.util.Shell.run(Shell.java:455)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
    at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Shell output:
1 file(s) moved.
Container exited with a non-zero exit code 1
Failing this attempt. Failing the application.
15/03/10 13:13:18 INFO mapreduce.Job: Counters: 0
- Duong Trung Nghia, about 7 years ago: Win 10 + Hadoop 2.7.3. It works for me; at least the job gets started. But I encounter Error: java.lang.RuntimeException: Error in configuring object at org.apache.hadoop.xxx
- Dims, about 6 years ago: Should I run all jobs from an admin prompt?
- Costis Aivalis, over 2 years ago: Works also for Win 10 + Hadoop 3.2.1.