Shell script to move files into a hadoop cluster
Is the &file a typo on the hadoop dfs -put line?
If not, then this is likely your problem: the ampersand runs the command hadoop dfs -put /var/log/qradar/ in the background, and the shell then executes a second command, file /user/qradar, which invokes the local file utility on /user/qradar and fails because that path does not exist on the local filesystem.
My guess is you meant the following (dollar sign rather than ampersand):
hadoop dfs -put /var/log/qradar/$file /user/qradar
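You can see the same splitting behavior with any command: text after the & starts a new command. A minimal sketch (the file name here is just illustrative, not from your logs):

```shell
#!/bin/bash
# The stray '&' splits what looks like one command into two:
#   cmd /path/&file arg    behaves like    cmd /path/ &   then   file arg
# so 'hadoop dfs -put /var/log/qradar/&file /user/qradar' backgrounds the
# put of the bare directory and then runs the local file(1) utility on
# /user/qradar, which produces the "cannot open" error you saw.

# With '$' the shell expands the variable into the path as intended:
file=qradar2012-10-01.log            # hypothetical log name
echo "/var/log/qradar/$file"         # prints /var/log/qradar/qradar2012-10-01.log
```

Quoting the expansion ("/var/log/qradar/$file") is also a good habit in case a file name ever contains spaces.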
user1730083
Updated on June 05, 2022

Comments
-
user1730083 almost 2 years
This may have been answered somewhere but I haven't found it yet.
I have a simple shell script that I'd like to use to move log files into my Hadoop cluster. The script will be called by Logrotate on a daily basis.
It fails with the following error: "/user/qradar: cannot open `/user/qradar' (No such file or directory)".
#!/bin/bash
#use today's date and time
day=$(date +%Y-%m-%d)
#change to log directory
cd /var/log/qradar
#move and add time date to file name
mv qradar.log qradar$day.log
#load file into variable
#copy file from local to hdfs cluster
if [ -f qradar$day.log ]
then
    file=qradar$day.log
    hadoop dfs -put /var/log/qradar/&file /user/qradar
else
    echo "failed to rename and move the file into the cluster" >> /var/log/messages
fi
The directory /user/qradar does exist and can be listed with the Hadoop file commands. I can also manually move the file into the correct directory using the Hadoop file commands. Can I move files into the cluster in this manner? Is there a better way?
Any thoughts and comments are welcome. Thanks
-
user1730083 over 11 years: Humbly, I say yes, it is a typo, and swapping the & for the $ fixed the issue. So simple and so right in front of me; many thanks for proofing.