Pass command line arguments via sbatch


Solution 1

The lines starting with #SBATCH are not interpreted by bash; they are read by sbatch as job options. Those options do not expand shell variables such as $1 (only replacement symbols like %j and a few others are supported; replacing $1 with %1 will not work). As long as you don't have several of these sbatch jobs running in parallel, you could try

#!/bin/bash

touch outFile${1}.txt errFile${1}.txt
rm link_out.sbatch link_err.sbatch 2>/dev/null # remove links from previous runs
ln -s outFile${1}.txt link_out.sbatch
ln -s errFile${1}.txt link_err.sbatch

#SBATCH -o link_out.sbatch
#SBATCH -e link_err.sbatch

hostname
# I do not know about the background processing of sbatch: is the job still
# running at this point? If it is, you cannot delete the temporary symlinks yet.

exit 0

Alternative: as you said in a comment yourself, you could make a master script. This script can contain lines like

sed -e 's/File.txt/File'"$1"'.txt/' exampleJob.sh.template > exampleJob.sh
# I do not know whether sbatch needs the submitted script to be executable:
chmod +x exampleJob.sh

In your template, the #SBATCH lines would look like

#SBATCH -o "outFile.txt"
#SBATCH -e "errFile.txt"
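
Put together, a master script along these lines would generate a per-run copy from the template and submit it. This is only a sketch; the names submit.sh and exampleJob.sh.template are assumptions for illustration, and you would run it as bash submit.sh 1:

#!/bin/bash
# submit.sh -- sketch of the master-script alternative: substitute the
# suffix into a copy of the template, then hand the generated script to sbatch.
suffix="$1"
sed -e "s/File.txt/File${suffix}.txt/" exampleJob.sh.template > "exampleJob${suffix}.sh"
chmod +x "exampleJob${suffix}.sh"   # probably not required by sbatch
sbatch -D "$(pwd)" "exampleJob${suffix}.sh"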

Solution 2

I thought I'd offer some insight because I was also looking for the replacement for the -v option in qsub, which for sbatch can be accomplished with the --export option. I found a nice site that shows a list of conversions from Torque to Slurm, and it made the transition much smoother.

You can export the environment variable ahead of time in your shell and pass its name to --export (note that the option has to come before the script name):

$ export var_name='1'
$ sbatch --export=var_name -D `pwd` exampleJob.sh

Or define it directly within the sbatch command, just like qsub allowed:

$ sbatch --export=var_name='1' -D `pwd` exampleJob.sh

Whether this works inside the #SBATCH directives of exampleJob.sh is another question, but I assume it gives the same functionality found in Torque.
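
As a minimal sketch (the script body below is an assumption, not part of this answer), the exported variable can then be used in the commands of exampleJob.sh, though not inside its #SBATCH lines, since sbatch does not expand shell variables in directives. Reed Espinosa's comment below also suggests --export=ALL,var_name='1' to keep the rest of the submitting environment:

#!/bin/bash
#SBATCH -J export-example
# exampleJob.sh -- sketch: var_name is expected to arrive via sbatch --export.
# It can be referenced in the commands below, but not in the #SBATCH lines.
hostname > "outFile${var_name}.txt" 2> "errFile${var_name}.txt"
exit 0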

Solution 3

If you pass the options on the command line, you can actually bypass the issue of not being able to use command-line arguments in the batch script. So, for instance, at the command line:

var1="my_error_file.txt"
var2="my_output_file.txt"
sbatch --error=$var1 --output=$var2 batch_script.sh
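
For several runs this fits naturally in a small loop; the suffix values below are only an illustration of the idea:

#!/bin/bash
# Sketch: submit the same batch script several times, choosing the log
# files on the sbatch command line instead of in #SBATCH directives.
for suffix in 1 2 3; do
    sbatch --error="errFile${suffix}.txt" --output="outFile${suffix}.txt" batch_script.sh
done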

Solution 4

Using a wrapper is more convenient. I found this solution in this thread.

Basically, the problem is that the #SBATCH directives are seen as comments by the shell, so you can't use the passed arguments in them. Instead, you can use a here document to feed your bash script to sbatch after the arguments have been substituted.

For your case, you can replace the shell script file with this:

#!/bin/bash
sbatch <<EOT
#!/bin/bash

#SBATCH -o "outFile"$1".txt"
#SBATCH -e "errFile"$1".txt"

hostname

exit 0
EOT

And you run the shell script like this:

bash [script_name].sh [suffix]

The outputs will be saved to outFile[suffix].txt and errFile[suffix].txt.
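
The same expansion also works for commands inside the here document, not only the directives. The sketch below adds an echo line purely for illustration; note that anything you want evaluated on the compute node rather than at submission time, such as $(hostname) here, has to be escaped with a backslash so the outer shell leaves it alone:

#!/bin/bash
# Sketch: $1 is expanded by the submitting shell before sbatch reads the
# script from stdin, so it can appear in directives and in commands alike.
sbatch <<EOT
#!/bin/bash
#SBATCH -o "outFile"$1".txt"
#SBATCH -e "errFile"$1".txt"
echo "suffix $1 running on \$(hostname)"
exit 0
EOT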

Solution 5

Something like this works for me with Torque:

echo "$(pwd)/slurm.qsub 1" | qsub -S /bin/bash -N Slurm-TEST

slurm.qsub:

#!/bin/bash
hostname > outFile${1}.txt 2>errFile${1}.txt
exit 0
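
For Slurm, the closest analogue, in line with damienfrancois's comment below about feeding the script to sbatch through stdin, would be something like the following sketch; the job name and the use of printf are assumptions:

# Sketch: build a tiny script on the fly and pipe it to sbatch, so the
# submitting shell expands the argument before sbatch parses the script.
printf '#!/bin/bash\nbash %s/slurm.qsub 1\n' "$(pwd)" | sbatch -J Slurm-TEST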

Comments

  • Jason, almost 2 years ago

    Suppose that I have the following simple bash script which I want to submit to a batch server through SLURM:

    #!/bin/bash
    
    #SBATCH -o "outFile"$1".txt"
    #SBATCH -e "errFile"$1".txt"
    
    hostname
    
    exit 0
    

    In this script, I simply want to write the output of hostname to a text file whose full name I control via the command line, like so:

    login-2:jobs$ sbatch -D `pwd` exampleJob.sh 1
    Submitted batch job 203775
    

    Unfortunately, it seems that my last command-line argument (1) is not parsed through sbatch, since the files created do not have the suffix I'm looking for and the string "$1" is interpreted literally:

    login-2:jobs$ ls
    errFile$1.txt  exampleJob.sh outFile$1.txt
    

    I've looked around on SO and elsewhere, but I haven't had any luck. Essentially what I'm looking for is the equivalent of the -v switch of the qsub utility on Torque-enabled clusters.

    Edit: As mentioned in the comment thread below, I solved my problem the hard way: instead of submitting one single script several times to the batch server, each time with different command-line arguments, I created a "master script" that echoed and redirected the same content into different scripts, changing the content of each according to the command-line parameter passed. Then I submitted all of those to my batch server through sbatch. However, this does not answer the original question, so I hesitate to add it as an answer or mark the question solved.

  • Jason, over 9 years ago
    Thank you! Essentially this solution allows sbatch's code to operate on soft links, but the links themselves point to actual files created by the portion of the shell script that can properly parse command line arguments. I will accept this as my answer.
  • damienfrancois, about 8 years ago
    This will not work, as #SBATCH parameters must appear before any other command; otherwise they are ignored.
  • Walter A, about 8 years ago
    @damienfrancois You might be right; I have not actually tried it with Slurm. I looked at your link, and I think my script should be split into a part used by Slurm (with only #SBATCH lines) and a master script that calls Slurm. That master script would have to detach itself from its child and would not know when to clean up the temporary links. So I guess the alternative solution (with a template) I gave works better. Do you know whether you can put shell variables in a <filename pattern>? Something like --output=File${some_var}.out or --output=File${myshell.sh}.out?
  • damienfrancois, about 8 years ago
    Only if you pass the submission script to sbatch through stdin rather than supplying it as an argument.
  • Patrick, about 5 years ago
    This is the correct answer. Simply add the first two lines above and the final "EOT". You can even continue to submit your script using sbatch rather than bash.
  • icemtel, over 4 years ago
    Worked perfectly for me
  • Josh, almost 4 years ago
    To save others the troubleshooting: --export=var_name on its own didn't work for me; I had to use --export=var_name='1'. Also, I had to put the --export option before the path to my script, not after it.
  • Yacine Mahdid, almost 4 years ago
    This worked for me: sbatch -D `pwd` --export=var_name=$var_name exampleJob.sh
  • Reed Espinosa, over 3 years ago
    What worked for me was sbatch --export=ALL,var_name='1' exampleJob.sh. Without ALL, SLURM creates a new environment, separate from the user's environment.
  • josh, about 3 years ago
    @Josh thank you, putting the flag before the script path worked for me too!