Executing script remotely with "curl | bash" feedback
Solution 1
As @Zoredache points out, ssh relays the status of the remote command as its own exit status, so error detection works transparently over SSH. However, two important points require special consideration in your example.
First, curl tends to be very lenient, treating many abnormal conditions as success. For example, curl http://serverfault.com/some-non-existent-url-that-returns-404 actually has an exit status of 0. I find this behavior counterintuitive. To treat those conditions as errors, I like to use the -fsS flags:
- The --fail flag suppresses the output when a failure occurs, so that bash won't get a chance to execute the web server's 404 error page as if it were code.
- The --silent --show-error flags, together, provide a reasonable amount of error reporting. --silent suppresses all commentary from curl. --show-error re-enables error messages, which are sent to STDERR.
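To see locally why --fail matters (no network or curl required; the HTML below is a stand-in for a typical 404 error page), consider what happens when an error page reaches bash:

```shell
# Simulate what happens when curl, run WITHOUT --fail, pipes a
# web server's 404 error page straight into bash: the HTML is not
# valid shell syntax, so bash reports an error instead of running
# the script you intended. (With --fail, curl would suppress this
# output entirely and return a non-zero status itself.)
echo '<html><body>404 Not Found</body></html>' | bash 2>/dev/null
echo "bash exit status: $?"   # non-zero: the error page is not code
```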
Second, you have a pipe, which means that a failure could occur in either the first or the second command. From the section about Pipelines in bash(1):
The return status of a pipeline is the exit status of the last command, unless the pipefail option is enabled (see The Set Builtin). If pipefail is enabled, the pipeline's return status is the value of the last (rightmost) command to exit with a non-zero status, or zero if all commands exit successfully.
Side note: The bash documentation is relevant not because you pipe to bash, but because (I assume) it is your remote user's login shell, and would therefore be the program that interprets the remote command line and handles the execution of the pipeline. If the user has a different login shell, then refer to that shell's documentation.
As a concrete example,

    ( echo whoami ; false ) | bash
    echo $?

yields the output

    login
    0

demonstrating that the bash at the end of the pipeline will mask the error status returned by false. It will return 0 as long as it successfully executes whoami.
In contrast,

    set -o pipefail
    ( echo whoami ; false ) | bash
    echo $?

yields

    login
    1

so that the failure in the first half of the pipeline is reported.
Putting it all together, then, the solution should be

    ssh [email protected] 'set -o pipefail ; curl -fsS http://some_server/script.sh | bash'

(note that it is set -o pipefail, not set -s).
That way, you will get a non-zero exit status if any of the following returns non-zero:

- ssh
- The remote login shell
- curl
- The bash at the end of the pipeline
Furthermore, if curl -fsS detects an abnormal HTTP status code, then it will:

- suppress its STDOUT, so that nothing will get piped to bash to be executed
- return a non-zero value which is properly propagated all the way
- print a one-line diagnostic message to its STDERR, which is also propagated all the way
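The end-to-end behavior can be sanity-checked locally without a server. In this sketch, `false` stands in for a `curl -fsS` call that fails (it prints nothing and exits non-zero, just as curl would on an error with those flags):

```shell
# The same pipeline the answer builds, with the network parts
# replaced by a stand-in: `false` plays the failing curl -fsS,
# and pipefail makes the pipeline report its failure rather than
# the success of the empty-input bash at the end.
bash -c 'set -o pipefail; false | bash'
echo "pipeline exit status: $?"   # 1, the status of the failing stand-in
```

Without pipefail, the same pipeline would report 0, because the trailing bash happily executes its empty input.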
Solution 2
That's a horrible hack. If you want remote execution, use something that does remote execution properly, such as func or mcollective.
Solution 3
When ssh returns, it should emit the exit code from the script.

Try ssh user@host 'echo "exit 2" | bash' ; echo $? — you should see a value of 2 returned.

Just write lots of good error-checking into your script, and make sure it exits with useful error messages and non-zero exit codes for any errors.
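As a minimal sketch of that advice (check_dir and the specific codes are hypothetical, chosen only for illustration — they are not a standard):

```shell
# Fail fast, and give each failure mode its own documented exit
# code so that the caller (including ssh) can tell them apart.
set -euo pipefail             # abort on errors, unset vars, pipe failures

# check_dir: hypothetical helper returning a distinct code per
# failure mode, with a one-line diagnostic on STDERR.
check_dir() {
    local dir=$1
    if [ ! -e "$dir" ]; then
        echo "error: $dir does not exist" >&2
        return 2              # code 2: missing path
    fi
    if [ ! -d "$dir" ]; then
        echo "error: $dir is not a directory" >&2
        return 3              # code 3: exists but wrong type
    fi
    echo "ok: $dir"
}
```

A caller can then branch on the exit code, and ssh will relay whichever code the script finally exits with.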
Yriuns
Updated on September 18, 2022

Comments
-
Yriuns almost 2 years
I expect these two goroutines to block forever for the reasons below, but they don't. Why?

The channel has no buffer and will be waiting for receive() to receive. send() holds the lock, so num := <-s.ch in receive() has no chance to execute. Block forever.

What's wrong?

    package main

    import (
        "fmt"
        "sync"
    )

    type S struct {
        mu sync.Mutex
        ch chan int
        wg sync.WaitGroup
    }

    func (s *S) send() {
        s.mu.Lock()
        s.ch <- 5
        s.mu.Unlock()
        s.wg.Done()
    }

    func (s *S) receive() {
        num := <-s.ch
        fmt.Printf("%d\n", num)
        s.wg.Done()
    }

    func main() {
        s := new(S)
        s.ch = make(chan int)
        s.wg.Add(2)
        go s.send()
        go s.receive()
        s.wg.Wait()
    }
- Zoredache about 11 years: It certainly isn't perfect, but it is useful. I use a similar method to bootstrap my configuration management system. You certainly don't want to use that for everything, but it does have its uses.
- rmonjo about 11 years: Tend to agree with @Zoredache; however, interested to hear why this is a horrible hack?
- Zoredache about 11 years: @user1437126, it is hacky if you don't have strong security in your setup. There are lots of ways that curl url | bash can fail and result in you trashing your system. If you run curl http://remote | bash, what happens if someone manages to MITM you and replaces what you were expecting on the remote with a script that does rm -rf /? What happens if the remote system is down?
- Dennis Kaarsemaker about 11 years: Bootstrapping your config management should really be done by however you install your systems, e.g. kickstart for Red Hat. But OK, as far as reasons to use this horrible hack go, this one is about the only one I can agree with :)
- rmonjo about 11 years: OK, I see your point. In my setup, the machine hosting the scripts is the one doing the ssh call. Access to the scripts is secured by a token, and ssh communication does the rest. We chose this solution since maintaining script files is way easier than hardcoding scripts in code (passed to the remote server as strings in the ssh command).
- rmonjo about 11 years: Wow, awesome, thank you very much! This is exactly what I needed!
- rmonjo about 11 years: set -o pipefail works fine; however, shopt -o pipefail doesn't set the exit code of the curl command.
- Yriuns about 6 years: The variable s is locked by the send() function, and receive() tries to read the variable from s.ch, so it has to wait for the lock to be unlocked. Do I misunderstand the Mutex?