Can I pipe stdout on one server to stdin on another server?
Solution 1
This is an unabashed yes. When one uses `ssh` to execute a command on a remote server, it performs some kind of fancy internal input/output redirection. In fact, I find this to be one of the subtly nicer features of OpenSSH. Specifically, if you use `ssh` to execute an arbitrary command on a remote system, then `ssh` will map STDIN and STDOUT to those of the command being executed.
For the purposes of an example, let's assume you want to create a backup tarball, but don't want to, or can't, store it locally. Let's have a gander at this syntax:
$ tar -cf - /path/to/backup/dir | ssh remotehost "cat - > backupfile.tar"
We're creating a tarball and writing it to STDOUT, normal stuff. Since we're using `ssh` to execute a remote command, STDIN gets mapped to the STDIN of `cat`, which we then redirect to a file.
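The stdin/stdout mapping can be rehearsed locally, with a plain pipe standing in for the ssh hop. A minimal sketch, assuming nothing beyond a POSIX shell; the `/tmp` paths are invented for the demo:

```shell
# Local sketch of the same pipeline: the subshell plays the role of the
# remote `cat - > backupfile.tar`, and a plain pipe stands in for ssh.
mkdir -p /tmp/demo_src
echo "hello" > /tmp/demo_src/file.txt
tar -cf - -C /tmp demo_src | sh -c 'cat - > /tmp/backupfile.tar'
tar -tf /tmp/backupfile.tar    # lists demo_src/ and demo_src/file.txt
```

Swap the `sh -c` for `ssh remotehost` and the behavior is the same, only across the wire.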
Solution 2
A convenient way of piping data between hosts when you don't need to worry about security over the wire is using `netcat` on both ends of the connection.
This also lets you set them up asynchronously:
On the "receiver" (really, you'll have two-way communication, but it's easier to think of it like this), run:
nc -l -p 5000 > /path/to/backupfile.tar
And on the "sender", run:
tar cf - /path/to/dir | nc 1.2.3.4 5000
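The receiver/sender split can be rehearsed without a network at all. In this sketch a named pipe stands in for the TCP connection (the port, address, and paths in the commands above are placeholders anyway):

```shell
# Simulate the nc pair with a FIFO standing in for the TCP socket.
# Start the "receiver" first, exactly as you would with nc -l.
rm -f /tmp/wire && mkfifo /tmp/wire
cat /tmp/wire > /tmp/nc_backup.tar &    # receiver: nc -l -p 5000 > backupfile.tar
mkdir -p /tmp/nc_src && echo data > /tmp/nc_src/f
tar cf - -C /tmp nc_src > /tmp/wire     # sender: tar cf - dir | nc 1.2.3.4 5000
wait                                    # wait for the receiver to drain the pipe
```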
Solution 3
A very powerful tool for creating uni- and bidirectional connections is `socat`. For a short look at the possibilities, look at the examples in its manpage. It completely replaces `netcat` and similar tools, and it has support for SSL-encrypted connections. For beginners it might not be simple enough, but it is at least good to know that it exists.
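As a sketch of the netcat-style transfer done with socat: a one-way loopback demo, where port 15000 and the `/tmp` paths are arbitrary choices, and the block skips itself if `socat` isn't installed:

```shell
# One-way (-u) transfer over a loopback TCP connection via socat;
# bail out gracefully when socat is not available.
command -v socat >/dev/null 2>&1 || { echo "socat not installed"; exit 0; }
mkdir -p /tmp/socat_src && echo payload > /tmp/socat_src/f
socat -u TCP-LISTEN:15000,reuseaddr OPEN:/tmp/socat_out.tar,creat &   # receiver
sleep 1                                                               # let it bind
tar cf - -C /tmp socat_src | socat -u STDIN TCP:127.0.0.1:15000       # sender
wait
```

Adding encryption is then mostly a matter of swapping the `TCP-LISTEN:`/`TCP:` addresses for their `OPENSSL-LISTEN:`/`OPENSSL:` counterparts.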
Solution 4
TL;DR
Things only get slightly more complicated when you have a bastion server that must be used.
You can pass `ssh` as the command to `ssh` like so:
cat local_script.sh | ssh -A usera@bastion ssh -A userb@privateserver "cat > remote_copy_of_local_script.sh; bash remote_copy_of_local_script.sh"
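Stripped of the two ssh hops, the trick is just streaming a script through stdin, saving it on the far side, and executing the saved copy. Locally that collapses to the following sketch (the file names are invented):

```shell
# Local equivalent of the bastion one-liner: a subshell plays the role
# of the private server behind the bastion.
printf 'echo ran-from-copy\n' > /tmp/local_script.sh
cat /tmp/local_script.sh | sh -c 'cat > /tmp/remote_copy.sh; sh /tmp/remote_copy.sh'
```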
Beware of pseudo-terminals
Note that the point of key importance here is that `ssh`, like most tools, treats stdout and stdin correctly by default.
However, when you start to see options like "Disable pseudo-terminal allocation" (`-T`) and "Force pseudo-terminal allocation" (`-t`), you may need to do a little trial and error. As a general rule, you don't want to alter tty behavior unless you are trying to fix garbled/binary junk in a terminal emulator (what a human types in).
For example, I tend to use `-At` so that my workstation's ssh-agent gets forwarded, and so that running tmux remotely doesn't barf binary (like so: `ssh -At bastion.internal tmux -L bruno attach`). And for docker too (like so: `sudo docker exec -it jenkins bash`).
However, those two `-t` flags cause some hard-to-track-down data corruption when I try to do something like this:
# copy /etc/init from jenkins to /tmp/init in testjenkins running as a container
ssh -A bastion.internal \
  ssh -A jenkins.internal \
  sudo tar cf - -C /etc init | \
  sudo docker exec -i testjenkins \
  bash -c 'tar xvf - -C /tmp'
# note trailing backslashes to make this one-liner more readable.
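When chasing that kind of corruption, a useful first step is checking which ends of the pipeline are actually terminals; the shell's `[ -t FD ]` test reports whether a file descriptor is a tty. A generic diagnostic, not specific to the hosts above:

```shell
# Report which standard streams are terminals. In a pipeline, or under
# ssh -T, stdout/stdin are not ttys and [ -t N ] returns false.
for fd in 0 1 2; do
  if [ -t "$fd" ]; then
    echo "fd $fd is a terminal"
  else
    echo "fd $fd is not a terminal"
  fi
done
```

If fd 1 reports "not a terminal" yet the data still arrives mangled, a stray `-t` forcing pseudo-terminal allocation somewhere in the chain is the usual suspect.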
Solution 5
I find this to be the easiest approach, after setting up passwordless (key-based) authentication between the servers for the user you are running the command as:
Uncompressed
tar cf - . | ssh servername "cd /path-to-dir && tar xf -"
Compression on the fly
tar czf - . | ssh servername "cd /path-to-dir && tar xzf -"
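A local dry run of the compressed variant; the directories under `/tmp` are invented for the demo, and a subshell plays the remote server:

```shell
# Stream a gzip-compressed tarball straight into an unpack on the
# "other side" without ever writing the archive to disk.
mkdir -p /tmp/src5 /tmp/dst5
echo "hi" > /tmp/src5/a.txt
tar czf - -C /tmp/src5 . | (cd /tmp/dst5 && tar xzf -)
cat /tmp/dst5/a.txt    # -> hi
```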
Wesley
Updated on September 18, 2022

Comments

- Wesley over 1 year: `stdout` on one CentOS server needs to be piped to `stdin` on another CentOS server. Is this possible? **Update:** ScottPack, MikeyB and jofel all have valid answers. I awarded the answer to Scott because, even though my question didn't specify security as a requirement, it's always nice to be safe. However, the other two fellows' suggestions will also work.
  - Admin about 12 years: It's worth noting that the (only) major advantage of the non-ssh approach is throughput speed; if you're on a fast network and security is unnecessary, this may be worth the extra inconvenience of typing two commands into two windows.
  - Admin about 2 years: Short answer to the question: yes, there are many ways to do this on every operating system.
- BBagi about 12 years: That isn't any sort of "fancy internal input/output redirection" - just the plain, boring regular stuff. ssh reads from STDIN, just like any other tool, and passes it to the remote process. :)
- Peter Todd about 12 years: @DanielPittman: But it's just so much more fun to call it "fancy internal" garbage.
- Peter Todd about 12 years: @MikeyB: Good point. Netcat is a clear-text protocol, so be careful with sensitive data. I tend to use netcat for more specific things like network drive acquisitions (a la dd) over a local network and port scanning.
- MikeyB about 12 years: `securely` was not in the requirements :)
- Peter Todd about 12 years: @MikeyB: You people what with your flying and your pants seats!
- Wesley about 12 years: @MikeyB Add that as an answer! It's at least one possibility.
- Wesley about 12 years: @jofel Add that as an answer too! I'm intrigued by `socat` - never heard of it before.
- Wesley about 12 years: Very good to know. This is good if the physical connection is trusted, like perhaps a backup network, or if the connection is already tunneled.
- Samuel Edwin Ward about 12 years: For something like this, you'd often want to use `ssh -C` to compress the data in transfer.
- Samuel Edwin Ward about 12 years: Or if the data is something that's public anyway.
- Splanger about 12 years: +1 netcat is an invaluable tool, especially when you don't have an ssh server running.
- YoloTats.com about 12 years: @WesleyDavid: To your "Update": Just for completeness, I've added to my answer that socat has SSL support, so encryption is possible with socat, too. However, ssh is in most cases the better and easier solution, so I would have chosen ScottPack's answer, too.
- Peter Todd about 12 years: @SamuelEdwinWard: Depends on your infrastructure and where the bottleneck is. A few days ago I was backing up about 14GB of data across a 1Gbps link. Even with two fast machines, enabling compression was 5 times slower. Without compression it was IO bound; with compression it was CPU bound.
- Samuel Edwin Ward about 12 years: Yes, in some situations it will be slower.
- Anthon about 10 years: Using compression on the tar file is a very bad idea if your `ssh` is already configured for compression.
- Tom Hale over 7 years: @Anthon Why so bad, and how would one check if ssh compression is already enabled?
- mbaljeetsingh about 7 years: `tar` has a `-C path` flag that works for both the `c` and `x` commands. You don't have to put a separate `cd` command in there. (But it is good to note that you can run more than a single command.)
- mbaljeetsingh about 7 years: Good point. I should have known better.