Pipe to multiple files in the shell
Solution 1
If you have tee:
./app | tee >(grep A > A.out) >(grep B > B.out) >(grep C > C.out) > /dev/null
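A quick way to try this out, substituting printf for ./app (the sample lines are invented for illustration; process substitution requires bash, ksh93 or zsh):

```shell
# Simulated app output piped through tee into three process substitutions.
printf '%s\n' 'JUNK' 'A 1' 'B 5' 'C 1' |
  tee >(grep A > A.out) >(grep B > B.out) >(grep C > C.out) > /dev/null
# The substituted processes run asynchronously; give them a moment to
# finish writing before reading the files.
sleep 1
cat A.out B.out C.out   # prints: A 1, B 5, C 1 (one per line)
```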
Solution 2
You can use awk:
./app | awk '/A/{ print > "A.out"}; /B/{ print > "B.out"}; /C/{ print > "C.out"}'
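With the same invented sample input standing in for ./app, the awk version behaves identically and needs no bash-specific features:

```shell
# Each /pattern/ block redirects its matching lines to a separate file;
# awk keeps the files open for the duration of the run.
printf '%s\n' 'JUNK' 'A 1' 'B 5' 'C 1' |
  awk '/A/{ print > "A.out" } /B/{ print > "B.out" } /C/{ print > "C.out" }'
cat A.out   # prints: A 1
```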
Solution 3
You could also use your shell's pattern matching abilities:
./app | while read line; do
[[ "$line" =~ A ]] && echo $line >> A.out;
[[ "$line" =~ B ]] && echo $line >> B.out;
[[ "$line" =~ C ]] && echo $line >> C.out;
done
Or even:
./app | while read line; do for foo in A B C; do
[[ "$line" =~ "$foo" ]] && echo $line >> "$foo".out;
done; done
A safer way that can deal with backslashes and lines starting with -:
./app | while IFS= read -r line; do for foo in A B C; do
[[ "$line" =~ "$foo" ]] && printf '%s\n' "$line" >> "$foo".out;
done; done
As @StephaneChazelas points out in the comments, this is not very efficient. The best solution is probably @AurélienOoms'.
Solution 4
If you have multiple cores and you want the processes to be in parallel, you can do:
parallel -j 3 -- './app | grep A > A.out' './app | grep B > B.out' './app | grep C > C.out'
This will spawn three processes on separate cores (note that it also runs ./app three times, once per pattern). If you want some output on the console, or a master file, it has the advantage of keeping the output in order rather than mixing it.
The gnu utility parallel from Ole Tange can be obtained from most repos under the name parallel or moreutils. Source can be obtained from Savannah.gnu.org. Also an introductory instructional video is here.
Addendum
Using the more recent version of parallel (not necessarily the version in your distribution repo), you can use the more elegant construct:
./app | parallel -j3 -k --pipe 'grep {1} >> {1}.log' ::: 'A' 'B' 'C'
This achieves the result of running ./app once with three parallel grep processes in separate cores or threads (as scheduled by parallel itself; the -j3 is optional, but is supplied in this example for instructive purposes).
The newer version of parallel can be obtained by doing:
wget http://ftpmirror.gnu.org/parallel/parallel-20131022.tar.bz2
Then the usual unpack, cd to parallel-{date}, ./configure && make, sudo make install. This installs parallel along with the parallel and parallel_tutorial man pages.
Solution 5
Here's one in Perl:
./app | perl -ne 'BEGIN {open(FDA, ">A.out") and
open(FDB, ">B.out") and
open(FDC, ">C.out") or die("Cannot open files: $!\n")}
print FDA $_ if /A/; print FDB $_ if /B/; print FDC $_ if /C/'
sj755
Updated on September 18, 2022

Comments
-
sj755 almost 2 years
I have an application which will produce a large amount of data which I do not wish to store onto the disk. The application mostly outputs data which I do not wish to use, but a set of useful information that must be split into separate files. For example, given the following output:
JUNK
JUNK
JUNK
JUNK
A 1
JUNK
B 5
C 1
JUNK
I could run the application three times like so:
./app | grep A > A.out
./app | grep B > B.out
./app | grep C > C.out
This would get me what I want, but it would take too long. I also don't want to dump all the outputs to a single file and parse through that.
Is there any way to combine the three operations shown above in such a way that I only need to run the application once and still get three separate output files?
-
evilsoup over 10 years: Awesome, this could also be rendered as:
./app | tee >(grep A > A.out) >(grep B > B.out) | grep C > C.out
-
acelent over 10 years: The question's title is "pipe to multiple processes", this answer is about "piping" (dispatching by regex) to multiple files. Since this answer was accepted, the question's title should be changed accordingly.
-
acelent over 10 years: This answer is currently the only accurate one, given the question's original title "pipe to multiple processes".
-
sj755 over 10 years: @PauloMadeira You are right. What do you think would be a better title?
-
acelent over 10 years: I've suggested a very small edit, "Pipe to multiple files in the shell"; it's pending revision, check it out. I was expecting to remove the comment if it was accepted.
-
ruakh over 10 years: +1. This is the most generally-applicable answer, since it doesn't depend on the fact that the specific filtering command was grep.
slm over 10 years: @PauloMadeira - I've changed the title. Didn't see your edit, but you're correct, the use of "processes" in the title was incorrect if this is the accepted answer.
-
Stéphane Chazelas over 10 years: That assumes the input doesn't contain backslashes or blanks or wildcard characters, or lines that start with -n, -e... It's also going to be terribly inefficient, as it means several system calls per line (one read(2) per character, and the file being opened, written to and closed for each line...). Generally, using while read loops to process text in shells is bad practice.
terdon over 10 years: @StephaneChazelas I edited my answer. It should work with backslashes and -n etc. now. As far as I can tell both versions work OK with blanks though, am I wrong?
Stéphane Chazelas over 10 years: No, the first argument to printf is the format. There's no reason for leaving your variables unquoted in there.
clerksx over 10 years: This will also break in bash (and other shells that use cstrings in a similar way) if there are nulls in the input.