How can I curl the output of another command


Use xargs.

xargs utility [argument ...]

The xargs utility reads space, tab, newline and end-of-file delimited strings from the standard input and executes utility with the strings as arguments.

There are more parameters and options than in this shortened form, of course.
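To see the mechanics in isolation, here is a minimal sketch that uses echo as the utility, so you can watch xargs turn stdin tokens into arguments (the input string is just an arbitrary example):

```shell
# xargs collects the whitespace-delimited tokens from stdin and
# appends them as arguments to the given utility (echo here):
echo "one two three" | xargs echo args:
# prints: args: one two three
```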


A general example using curl:

$ echo "http://www.google.com" | xargs curl
<HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
<TITLE>302 Moved</TITLE></HEAD><BODY>
<H1>302 Moved</H1>
The document has moved
<A HREF="http://www.google.de/">here</A>.
</BODY></HTML>

In your specific case, it'd look similar to the following:

./jspider.sh http://www.mypage.com | grep 'resource' | awk '{print $4}' | xargs curl | grep myString
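If the pipeline emits several URLs, you may want one curl invocation per URL rather than all URLs on a single command line; `xargs -n 1` does that. A sketch, with echo standing in for curl so nothing is actually fetched, and example.com URLs as placeholders:

```shell
# -n 1 runs the command once per input token; replace the leading
# "echo" with nothing to perform the real requests (-s silences
# curl's progress meter).
printf '%s\n' "http://example.com/a" "http://example.com/b" |
  xargs -n 1 echo curl -s
# prints:
#   curl -s http://example.com/a
#   curl -s http://example.com/b
```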
Author: Admin

Updated on September 18, 2022

Comments

  • Admin
    Admin over 1 year

    I want to pass curl the output from awk

    ./jspider.sh http://www.mypage.com | grep 'resource' | awk '{print $4}' | curl OUTPUT_FROM_AWK | grep myString
    

    How can I achieve this?

  • Robert
    Robert over 12 years
    Great! However, in my case this doesn't work. Piping to "xargs echo" doesn't work either. I guess the output of jspider is too fast or something.
  • HikeMike
    HikeMike over 12 years
    In that case, consider adding to your question or creating a new question dealing with your specific problem. I don't have this specific issue, e.g. curl -s "http://superuser.com" | grep -E 'href="http://.*stackexchange\.com' | sed 's|^.*<a href="http://\([^"]*\)">.*$|http://\1|g' | grep -v "<" | xargs curl -s | grep "<title>" works fine for me. (Yeah I know the code's extremely hacky).
  • Robert
    Robert over 12 years
    Oh my... I had to decipher that sed part in order to understand what you are doing, and it still isn't working with jspider. I will start using perl now.
  • HikeMike
    HikeMike over 12 years
    @StephanKristyn I extract URLs from hyperlinks, and since it doesn't work well enough, using grep -v I then remove all remaining lines with HTML tag brackets. Just a simple example that shows the approach works in general.