Using awk, sed, or grep to parse URLs from webpage source

Solution 1

If you are only parsing something like <a> tags, you could just match the href attribute like this:

$ grep -oE 'href="([^"#]+)"' file.html | cut -d'"' -f2 | sort -u

That will ignore the anchor and also guarantee that you have uniques. This does assume that the page has well-formed (X)HTML, but you could pass it through Tidy first.
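
If the markup is not well formed, a minimal sketch of that cleanup step, assuming HTML Tidy is installed as the tidy command (exact flags vary by version):

$ tidy -q -asxhtml file.html 2>/dev/null | grep -oE 'href="([^"#]+)"' | cut -d'"' -f2 | sort -u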

Solution 2

$ lynx -dump http://www.ibm.com

And look for the string 'References' in the output. Post-process with sed if you need to.
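
For instance, a sketch of that post-processing with sed and grep (this assumes the dump labels its link list with a 'References' line, as lynx normally does; newer lynx builds also offer a -listonly option):

$ lynx -dump http://www.ibm.com | sed -n '/^References$/,$p' | grep -oE 'https?://[^ ]+' | sort -u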

Using a different tool sometimes makes the job simpler. Once in a while, a different tool makes the job dead simple. This is one of those times.

Author: Astron

Updated on June 17, 2022

Comments

  • Astron, almost 2 years ago

    I am trying to parse the source of a downloaded web page in order to obtain the link listing. A one-liner would work fine. Here's what I've tried thus far:

    This first attempt seems to leave out parts of the URL in some of the page names:

    $ cat file.html | grep -o -E '\b(([\w-]+://?|domain[.]org)[^\s()<>]+(?:\([\w\d]+\)|([^[:punct:]\s]|/)))' | sort -ut/ -k3
    

    This gets all of the URLs, but I do not want to include links that are (or contain) anchors. I also want to be able to restrict matches to domain.org/folder/:

    $ awk 'BEGIN{
    RS="</a>"      # split the input into records at each closing anchor tag
    IGNORECASE=1   # gawk-specific: also match HREF, Href, etc.
    }
    {
      for(o=1;o<=NF;o++){
        if ( $o ~ /href/){
          gsub(/.*href=\042/,"",$o)  # drop everything up to the opening quote (\042 is ")
          gsub(/\042.*/,"",$o)       # drop the closing quote and anything after it
          print $o
        }
      }
    }' file.html
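
    For reference, one way to bolt both of those filters onto the script above could look like the sketch below (an untested adaptation; domain.org/folder/ stands in for whatever path prefix you want to keep):

    $ awk 'BEGIN{ RS="</a>"; IGNORECASE=1 }
    {
      for(o=1;o<=NF;o++){
        if ( $o ~ /href/){
          gsub(/.*href=\042/,"",$o)
          gsub(/\042.*/,"",$o)
          # keep the link only if it has no #anchor and sits under the wanted path
          if ( $o !~ /#/ && $o ~ /domain\.org\/folder\// ) print $o
        }
      }
    }' file.html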