Getting a directory listing over HTTP

Solution 1

I just figured out a way to do it:

$ wget --spider -r --no-parent http://some.served.dir.ca/

It's quite verbose, so you need to pipe through grep a couple of times depending on what you're after, but the information is all there. It looks like it prints to stderr, so append 2>&1 to let grep at it. I grepped for "\.tar\.gz" to find all of the tarballs the site had to offer.

Note that wget still writes temporary files in the working directory, and although it removes the files afterwards it leaves the directory tree behind. If this is a problem, run it from a temporary directory:

$ (cd /tmp && wget --spider -r --no-parent http://some.served.dir.ca/)
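A sketch of the filtering stage described above. Since wget logs to stderr, the `2>&1` redirection is needed before piping; the live command is shown as a comment (the host is the placeholder from the answer), and the grep stage is demonstrated on a canned log line:

```shell
# Live form (placeholder host from the answer, shown as a comment):
#   wget --spider -r --no-parent http://some.served.dir.ca/ 2>&1 \
#     | grep -oE 'https?://[^[:space:]]+\.tar\.gz' | sort -u
#
# The same grep stage applied to a sample log line:
printf 'Spidering http://some.served.dir.ca/pkg-1.0.tar.gz ...\n' \
  | grep -oE 'https?://[^[:space:]]+\.tar\.gz'
```

The `sort -u` at the end deduplicates URLs that the recursive spider visits more than once.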

Solution 2

What you are asking for is best served using FTP, not HTTP.

HTTP has no concept of directory listings, FTP does.

Most HTTP servers do not allow access to directory listings, and those that do offer it as a feature of the server, not of the HTTP protocol. Those servers choose to generate and send an HTML page for human consumption, not machine consumption. You have no control over that, and would have no choice but to parse the HTML.

FTP is designed for machine consumption, more so with the introduction of the MLST and MLSD commands that replace the ambiguous LIST command.
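To illustrate the difference, an MLSD reply uses the fixed fact=value format of RFC 3659, which parses trivially compared to HTML. The listing below is made up for the sketch; a real one would come from an FTP server that supports MLSD:

```shell
# A made-up MLSD-style reply (RFC 3659 fact=value lines, one per entry):
mlsd_reply='type=file;size=5120;modify=20220709120000; git-2.36.tar.gz
type=dir;modify=20220101000000; old-releases'

# Print the names of plain files only -- no HTML parsing required.
# Facts are separated from the name by "; ", so awk splits on that.
printf '%s\n' "$mlsd_reply" | awk -F'; ' '/type=file;/ { print $2 }'
```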

Solution 3

The following is not recursive, but it worked for me:

$ curl -s https://www.kernel.org/pub/software/scm/git/

The output is HTML and is written to stdout. Unlike with wget, there is nothing written to disk.

-s (--silent) is relevant when piping the output, especially within a script that must not be noisy.

Whenever possible, prefer https over ftp or http.

Solution 4

If it's being served over HTTP, then there's no way to get a simple directory listing. The listing you see when you browse there, which is the one wget is retrieving, is generated by the web server as an HTML page. All you can do is parse that page and extract the information.
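Extracting the links from such a generated page is usually enough. A minimal sketch: the page content is canned here, but in practice it would come from something like `curl -s https://www.kernel.org/pub/software/scm/git/` as in Solution 3:

```shell
# Canned stand-in for the HTML index page a server generates; in
# practice: page=$(curl -s https://www.kernel.org/pub/software/scm/git/)
page='<a href="git-2.36.0.tar.gz">git-2.36.0.tar.gz</a>
<a href="../">Parent directory</a>'

# Pull out the href targets; crude, but fine for simple index pages.
printf '%s\n' "$page" \
  | grep -oE 'href="[^"]+"' \
  | sed -e 's/^href="//' -e 's/"$//'
```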

Author: ajwood

Updated on July 09, 2022

Comments

  • ajwood
    ajwood almost 2 years

    There is a directory that is being served over the net which I'm interested in monitoring. Its contents are various versions of software that I'm using, and I'd like to write a script that I could run which checks what's there and downloads anything that is newer than what I've already got.

    Is there a way, say with wget or something, to get a directory listing? I've tried using wget on the directory, which gives me HTML. To avoid having to parse the HTML document, is there a way of retrieving a simple listing like ls would give?

  • ajwood
    ajwood over 13 years
    The reason the page is served in the first place is to provide users with a source for the software. If it's intended to be viewed in a browser, it only seems reasonable that one could expect to access it from a script.
  • ajwood
    ajwood over 13 years
    If there was an index.html, or a similar page, it would make sense to disallow directory listing for security reasons. It seems odd to me that if a directory is being served raw (well, having html generated to make it pretty) it should be fully accessible for something as harmless as a directory listing.
  • Julian Reschke
    Julian Reschke almost 10 years
    Actually, HTTP does have this concept, it's called WebDAV, and it's an optional extension. See RFC 4918.
  • Remy Lebeau
    Remy Lebeau almost 10 years
    WebDAV runs on top of HTTP but is not part of HTTP itself. Just like HTTP runs on top of TCP but is not part of TCP itself. You cannot use WebDAV to talk to any arbitrary HTTP server. It has to be implemented and enabled by each server. Like you said, it is optional.
  • Julian Reschke
    Julian Reschke almost 10 years
    It's optional, but the remainder of your comparison is misleading. TCP and HTTP are different networking layers, while PROPFIND and GET are in exactly the same layer.
  • ajwood
    ajwood over 7 years
    @A-B-B Are you sure about that? The --spider option makes this not actually download anything
  • Asclepius
    Asclepius over 7 years
    I tried wget --spider -r --no-parent https://www.kernel.org/pub/software/scm/git/ and it started to create a nested directory structure on disk – this won't work. I don't want anything written to disk, even if it's a single directory.
  • ajwood
    ajwood over 7 years
    Oh yeah, it seems wget needs to write intermediate files. It cleans up the files but leaves the directory tree. Could you just cd into /tmp while it runs? (cd /tmp && wget --spider -r --no-parent https://www.kernel.org/pub/software/scm/git/)
  • Mark N Hopgood
    Mark N Hopgood over 5 years
    I added -k as an option to skip certificate checking on https for my application
  • Michael Dimmitt
    Michael Dimmitt over 4 years
    ( (cd /tmp && wget --spider -r --no-parent kernel.org/pub/software/scm/git) 2>&1 | grep wall ) 🤔, grep works when 2>&1 is appended as explained in the answer. - It was not clear to me at first glance.
  • Aaron
    Aaron about 4 years
    this command is doing what I want, but saving the output is driving me nuts. When I try to redirect STDOUT everything seems to go into oblivion. Looking for insights, I'm probably missing something obvious.
  • ajwood
    ajwood about 4 years
    @liang I don't know the details of everything it prints, but the interesting stuff for this purpose is on STDERR. Did you try wget --spider -r --no-parent http://some.served.dir.ca/ 2>&1 | next_command ...
  • Aaron
    Aaron about 4 years
    @ajwood yes, I tried that. I think my problem is executing that command in a subshell. I'm still playing around with it, and making some progress. I want to do post-processing on the grep results. For some clarity, I'm trying to extract the list of categories from Internet Archive and download the XML metadata.
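The 2>&1 point raised in the comments above is easy to verify without touching the network. In this sketch, emit() is a stand-in for wget, logging to stderr the way wget does:

```shell
# emit() stands in for wget: it writes its "log" to stderr, not stdout.
emit() { echo 'found http://host/pkg-1.0.tar.gz' >&2; }

# Without redirection, the pipe carries stdout only (empty here):
plain=$( { emit | cat; } 2>/dev/null )

# With 2>&1, stderr is merged into the pipe and grep can see it:
merged=$(emit 2>&1 | grep -o 'http://[^ ]*\.tar\.gz')

echo "plain=[$plain] merged=[$merged]"
```

Redirecting STDOUT alone does nothing here, which is why the output seems to "go into oblivion" unless stderr is merged first.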