Quick ls command

30,965

Solution 1

ls -U

will do the ls without sorting.

Another source of slowness is --color. On some Linux machines there is a convenience alias that adds --color=auto to the ls call, making it look up file attributes for each entry found (slow) in order to color the display. This can be avoided with ls -U --color=never, or with \ls -U, which bypasses the alias.
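A minimal sketch of the alias-bypassing variants (GNU ls assumed; the temporary directory and file names are just for illustration):

```shell
# Sketch: skip sorting and skip coloring.
# \ls (or "command ls") bypasses any alias; --color=never additionally
# guards against the per-entry stat() calls that coloring requires.
dir=$(mktemp -d)
touch "$dir/a" "$dir/b" "$dir/c"

# Entries stream out in directory order, uncolored.
\ls -U --color=never "$dir"

rm -rf "$dir"
```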

Solution 2

I have a directory with 4 million files in it and the only way I got ls to spit out files immediately without a lot of churning first was

ls -1U

Solution 3

Try using:

find . -maxdepth 1 -type f

This will list only the files in the directory. Leave out the -type f argument if you want to list directories as well. (With GNU find, -maxdepth must come before -type, or find prints a warning.)
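A self-contained sketch of this approach (directory and file names are illustrative), including the quick-count variant with wc that several answers rely on:

```shell
# Sketch: list only top-level regular files, then count them.
dir=$(mktemp -d)
mkdir "$dir/sub"
touch "$dir/f1" "$dir/f2" "$dir/sub/nested"

# -maxdepth 1 keeps find out of subdirectories; -type f excludes "sub" itself.
find "$dir" -maxdepth 1 -type f

# A fast file count, streaming as entries are read:
find "$dir" -maxdepth 1 -type f | wc -l

rm -rf "$dir"
```

Unlike ls, find starts printing entries as it reads them, so output appears immediately even in huge directories.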

Solution 4

This question seemed interesting, so I went through the multiple answers that were posted. To gauge their efficiency, I ran each of them against a directory of 2 million files, with the results below.

$ time tar cvf /dev/null . &> /tmp/file-count

real    37m16.553s
user    0m11.525s
sys     0m41.291s

------------------------------------------------------

$ time echo ./* &> /tmp/file-count

real    0m50.808s
user    0m49.291s
sys     0m1.404s

------------------------------------------------------

$ time ls &> /tmp/file-count

real    0m42.167s
user    0m40.323s
sys     0m1.648s

------------------------------------------------------

$ time find . &> /tmp/file-count

real    0m2.738s
user    0m1.044s
sys     0m1.684s

------------------------------------------------------

$ time ls -U &> /tmp/file-count

real    0m2.494s
user    0m0.848s
sys     0m1.452s


------------------------------------------------------

$ time ls -f &> /tmp/file-count

real    0m2.313s
user    0m0.856s
sys     0m1.448s

------------------------------------------------------

To summarize the results

  1. ls -f ran a bit faster than ls -U. On GNU ls, -f also disables --color (and implies -a), which may account for the small improvement.
  2. find came in third, at 2.738 seconds.
  3. Running plain ls took 42.16 seconds; on my system ls is an alias for ls --color=auto.
  4. Shell expansion with echo ./* ran for 50.80 seconds.
  5. The tar-based solution took about 37 minutes.

All tests were run separately, with the system otherwise idle.

One important thing to note here is that the file lists were not printed to the terminal; rather, they were redirected to a file, and the file count was calculated later with the wc command. The commands ran far more slowly when their output was printed on the screen.
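The measurement setup described above can be sketched like this (a small throwaway directory stands in for the 2-million-file one):

```shell
# Sketch of the setup: redirect the listing to a file so nothing hits
# the terminal, then count entries afterwards with wc -l.
dir=$(mktemp -d)
for i in 1 2 3 4 5; do touch "$dir/file$i"; done

out=$(mktemp)
ls -U "$dir" > "$out"    # when stdout is not a tty, ls prints one name per line
wc -l < "$out"           # the file count

rm -rf "$dir" "$out"
```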

Any ideas why this happens?

Solution 5

This would be the fastest option AFAIK: ls -1 -f.

  • -1 (one entry per line, no columns)
  • -f (no sorting; on GNU and BSD ls it also implies -a)
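A minimal sketch of this option (GNU or BSD ls; note that because -f implies -a, dotfiles and the "." and ".." entries show up too):

```shell
# Sketch: one unsorted name per line.
# -f implies -a, so ".", "..", and dotfiles are included in the output.
dir=$(mktemp -d)
touch "$dir/x" "$dir/.hidden"

ls -1f "$dir"

rm -rf "$dir"
```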
Author by Mark Witczak

I haven't done any serious programming in 12 years, but I like to stay on top of the latest technologies.

Updated on July 09, 2022

Comments

  • Mark Witczak
    Mark Witczak almost 2 years

    I've got to get a directory listing that contains about 2 million files, but when I do an ls command on it nothing comes back. I've waited 3 hours. I've tried ls | tee directory.txt, but that seems to hang forever.

I assume the server is doing a lot of inode sorting. Is there any way to speed up the ls command to just get a directory listing of filenames? I don't care about size, dates, permissions, or the like at this time.

  • Ben Moss
    Ben Moss almost 16 years
    With 2 million files, that is likely to return only a "command line too long" error.
  • Sousou
    Sousou over 13 years
    Do you know if ls -U|sort is faster than ls?
  • Paul Tomblin
    Paul Tomblin over 13 years
I don't know. I doubt it, because sort can't complete until it's seen all the records, whether it's done in a separate program or in ls. But the only way to find out is to test it.
  • Scott - Слава Україні
    Scott - Слава Україні over 11 years
    Note: on some systems, ls -f is equivalent to ls -aU; i.e., include all files (even those whose names begin with ‘.’) and don’t sort. And on some systems, -f is the option to suppress sorting, and -U does something else (or nothing).
  • dschu
    dschu over 7 years
    Saved my day! Thanks!
  • Ruslan
    Ruslan about 7 years
    True, coloring is the usual culprit for me: when coloring, ls tries to determine type and mode of each directory entry, resulting in lots of stat(2) calls, thus in loads of disk activity.
  • mwfearnley
    mwfearnley over 6 years
    This would find files in the current directory, and also in any subdirectories.
  • rustyx
    rustyx almost 6 years
    Does not work on BSD. On BSD -U sorts by file creation time.
  • masterxilo
    masterxilo over 3 years
    Absolutely crucial for a huge folder on a network-mounted drive (such as Google Drive on google-drive-ocamlfuse)
  • TiLogic
    TiLogic about 3 years
    This works for both macOS (BSD) and Linux
  • stu
    stu almost 3 years
The terminal is slow: it has to scroll and do formatting. File writes go to a block device, and in reality they go to the page cache first, so you're really just writing to memory, which is quicker than a terminal.
  • Ben Farmer
    Ben Farmer over 2 years
ls -1f seems a lot better than ls -1U for me. They are both similar in output speed, but ls -1U seems uninterruptible.
  • stu
    stu over 2 years
Uninterruptible? It's writing output to a terminal; any attempt to cancel with Ctrl-C etc. would have more to do with your terminal than with ls.