How to find the depth of a directory tree


Solution 1

One way to do it, assuming GNU find:

find . -type d -printf '%d\n' | sort -rn | head -1

This is not particularly efficient, but it's certainly much better than trying different -maxdepths in turn.
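As a quick sanity check, here is a sketch on a small throwaway tree (the demo/ directory name is made up for illustration): the deepest directory sits three levels below the starting point, so the pipeline prints 3.

```shell
# Build a hypothetical test tree: demo/a/b/c is 3 levels deep.
mkdir -p demo/a/b/c demo/x

# -printf '%d' (GNU find) emits each directory's depth relative to the
# starting point; sorting numerically in reverse and taking the first
# line yields the maximum depth.
(cd demo && find . -type d -printf '%d\n' | sort -rn | head -1)

# Clean up the throwaway tree.
rm -rf demo
```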

Solution 2

Try this. For example, to find the max depth of the tree under /, start with

find / -type d

which lists every directory under /, irrespective of depth. Then awk the result with / as the delimiter to count the path components of each line; the largest count minus 1 is the max depth of the tree from /. So the command would be:

find / -type d | awk -F"/" 'NF > max {max = NF} END {print max - 1}'
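The same field-counting idea works on a relative tree too. This sketch (with a made-up demo/ directory) shows why the subtraction matters: the path ./a/b/c splits on "/" into the four fields ".", "a", "b" and "c", so NF is 4 while the depth below the starting point is 3.

```shell
# Hypothetical demo tree, 3 levels deep.
mkdir -p demo/a/b/c

# Split each path on "/" and track the largest field count;
# NF - 1 is the depth relative to the starting directory.
(cd demo && find . -type d | awk -F'/' 'NF > max {max = NF} END {print max - 1}')

# Clean up the throwaway tree.
rm -rf demo
```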
Author: ts01
Updated on September 18, 2022

Comments

  • ts01
    ts01 almost 2 years

Is there a way to find the maximum depth of a given directory tree? I was thinking about using find with an incrementing -maxdepth and comparing the number of directories found, but maybe there is a simpler way?

  • Gilles 'SO- stop being evil'
    Gilles 'SO- stop being evil' about 7 years
    updatedb runs find, so this can't be faster than running find just to extract the desired information.
  • styrofoam fly
    styrofoam fly about 7 years
@Gilles it does not run find. From man updatedb: "If the database already exists, its data is reused to avoid rereading directories that have not changed." We are counting on reusing the database. Moreover, it can be seen as preprocessing which allows you to answer many queries quickly. On my system with an HDD, updatedb took 2 seconds to finish, so it's really fast compared to the other solutions.
  • Gilles 'SO- stop being evil'
    Gilles 'SO- stop being evil' about 7 years
    Using updatedb would be useful to make multiple queries, which is not mentioned in the question. Even so, updatedb records all files, which is a waste: for this, only directories are useful, so running and storing the output of find -type d would be faster.
  • styrofoam fly
    styrofoam fly about 7 years
@Gilles from man updatedb: "updatedb is usually run daily by cron(8) to update the default database." Unless the OP has created and deleted many files in his filesystem, it's much faster to update and query the database than to run find again. And based on the ext4 filesystem documentation, it doesn't really matter whether you apply the -d flag or not: you must read the whole directory table anyway to check the filetype flag of each file, and when reading that flag the OS will load the sequential bytes (with the filename) into memory.
  • Gilles 'SO- stop being evil'
    Gilles 'SO- stop being evil' about 7 years
If any directory has been created, moved or deleted since the last updatedb run, updatedb has to run again. It is possible to optimize find -type d to not read the whole directory table, and in particular not to read the contents of a leaf directory: the hard link count on a directory indicates how many subdirectories it has. GNU find does not implement this optimization, though.
  • styrofoam fly
    styrofoam fly about 7 years
It's necessary to rerun updatedb only if you have altered the structure you are now querying. In any case, updatedb tracks changes and may return much faster than find. My solution is a suggestion to consider, not a general rule that has to be applied every time. One must know the pros and cons of each solution and apply whichever suits best.