How to find the depth of a directory tree
Solution 1
One way to do it, assuming GNU find:

find . -type d -printf '%d\n' | sort -rn | head -1

This is not particularly efficient, but it's certainly much better than trying different -maxdepth values in turn.
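To sanity-check the command, here is a minimal sketch (the temporary tree and its layout are made up for illustration; it assumes GNU find and coreutils):

```shell
# Build a throwaway tree three directories deep, then ask GNU find
# for the largest depth it reports. %d is the depth relative to the
# starting point, so the starting directory itself is depth 0.
tmp=$(mktemp -d)
mkdir -p "$tmp/a/b/c"
depth=$(find "$tmp" -type d -printf '%d\n' | sort -rn | head -1)
echo "$depth"    # prints 3: a/b/c is three levels below $tmp
rm -rf "$tmp"
```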
Solution 2
Try this. For example, to find the max depth of the tree under /:

find / -type d

will give every directory under /, irrespective of depth. So awk the result with / as the delimiter to count the fields; the count minus 1 gives the max depth of the tree from /. The command would be:
find / -type d | awk -F"/" 'NF > max {max = NF} END {print max}'
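As a quick illustration of the field-counting logic (the sample tree is hypothetical; running find . from inside the tree keeps the field count independent of where the tree lives):

```shell
# "." has 1 "/"-separated field, "./x" has 2 and "./x/y" has 3,
# so the maximum NF is 3, and max NF - 1 = 2 is the tree depth.
tmp=$(mktemp -d)
mkdir -p "$tmp/x/y"
max=$(cd "$tmp" && find . -type d | awk -F"/" 'NF > max {max = NF} END {print max}')
echo "depth: $((max - 1))"    # prints "depth: 2"
rm -rf "$tmp"
```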
Updated on September 18, 2022

Comments
- ts01 almost 2 years: Is there a way to find the maximum depth of a given directory tree? I was thinking about using find with an incrementing -maxdepth and comparing the number of directories found, but maybe there is a simpler way?
- Gilles 'SO- stop being evil' about 7 years: updatedb runs find, so this can't be faster than running find just to extract the desired information.
- styrofoam fly about 7 years: @Gilles it does not run find. From man updatedb: "If the database already exists, its data is reused to avoid rereading directories that have not changed." We are counting on reusing the database. Moreover, it can be seen as preprocessing which lets you answer many queries quickly. On my system with an HDD drive, updatedb took 2 seconds to finish, so it's really fast compared to other solutions.
- Gilles 'SO- stop being evil' about 7 years: Using updatedb would be useful for making multiple queries, which is not mentioned in the question. Even so, updatedb records all files, which is a waste: for this, only directories are useful, so running and storing the output of find -type d would be faster.
- styrofoam fly about 7 years: @Gilles from man updatedb: "updatedb is usually run daily by cron(8) to update the default database." Unless the OP has created and deleted many files in his filesystem, it's much faster to update and query the database than to run find again. And based on the ext4 filesystem documentation, it doesn't really matter whether you apply the -d flag or not: you must read the whole directory table anyway to check the filetype flag of each file, and when reading this flag the OS will load the sequential bytes (with the filename) into memory.
- Gilles 'SO- stop being evil' about 7 years: Unless no directory has been created, moved or deleted since the last updatedb run, it's necessary to run updatedb again. It is possible to optimize find -type d to not read the whole directory table, and in particular to not read the contents of a leaf directory: the hard link count on a directory indicates how many subdirectories it has. GNU find does not implement this optimization, though.
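The hard-link observation above can be seen with GNU stat (a sketch; the arithmetic holds on traditional Unix filesystems such as ext4, where each subdirectory's ".." entry adds a link to its parent, but not on e.g. btrfs, where directories report a link count of 1):

```shell
# On traditional filesystems a directory has 2 links ("." and its
# entry in the parent) plus one per subdirectory, so links - 2 is
# the number of subdirectories -- here 2 (s1 and s2).
tmp=$(mktemp -d)
mkdir "$tmp/s1" "$tmp/s2"
links=$(stat -c %h "$tmp")    # %h = hard link count (GNU stat)
echo $((links - 2))
rm -rf "$tmp"
```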
- styrofoam fly about 7 years: It's necessary to rerun updatedb only if you have altered the structure you are now querying. Anyway, updatedb tracks changes and may return much faster than find. My solution is a suggestion to consider, not a general rule that has to be applied every time. One must know the pros and cons of each solution and apply whichever suits best.