Find folders of size less than x and delete them
The simple approach is to find all directories, get their size and delete them if they are under a given threshold (du reports sizes in KiB by default, so 102400 corresponds to 100M; cut -f1 strips the path that du prints after the size, and rm needs -r to remove directories):
find . -maxdepth 1 -type d |
while read dir; do [ "$(du -s "$dir" | cut -f1)" -le 102400 ] && rm -rf "$dir"; done
However, that will fail on directory names containing newlines or other strange characters. A safer syntax is:
find . -maxdepth 1 -type d -print0 | while IFS= read -r -d '' dir; do
    [ "$(du -s "$dir" | cut -f1)" -le 102400 ] && rm -rf "$dir"
done
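Before running either destructive version, it is worth doing a dry run. The sketch below (my addition, not part of the original answer) prints the candidate directories instead of deleting them, using the same 102400 KiB (100M) threshold; -mindepth 1 is added so that . itself is never considered:

```shell
# Preview which directories fall under the threshold, without deleting.
find . -maxdepth 1 -mindepth 1 -type d -print0 |
while IFS= read -r -d '' dir; do
    size=$(du -s "$dir" | cut -f1)    # size in KiB
    [ "$size" -le 102400 ] && printf 'would remove: %s (%s KiB)\n' "$dir" "$size"
done
```

Once the list looks right, swap the printf back for rm -rf.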
This, however, is a simplistic approach. Consider the following scenario:
$ tree -h
.
`-- [4.0K]  dir1
    |-- [4.0K]  dir2
    |   `-- [ 80M]  file1
    `-- [4.0K]  dir3
        `-- [ 80M]  file2

3 directories, 2 files
Here, we have two subdirectories under dir1, each containing an 80M file. The command above (assuming the -maxdepth 1 restriction is dropped so that find recurses) will first find dir1, whose size is >100M, so it will not be deleted. It will then find dir1/dir2 and dir1/dir3 and delete both of them since they are <100M. The final result will be an empty dir1 whose size, of course, will be <100M since it is empty.
So, this solution will work fine if you only have a single level of subdirectories. If you have more complex file structures, you need to think about how you want to deal with them. One approach would be to use -depth, which ensures that each directory is listed before its parent:
find . -depth -type d -print0 | while IFS= read -r -d '' dir; do
    [ "$(du -s "$dir" | cut -f1)" -le 102400 ] && rm -rf "$dir"
done
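An alternative sketch (my addition, not from the answer above) computes every directory's size in one pass with du and filters with awk. Because all sizes are measured before anything is deleted, a parent's size here still includes its children, so this lists only directories that are under the threshold in the original tree:

```shell
# List directories at or below 102400 KiB (100M).
# du prints "size<TAB>path", so awk splits on tabs and filters on field 1.
find . -mindepth 1 -depth -type d -exec du -ks {} + |
awk -F'\t' '$1 <= 102400 { print $2 }'
```

Note that the output is newline-delimited, so directory names containing newlines would still break any deletion step piped onto the end of this.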
This way, dir1 will be processed after dir2 and dir3, so it will be empty, fall below the threshold, and be deleted as well. Whether or not you want this will depend on what exactly you are trying to do.
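To see the bottom-up behaviour concretely, here is a scaled-down, self-contained reproduction of the scenario above (my sketch: KiB instead of MiB, a 100 KiB threshold standing in for 100M, and -mindepth 1 added so that . itself is never a deletion candidate):

```shell
# Build the example tree: an 80 KiB file under each of dir1/dir2 and dir1/dir3.
tmp=$(mktemp -d)
cd "$tmp"
mkdir -p dir1/dir2 dir1/dir3
dd if=/dev/zero of=dir1/dir2/file1 bs=1024 count=80 2>/dev/null
dd if=/dev/zero of=dir1/dir3/file2 bs=1024 count=80 2>/dev/null

# Depth-first deletion with a 100 KiB threshold: dir2 and dir3 go first,
# which empties dir1, so dir1 then falls under the threshold and goes too.
find . -depth -mindepth 1 -type d -print0 | while IFS= read -r -d '' dir; do
    [ "$(du -sk "$dir" | cut -f1)" -le 100 ] && rm -rf "$dir"
done

ls "$tmp"    # prints nothing: all three directories were removed
```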
Updated on September 18, 2022

Comments
-
Brettetete over 1 year
I want to find all folders (within a folder) which are less than 100MB in size and delete them. I actually don't want to use a bash script, but there is probably some neat one-line loop for this. Unfortunately my shell knowledge isn't that good.
What I've tried:
du -sh * | grep -E "^[0-9]{1,2}M" | xargs -0 rm
This won't work, since the output of
du -sh * | grep -E ".."
seems to be one single string. What I also have tried is
find . -maxdepth 1 -type d -size 100M [-delete]
But I guess the -size flag isn't what I'm looking for.
-
Edward Torvalds over 9 years
What's wrong with the -size flag?
-
Brettetete over 9 years
@edwardtorvalds I've tried -size 100M and it did not show anything
-
-
steeldriver over 9 years
I think the variant I'd use would be
while read -r -d '' size dir; do [[ $size -lt 100 ]] && echo rm -rf "$dir"; done < <(find -depth -type d -exec du -0sm {} \;)
-
Rmano about 9 years
Yes. The main problem is that the phrase "folder with size lesser than 100Mbyte" is not well defined, really.
-
terdon about 9 years
@Rmano exactly. Let alone sparse files and the like. This is a fairly complex issue actually.