How to delete the oldest directory in a given directory?
Solution 1
Parsing the output of ls is not reliable. Instead, use find to locate the directories and sort to order them by timestamp. For example:
IFS= read -r -d $'\0' line < <(find . -maxdepth 1 -type d -printf '%T@ %p\0' \
2>/dev/null | sort -z -n)
file="${line#* }"
# do something with $file here
What is all this doing?
First, the find command locates all directories in the current directory (.), but not in subdirectories of the current directory (-maxdepth 1), then prints out:
- A timestamp
- A space
- The relative path to the file
- A NULL character
The timestamp is important. The %T@ format specifier for -printf breaks down into T, which indicates the last modification time (mtime) of the file, and @, which indicates seconds since 1970, including fractional seconds.
The space is merely an arbitrary delimiter. The path to the file lets us refer to it later, and the NULL character is a terminator: it is an illegal character in a file name, so it tells us unambiguously where the path ends.
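A minimal sketch of what the find command emits (GNU find assumed; the directory names here are made up, and tr is used only to make the NUL separators visible as newlines):

```shell
tmp=$(mktemp -d)
mkdir "$tmp/a" "$tmp/b"
# Each record is: <seconds-since-1970.fraction><space><path><NUL>
find "$tmp" -maxdepth 1 -type d -printf '%T@ %p\0' | tr '\0' '\n'
```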
I have included 2>/dev/null so that files which the user does not have permission to access are excluded, while the error messages about them being excluded are suppressed.
The result of the find command is a list of all directories in the current directory. The list is piped to sort, which is instructed to:
- -z: treat NULL as the record terminator instead of newline
- -n: sort numerically
Since seconds-since-1970 always counts up, we want the file whose timestamp is the smallest number. The first record output by sort will be the one with the smallest timestamp. All that remains is to extract the file name.
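The sort step can be sketched with made-up timestamps (GNU sort/head assumed):

```shell
# NUL-terminated records sorted numerically: the smallest (oldest)
# timestamp comes out first; tr only makes the NULs visible.
printf '%s\0' '1700000000.5 ./newer' '1600000000.1 ./older' \
    | sort -z -n | tr '\0' '\n'
# 1600000000.1 ./older
# 1700000000.5 ./newer
```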
The result of the find and sort pipeline is passed via process substitution to read, where it is read as if it were a file on stdin. In the context of read we set the IFS variable to nothing, which means that whitespace won't be inappropriately interpreted as a delimiter. read is given -r, which disables escape expansion, and -d $'\0', which makes the end-of-line delimiter NULL, matching the output from our find and sort pipeline.
The first chunk of data, which represents the oldest directory path preceded by its timestamp and a space, is read into the variable line. Next, parameter expansion is used with ${line#* }, which removes the shortest prefix matching the pattern "* ": everything from the beginning of the string up to and including the first space. This strips off the modification timestamp, leaving only the path to the file.
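A quick sketch of that expansion (the path is invented; note that only the first space, the delimiter after the timestamp, is consumed, so paths containing spaces survive intact):

```shell
line='1600000000.1 ./old backup dir'
file="${line#* }"    # strip shortest prefix matching '* '
printf '%s\n' "$file"
# → ./old backup dir
```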
At this point the file name is stored in $file and you can do anything you like with it, including rm -rf "$file".
Isn't there a simpler way?
No. Simpler ways are buggy. If you use ls -t and pipe to tail, you'll break on file names containing newlines. If you rm $(anything), file names containing whitespace will cause breakage. If you rm "$(anything)", file names with trailing newlines will cause breakage. Perhaps in specific cases you know for sure that a simpler way is sufficient, but you should never write such assumptions into scripts if you can avoid doing so.
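The newline problem can be demonstrated with a small sketch (GNU tools assumed, throwaway directory names):

```shell
tmp=$(mktemp -d)
mkdir "$tmp/$(printf 'bad\nname')"
# Line-based parsing sees two entries where there is one directory:
ls "$tmp" | wc -l
# NUL-terminated records still count it as a single entry:
find "$tmp" -mindepth 1 -maxdepth 1 -print0 | tr -cd '\0' | wc -c
```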
Edit
#!/usr/bin/env bash
dir="$1"
min_dirs=3
if [[ $(find "$dir" -maxdepth 1 -type d | wc -l) -ge $min_dirs ]]; then
    IFS= read -r -d $'\0' line < <(find "$dir" -maxdepth 1 -type d -printf '%T@ %p\0' \
        2>/dev/null | sort -z -n)
    file="${line#* }"
    ls -lLd "$file"
fi
A more complete solution to the problem, since it checks the directory count first.
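Putting the check and the deletion together for the original question, a sketch could look like the following (assuming GNU find/sort and bash; prune_oldest is a name I made up):

```shell
#!/usr/bin/env bash

# Keep at most $2 subdirectories of $1, deleting the oldest first.
# Names containing spaces or newlines are handled safely.
prune_oldest() {
    local dir=$1 keep=$2 line
    # -mindepth 1 excludes $dir itself; -printf '.' emits one byte per
    # directory, so wc -c counts entries without parsing names.
    while (( $(find "$dir" -mindepth 1 -maxdepth 1 -type d -printf '.' | wc -c) > keep )); do
        IFS= read -r -d '' line < <(
            find "$dir" -mindepth 1 -maxdepth 1 -type d -printf '%T@ %p\0' | sort -z -n)
        rm -rf -- "${line#* }"
    done
}
```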
Solution 2
You can use something like the following:
#!/bin/sh
keep=3
while [ "$(ls -1 | wc -l)" -gt "$keep" ]; do
    # -t sorts newest first, -c sorts by ctime, -r reverses: oldest first
    oldest=$(ls -1tcr | head -n 1)
    echo "remove $oldest"
    rm -rf "./$oldest"
done
It might be better to use find . -type d -maxdepth 1 instead of ls, though. It depends on the naming scheme you use for the directories. If they are naturally sorted correctly by their name, you can use find, sort and head or tail to get the oldest/newest directory. The ls approach uses the ctime attribute to sort.
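For such a name-sortable scheme (for example directories named YYYY-MM-DD), the find/sort/head pipeline might look like this sketch (GNU sort/head assumed, example names invented):

```shell
tmp=$(mktemp -d)
mkdir "$tmp/2021-01-01" "$tmp/2022-09-18"
# Lexicographic sort of NUL-terminated paths; the first record is the oldest.
oldest=$(find "$tmp" -mindepth 1 -maxdepth 1 -type d -print0 \
    | sort -z | head -z -n 1 | tr -d '\0')
printf '%s\n' "$oldest"
```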
Updated on September 18, 2022

Comments
-
AAaa over 1 year: Possible Duplicate: "Shell script for moving oldest files?" I have a backup directory that stores x other directories that require backing up. I need something that will run before another directory is moved to the backup, that will check whether the number of directories has reached x and, if it has, delete the oldest directory. It should be done in a bash script.
-
AAaa over 12 years: Thanks for the detailed answer! If I want to do this on a given directory, not the current one, do I have to cd <dir> before? Because I replaced the "." with a path and it doesn't seem to work.
-
Sorpigal over 12 years: @AAaa: If you replace the . with an (absolute) path it should work. If it were me I'd replace . with "$dir" and set dir to whatever I needed ahead of time. Otherwise, yes, you would need to cd first.
-
AAaa over 12 years: Thanks. Can you help me with a one-liner for this? I need to execute it on a remote server via ssh so it would remove the directory I get. Also, is IFS= required?
-
AAaa over 12 years: ssh host "cd $path;read -r -d $'\0' line < <(find . -maxdepth 1 -type d -printf '%T@ %p\0' 2>/dev/null | sort -z -n);echo "${line#* }"" prints nothing.
-
Sorpigal over 12 years: @AAaa: You're running into quote-and-escape issues. You need to make sure that the remote shell is the one that expands the variables, not the local one.
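Sorpigal's point can be sketched like this (assuming the remote login shell is bash, since the command relies on process substitution; host and $path are placeholders):

```shell
# Keep the command single-quoted so ${line#* } expands remotely, not locally.
cmd='IFS= read -r -d "" line < <(find . -maxdepth 1 -type d -printf "%T@ %p\0" 2>/dev/null | sort -z -n); printf "%s\n" "${line#* }"'
# remotely: ssh host "cd $path && $cmd"   # $path expands locally, $line remotely
bash -c "$cmd"                            # local stand-in for the remote bash
```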
-
Michael Mrozek over 12 years: This question is going to get closed as a duplicate, but you might want to adapt this answer for unix.stackexchange.com/questions/22674/… as well, since it's essentially the same procedure there, and this answer is really detailed.