Deleting oldest files to free space as needed on Linux
Instead of a find -mtime, list the files oldest-first with ls -rt, for example:
DIR=/tmp
FREESPACE=1000000   # minimum free space to keep, in 1K blocks (~1 GB)
find "$DIR" -type f | xargs ls -1rt | while read -r f ; do
    if [ "$(df --output=avail "$DIR" | tail -1)" -ge "$FREESPACE" ] ; then
        break
    fi
    # rm -f "$f"
done
Uncomment the rm -f line to have it actually delete anything.
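A variant that sidesteps parsing ls output entirely (which breaks on filenames with spaces) is to have GNU find print each file's mtime and sort on that instead. This is only a sketch under the same assumed DIR and FREESPACE values as above, and it still assumes no newlines in filenames:

```shell
#!/bin/sh
# Sketch: walk files oldest-first until enough space is free.
# Assumes GNU find/sort/df; FREESPACE is in 1K blocks (~1 GB).
DIR=/tmp
FREESPACE=1000000

# %T@ = mtime as epoch seconds, tab, then the path; sort numerically
find "$DIR" -type f -printf '%T@\t%p\n' | sort -n | cut -f2- |
while IFS= read -r f ; do
    if [ "$(df --output=avail "$DIR" | tail -1)" -ge "$FREESPACE" ] ; then
        break
    fi
    # rm -f -- "$f"   # uncomment to actually delete
done
```

For filenames that may contain newlines you would need the null-delimited forms (find -print0, sort -z), at the cost of some portability.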
However, I would typically do things a little differently. I'd put my cron logs in /var/log and let something like logrotate manage the disk space. Or, as another alternative, I'd put the logs into a separate subdirectory such as /tmp/mycronjob/* and keep a fixed number of days of log files. It's not as flexible as monitoring disk space, but it's a straightforward

find /tmp/mycronjob -mtime +30 -exec rm {} \;

and more predictable.
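For the logrotate route, a minimal drop-in rule might look like the sketch below. The file path, log name, and 30-day retention are assumptions for illustration, not from the original:

```
# /etc/logrotate.d/mycronjob  (hypothetical path and log name)
/var/log/mycronjob.log {
    daily          # rotate once per day
    rotate 30      # keep 30 rotated copies
    compress       # gzip old logs to save space
    missingok      # don't error if the log is absent
    notifempty     # skip rotation when the log is empty
}
```

This caps retention by age and count rather than by free space, which matches the "fixed number of days" approach above.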
yasith
Updated on September 18, 2022

Comments
-
yasith almost 2 years
I have a cron job that writes logs to /tmp. I want to automatically delete the oldest files when the free space on the disk becomes less than 1 GB. I'm trying to do df -ah /tmp, then take the 2nd line and use cut to get the avail column. Then, in a for loop, keep deleting files older than n days with -mtime until there's enough free space. I probably want to set this up as a cron job that runs daily as well.
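The df-plus-cut step described here can be sketched as follows. A hedge: column positions vary between df implementations, so where GNU coreutils is available, df --output=avail is the more robust choice:

```shell
#!/bin/sh
# Take the 2nd line of df's output and the 4th field (the avail column).
# Field positions assume a typical Linux df; long device names can wrap.
avail_kb=$(df -k /tmp | awk 'NR==2 {print $4}')

# GNU coreutils alternative that avoids counting fields:
# avail_kb=$(df --output=avail /tmp | tail -1)

echo "$avail_kb"
```

Either form yields free space in 1K blocks, so "less than 1 GB" is a comparison against 1000000 (or 1048576 for GiB).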
-
Morgan Christiansson almost 7 years
I think the rm -f should be after the conditional. Otherwise it will always delete at least 1 directory even if there's enough available space.
-
TOertel over 6 years
@MorganChristiansson true.
-
Kamil Maciorowski over 6 years
Parsing ls is flawed in general.