How to set (find) atime in seconds?
Solution 1
Note that when you use -mtime <timespec>, the <timespec> checks the age of the file as of the time find was started. Unless you run it on a very small directory tree, find will take several milliseconds (if not seconds or hours) to crawl the directory tree and do an lstat() on every file, so a precision shorter than a second doesn't necessarily make a lot of sense.
Also note that not all file systems support time stamps with subsecond granularity.
Having said that, there are a few options.
With the find of many BSDs and the one from schily-tools, you can do:

find . -atime -1s

to find files that were last accessed less than one second ago (compared to when find was started).
With zsh:
ls -ld -- **/*(Das-1)

(the D glob qualifier includes dot files; as-1 matches files accessed less than one second ago).
For subsecond granularity, with GNU tools, you can use a reference file whose atime you set with touch:
touch -ad '0.5 seconds ago' ../reference
find . -anewer ../reference
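The same reference-file idea extends to the hourly cron job the question asks about. A minimal sketch, assuming GNU find and touch; a real job would keep the reference file in a fixed location, while here a temporary directory stands in for the watched one so the sketch is self-contained:

```shell
# Reference-file approach: report files accessed since the last run,
# then reset the reference timestamp for the next run.
dir=$(mktemp -d)                        # stand-in for the watched directory
touch -d '1 hour ago' -- "$dir/.ref"    # pretend the last run was an hour ago
touch -- "$dir/data.txt"                # a freshly accessed file
find "$dir" -type f ! -name .ref -anewer "$dir/.ref" -print
touch -- "$dir/.ref"                    # reset the reference for the next run
```

Note that -anewer compares each file's access time against the reference file's modification time, which is why a plain touch of the reference is enough to reset it.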
Or with recent versions of perl:
perl -MTime::HiRes=lstat,clock_gettime,CLOCK_REALTIME -MFile::Find -le '
  $start = clock_gettime(CLOCK_REALTIME) - 0.5;
  find(
    sub {
      my @s = lstat $_;
      print $File::Find::name if @s and $s[8] > $start
    }, ".")'
Solution 2
With GNU find, you can use -amin instead of -atime. As you might guess, it means "File was last accessed n minutes ago."
That said, be aware that most modern systems default to the relatime mount option, which saves metadata writes by updating the atime only if the file was modified since it was last accessed, or if a threshold (usually 24 hours) has passed.
So you will probably either want to change that for the filesystem in question, or else look for another approach. incrond is a handy way to set up scripts that fire on filesystem activity without needing to write your own daemon.
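To see whether relatime (or noatime) is in effect for the filesystem holding a given path, one option, assuming findmnt from util-linux is available, is:

```shell
# Show the mount options of the filesystem holding /tmp; look for
# relatime or noatime in the output.
findmnt -no OPTIONS -T /tmp
# To track every access precisely, the filesystem can be remounted
# with strictatime (requires root):
#   mount -o remount,strictatime /some/mountpoint
```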
Comments
- Ville (almost 2 years ago): How do I set -atime in milliseconds, seconds, or minutes? The default is days: "-atime n: File was last accessed n*24 hours ago. When find figures out how many 24-hour periods ago the file was last accessed, any fractional part is ignored, so to match -atime +1, a file has to have been accessed at least two days ago." I'd like to run a cron job, say hourly, to check whether files in a particular directory have been accessed within that time frame. Entering the time as a decimal doesn't seem to work, i.e. find . -atime 0.042 -print. But maybe there is a better solution anyway, another command perhaps? Or perhaps this can't be done. For finding files modified in the last x minutes there is -mmin, which allows setting the time in minutes. Perhaps the absence of such an option for the access time implies that the information is not stored the same way? I'm using Ubuntu 16.04.
- thrig (over 7 years ago): Something like auditd or one of the filesystem on-inode-change-notify tools might be better options (like, is it okay to not be informed should cron or the cron job not run for some reason?)
- Baard Kopperud (over 7 years ago): Another solution with find: as was pointed out, there is -amin... However, maybe -anewer could work? This uses the modification time of a reference file, which you could let your cron job touch every hour, and then tests whether some other file has been accessed after that. So your cron job would first check the directory for files accessed after it last touched the reference an hour ago, then it would re-touch the reference.
- Baard Kopperud (over 7 years ago): So find directory/ -anewer "ref.file" -type f -print should work...
- Charles Duffy (over 7 years ago): If you're looking for a better solution, it'd be helpful to describe your goal. Is your goal really "finding files accessed in the last N seconds", or is it something like "feeding notices of file access into a queue, no more than N seconds after they occur"? If it's the latter, the ideal approach probably won't involve polling at all.
- Charles Duffy (over 7 years ago): Polling is pretty error-prone as an approach to start with -- something holds up your job and you miss data. If you can describe your question in a way that doesn't presume a polling-based solution, an answer is likely to be better -- lower-overhead, more immediate results, and potentially fewer race conditions.
- Ville (over 7 years ago): @CharlesDuffy Checking the most recent access time of an encrypted "vault" folder was my first thought for assessing when the vault should be closed. But then I realized there can be long-running processes that might have used a key from the vault at startup, and that often is not indicative of whether the key is still in active use. So I went another route (looking at the last activity on the authenticated connection rather than the last access of the associated key). But this was useful information to be aware of nevertheless.