tail: inotify cannot be used, reverting to polling: Too many open files


Solution 1

This was solved for me by following the instructions on http://peter-butkovic.blogspot.com/2013/08/tail-inotify-resources-exhausted.html

Permanent solution (preserved across restarts): adding the line

fs.inotify.max_user_watches=1048576

to:

/etc/sysctl.conf

fixed the limit value permanently (even across restarts).

Then run:

sysctl -p
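
A minimal sketch of the whole procedure, first testing the change temporarily before making it permanent (the value is the one from above; adjust as needed):

    # Show the current per-user limit on inotify watches.
    sysctl fs.inotify.max_user_watches

    # Temporary increase (lost on reboot), to verify that it fixes the problem.
    sudo sysctl -w fs.inotify.max_user_watches=1048576

    # Permanent increase: persist the setting, then reload the configuration.
    echo 'fs.inotify.max_user_watches=1048576' | sudo tee -a /etc/sysctl.conf
    sudo sysctl -p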

Solution 2

I think that answer is incomplete (it doesn't say anything about the system-wide limit on open files).

There are two limits regarding the maximum number of open files:

  1. Maximum number of open files per process.

    • You can see the current value of this limit with: ulimit -n
    • You can change this limit with: ulimit -n new_limit_number
    • Here is a command to list the top 10 processes with the most files open:

      lsof | awk '{ print $2; }' | sort -rn | uniq -c | sort -rn | head
      
  2. Maximum number of open files system-wide.

    • You can see the current value of this limit with: cat /proc/sys/fs/file-max
    • You can change this limit with: echo new_limit_number > /proc/sys/fs/file-max (not preserved across reboots; see the sketch after this list)
    • Count all open file handles: lsof | wc -l
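
Writing to /proc as above takes effect immediately but is lost on reboot. A minimal sketch of making the system-wide limit persistent via sysctl instead (the value below is only an example, not a recommendation):

    # Check the current system-wide limit on open file handles.
    cat /proc/sys/fs/file-max

    # The same value, read through sysctl.
    sysctl fs.file-max

    # Persist a higher limit (example value) and reload the configuration.
    echo 'fs.file-max=500000' | sudo tee -a /etc/sysctl.conf
    sudo sysctl -p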

Solution 3

sysctl fs.inotify.max_user_instances shows the per-user limit on inotify instances.

I ran into this myself: the system-wide limits were all high enough, but the per-user settings are usually relatively low by default. You can increase them in sysctl.conf and reload with sysctl -p.
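
A minimal sketch of checking and raising those per-user limits (the numbers are only examples):

    # Current per-user limits: inotify instances and watches.
    sysctl fs.inotify.max_user_instances
    sysctl fs.inotify.max_user_watches

    # Raise the instance limit temporarily to test (example value).
    sudo sysctl -w fs.inotify.max_user_instances=512

    # Make it permanent by adding the line to /etc/sysctl.conf and reloading.
    echo 'fs.inotify.max_user_instances=512' | sudo tee -a /etc/sysctl.conf
    sudo sysctl -p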

Solution 4

Most likely, you've run out of inotify watches. Are you running a file synchronization tool (e.g. Dropbox) in the background?

On Linux, tail -f uses the inotify mechanism by default to monitor file changes. If you've used up all the inotify watches (8192 by default), then tail -f has to fall back to polling to detect changes to that file.

Of course, you can modify the maximum number of inotify watches.
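
If you want to find out which processes are holding inotify resources before raising the limit, here is a rough sketch that inspects /proc (each anon_inode:inotify file descriptor is one inotify instance; run it as root to see other users' processes):

    # Count inotify instances per PID and show the heaviest consumers.
    find /proc/[0-9]*/fd -lname anon_inode:inotify 2>/dev/null \
        | cut -d/ -f3 \
        | sort | uniq -c | sort -rn | head

Map the PIDs back to process names with ps to see whether a background sync tool is responsible.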

reference:
http://www.quora.com/How-is-tail-f-implemented
http://peter-butkovic.blogspot.com/2013/08/tail-inotify-resources-exhausted.html
https://serverfault.com/questions/510708/tail-inotify-cannot-be-used-reverting-to-polling-too-many-open-files

Solution 5

Run

ps aux | grep tail

to check whether too many tail commands are running, for example ones spawned by crontab.
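
As noted in the comments below, grep will match itself in that output; a small variation using pgrep avoids this:

    # Count running tail processes, then list them with their full command lines.
    pgrep -c tail
    pgrep -a tail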


Comments

  • gbag over 1 year

    When I try to tail -f catalina.out, I get the error:

    tail: inotify cannot be used, reverting to polling: Too many open files 
    

    I tried the answer in this post: Too many open files - how to find the culprit

    lsof | awk '{ print $2; }' | sort -rn | uniq -c | sort -rn | head
    

    When I ran the above command, the output was

    17 6115
    13 6413
    10 6417
    10 6415
    9 6418
    9 6416
    9 6414
    8 6419
    4 9
    4 8

    I don't see any process having 1024 files open. Isn't the number of open files 17, 13, 10, 10, 9? Or am I understanding it wrong? And all of these were bash, sshd, or apache2; tomcat had the number 4.

    I also did lsof | grep tail | wc -l which returned 20. These numbers aren't huge, so why does tail -f catalina.out fail?

  • Alexander Mills over 7 years
    ha this actually worked, tailing way too many files
  • Ruslan Stelmachenko over 6 years
    Increasing file descriptors doesn't help me. My tail message was slightly different: tail: inotify resources exhausted. This answer helped me. You can also use sudo sysctl -w fs.inotify.max_user_watches=1048576 && sysctl -p to test if it helps without permanently modifying it. This post also helps nefaria.com/2014/08/tail-inotify-resources-exhausted
  • Christia almost 6 years
    How do I translate the data? Can you explain what each piece of information means and what to do about it? For example: root 20161 0.0 0.0 11132 1044 pts/0 S+ 17:27 0:00 grep tail
  • tangxinfa over 5 years
    It is a problem only if too many processes match; the matched line containing "grep" is generated by the command itself. Please use "pgrep tail" instead.