disk space keeps filling up on EC2 instance with no apparent files/directories


Solution 1

If a file that has been deleted is still open by a process, the space will not be reclaimed until the process closes the file (or is killed). If you cannot identify the process that is holding a file open, then a reboot will help, as that will close all running processes (and so close all open files).
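To confirm this is what is happening, lsof can list files that are deleted but still held open. A minimal sketch (lsof output formatting varies slightly between versions, so adjust the grep as needed):

sudo lsof +L1                  # open files with a link count below 1, i.e. deleted but still open
sudo lsof | grep '(deleted)'   # alternative: lsof usually marks such files with "(deleted)"

The PID and size columns of that output tell you which process to deal with and roughly how much space you stand to get back.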

Another consideration is filesystem corruption. As this is your root filesystem, you may need to reboot and force a filesystem check on restart (shutdown -rF now). Be sure that you are configured to perform a non-interactive scan and fix unless you have KVM access or similar (so you can interact during the boot process); otherwise your remote machine will hang waiting for local input if it finds errors during the check.
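If you cannot watch the console, one common sysvinit-era approach on CentOS-style systems (like the Amazon Linux AMI in the question) is to request the check before rebooting; treat this as a sketch and check your distribution's init scripts for the exact behaviour:

sudo touch /forcefsck    # ask the init scripts to run fsck on the next boot
sudo shutdown -rF now    # or let shutdown's -F flag set that for you

Whether the resulting fsck runs non-interactively (effectively fsck -y) is distribution-specific, so verify that setting before rebooting a machine you can only reach over SSH.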

Edit: (as per question in comment)

If you know which process is holding the file open, you can just restart that particular process (either via service stop/start/restart scripts or by killing and restarting it more manually) rather than the whole instance.
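For example, assuming lsof showed that a web server was the culprit (httpd here is just a placeholder service name):

sudo lsof +L1 | grep httpd        # confirm which process is holding the deleted file
sudo /etc/init.d/httpd restart    # restart only that service; the space is freed when it closes the file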

Also, some programs can reset themselves without restarting, which usually includes closing and reopening log files (solving your problem if it is indeed due to a deleted log file that is still open) in response to a SIGHUP signal (sent via kill). Resetting processes this way is sometimes preferable as it reduces (often to zero) the amount of time a server process is unable to accept new connections. This is often what happens when you run /etc/init.d/<service> reload instead of /etc/init.d/<service> restart (in fact I've seen restart implemented this way, so to do a proper full reset you have to do /etc/init.d/<service> stop; /etc/init.d/<service> start).
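A short sketch of the SIGHUP approach (the PID and service name are placeholders; check your daemon's documentation to confirm it reopens its logs on SIGHUP):

sudo kill -HUP 12345                 # ask the process to reopen its files without a full restart
sudo /etc/init.d/<service> reload    # or use the init script's reload target where one exists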

Solution 2

Managed to reclaim the space, without restarting the process holding it open, through the fd links in /proc/<pid>/fd/.

1) go to the holding process's file descriptor directory:

cd /proc/`lsof|grep '<deleted_file>'|head -1|awk '{print $2}'`/fd

2) find process fd link:

ll | grep <deleted_file>

3) overwrite it with nothing (all data in the deleted file will be lost):

 > <fd>
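Put together, and assuming /var/log/app.log is a hypothetical deleted log file that is still open, the whole procedure looks roughly like this (truncating through /proc works because the fd entry still refers to the live inode):

PID=$(lsof | grep '/var/log/app.log' | head -1 | awk '{print $2}')   # PID of the process holding the deleted file
ls -l /proc/$PID/fd | grep 'app.log'                                 # note the fd number, e.g. 4
: > /proc/$PID/fd/4                                                  # truncate the still-open file; its blocks are freed immediately

Running df -h before and after should show the used space drop straight away.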
Comments

  • sasher, over 1 year ago

    How come the OS shows 6.5G used but I see only 3.6G in files/directories?

    Running as root on an Amazon Linux AMI (seems like CentOS), with lots of free memory available, no swapping going on, and no apparent file-descriptor issue. The only thing I can think of is a log file that was deleted while applications were still appending to it.

    Disk space usage is slowly but continuously rising towards full capacity (~1K/min, with very small decreases from time to time).

    Any explanation? Solution?

    du --max-depth=1 -h /
    1.2G /usr
    4.0K /cgroup
    22M /lib64
    11M /sbin
    19M /etc
    52K /dev
    2.1G /var
    4.0K /media
    0 /sys
    4.0K /selinux
    du: cannot access `/proc/14024/task/14024/fd/4': No such file or directory
    du: cannot access `/proc/14024/task/14024/fdinfo/4': No such file or directory
    du: cannot access `/proc/14024/fd/4': No such file or directory
    du: cannot access `/proc/14024/fdinfo/4': No such file or directory
    0 /proc
    18M /home
    4.0K /logs
    8.1M /bin
    16K /lost+found
    12M /tmp
    4.0K /srv
    35M /boot
    79M /lib
    56K /root
    67M /opt
    4.0K /local
    4.0K /mnt
    3.6G /

    df -h

    Filesystem      Size  Used Avail Use% Mounted on
    /dev/xvda1      7.9G  6.5G  1.4G  84% /
    tmpfs           3.7G     0  3.7G   0% /dev/shm

    sysctl fs.file-nr
    fs.file-nr = 864    0   761182

  • sasher, over 11 years ago
    If I've got the process, and I know the deleted file's path, is there a way to free this floating space without restarting it?
  • sasher, over 11 years ago
    Managed to reclaim the space without a restart of the process holding it. 1) go to the holding process's file descriptor path: cd /proc/`lsof|grep '<deleted_file>'|awk '{print $2}'`/fd 2) find the process fd link: ll | grep <deleted_file> 3) overwrite it with nothing (all data will be lost): > <fd>
  • dmgig, over 9 years ago
    In my case, it was mysqld holding open a massive log I thought I had cleaned out. I restarted mysql and it freed up the space. Thx.