df shows negative values for used?
Solution 1
I think it is file system corruption. You should unmount the partition and run fsck. Also check the logs and the console for any file system errors.
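A quick sketch of that log check (the grep patterns and the /var/log/messages path are typical for a CentOS 5 box; adjust for your setup):

```shell
# Search the kernel ring buffer for ext3 or I/O errors; grep exits
# non-zero when nothing matches, so "|| true" keeps a script going.
dmesg | grep -iE 'ext3|i/o error' || true

# CentOS 5 syslogs to /var/log/messages by default.
{ [ -f /var/log/messages ] && grep -i 'ext3' /var/log/messages | tail -n 20; } || true
```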
Solution 2
I think this might mean that you have gone beyond what is reserved as root-only space (the default is 5% on ext3, I think):
$ sudo tune2fs -l /dev/sda1 | grep -i 'Reserved block count'
Reserved block count: 1877194
Reserved block count is a certain number of blocks that only the root user can use once the disk is almost full (this prevents a normal user from filling up the file system and causing things to break). From man tune2fs:
-m reserved-blocks-percentage
Set the percentage of the filesystem which may only be allocated by privileged processes. Reserving some number of filesystem blocks for use by privileged processes is done to avoid filesystem fragmentation, and to allow system daemons, such as syslogd(8), to continue to function correctly after non-privileged processes are prevented from writing to the filesystem. Normally, the default percentage of reserved blocks is 5%.
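As a sanity check, the reserved block count from the tune2fs output above can be converted into a size. This assumes the common ext3 block size of 4096 bytes; confirm yours with tune2fs -l /dev/sda1 | grep 'Block size':

```shell
# Reserved block count (from the tune2fs output above) times the block
# size gives the root-reserved area. 4096 bytes is assumed here.
reserved_blocks=1877194
block_size=4096
echo "$(( reserved_blocks * block_size / 1024 / 1024 )) MiB reserved for root"
# prints "7332 MiB reserved for root"
```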
So I think something running as the root user is taking up space fast. You can use du -hcs / and drill down from there to find which files are using the space. If you think something might be creating large files, you could also use the find command.
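For example (a sketch; the depth and the 500 MB threshold are arbitrary choices):

```shell
# Per-directory totals (KiB), largest first; -x stays on this
# filesystem so /proc, /dev/shm and other mounts are skipped.
du -x --max-depth=1 / 2>/dev/null | sort -rn | head -n 15

# Individual files over 500 MB; -xdev keeps find on the same filesystem.
find / -xdev -type f -size +500M -exec ls -lh {} \; 2>/dev/null
```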
GriffinHeart
Updated on September 17, 2022

Comments
-
GriffinHeart over 1 year
I have a CentOS 5.2 server, and running df -h I get this:

Filesystem                       Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00  672G -551M  638G   0% /
/dev/hda1                         99M   12M   82M  13% /boot
tmpfs                            2.0G     0  2.0G   0% /dev/shm

That space wasn't even near 10% usage the last time it showed a correct value. I'm at a loss as to what's going on.
EDIT #1
OK, so I had to reboot the server because SSHD hung; I'm guessing it was related to this.

Some new info: after rebooting, df -h showed 12 GB used (2%), but if I run du -hcs / it shows 46 GB total. There's a big difference here.

EDIT #2
After about 15 minutes of uptime, df -h is showing negative values again:

Filesystem                       Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00  672G  -24G  660G    - /
EDIT #3
More info: I ran fsck, and this is the output:

Checking all file systems.
[/sbin/fsck.ext3 (1) -- /] fsck.ext3 -f -y /dev/VolGroup00/LogVol00
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/VolGroup00/LogVol00: 204158/181633024 files (1.3% non-contiguous), 9224806/181633024 blocks
[/sbin/fsck.ext3 (1) -- /boot] fsck.ext3 -f -y /dev/hda1
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/boot: 34/26104 files (5.9% non-contiguous), 15339/104388 blocks
-
GriffinHeart almost 14 years
So this logical volume is mounted on /; is it possible to safely run fsck without having physical access?
-
Mircea Vutcovici almost 14 years
You have to stop all processes that have files open for writing and then remount / read-only, so you have to stop pretty much everything. I would experiment with this on a VM with the same OS installed. To mount the root file system read-only: mount / -o remount,ro. After you run fsck with the file system mounted read-only, you have to remount it read-write and start the daemons, or better, just reboot/reset.
-
GriffinHeart almost 14 years
I've run fsck, and from the output it doesn't seem anything is wrong. I'll update with the log.
-
Mircea Vutcovici almost 14 years
Maybe you have a hardware problem. Run memtest86+; for this you will have to reboot the server, and the downtime will be a good few hours. Run other hardware tests too, for the CPU, etc. Check the temperatures on the server.