Why are partition size and df output different?
Solution 1
One reason the partition capacities can differ is that some space is reserved for root in case the partitions become full: if no space were reserved for root and the partitions filled up, the system could not function. However, this difference is usually only a few percent, so it does not fully explain the difference in your case. From the man page for df:
If an argument is the absolute file name of a disk device node containing a mounted file system, df shows the space available on that file system rather than on the file system containing the device node (which is always the root file system).
So df is really showing the size of your filesystem, which is usually the size of the device, but this may not be true in your case. Does your filesystem extend over the whole of your partition?
Does
resize2fs /dev/sda1
make any difference? This command tries to increase your filesystem to cover the entire partition. But make sure you have a backup if you try this.
Solution 2
The main difference arises because some tools treat 1 kilobyte as 1000 bytes, while others treat it as 1024 bytes.
Gnome Disk Utility shows the capacity using 1 kilobyte = 1000 bytes, because disk manufacturers describe disk sizes this way. This means your disk capacity is close to 154,000,000,000 bytes.
On the other hand, most operating systems treat 1 kilobyte as 1024 bytes. Tools such as df
and fdisk
use this convention. So 154,000,000,000 bytes / 1024 / 1024 / 1024 = 143.4 GB.
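The unit conversion is easy to check from a shell; here using the exact capacity of 153,859,653,632 bytes reported elsewhere in this thread:

```shell
# One byte count, two readings: decimal GB (disk vendors, Disk Utility)
# versus binary GiB (df, fdisk). Capacity figure taken from the question.
bytes=153859653632
awk -v b="$bytes" 'BEGIN {
  printf "decimal: %.1f GB\n", b / 1000^3   # what Disk Utility reports
  printf "binary:  %.1f GB\n", b / 1024^3   # what df and fdisk report
}'
```

Both lines describe the same disk; only the unit definition differs.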
As jlliagre rightly points out (and Gilles implies when asking for your fdisk
output), disk utility is telling you the size of your whole hard disk. But /dev/sda1
is a single partition on your hard disk. For example, your hard disk probably has some other partitions on it such as a 4-8 GB partition for swap (also known as virtual memory), and a boot partition which is usually around 100 MB.
You didn't post the output of fdisk -l /dev/sda
, so let's assume your swap partition is 8 GB. Now we're down to 135 GB.
Then, there are some other things that contribute to the difference.
For example, the file system uses some of the disk partition for metadata. Metadata is things like file names, file permissions, which parts of the partition belong to which files, and which parts of the partition are free. On my system, about 2% of the partition is used for this. Assuming yours is similar, it would bring the free space down to about 132 GB.
The file system can also reserve some space that can only be used by the root user. On my system, it's 5% of the partition, so in your case, it would mean a total capacity of about 125 GB.
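Putting the steps above together as rough running arithmetic (the 8 GB swap, ~2% metadata, and 5% root reserve are the assumptions made above, not measured values):

```shell
# Rough running total mirroring the steps above; all figures approximate.
awk 'BEGIN {
  gb    = 154e9 / 1024^3       # unit conversion: ~143.4
  fs    = gb - 8               # assume an 8 GB swap partition elsewhere
  meta  = fs * 0.98            # ~2% of the partition goes to metadata
  avail = meta - fs * 0.05     # 5% of the partition reserved for root
  printf "%.1f -> %.1f -> %.1f -> %.1f\n", gb, fs, meta, avail
}'
```

The end figure lands close to the 123G that df reports, which is the point of the walkthrough.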
The exact numbers depend on whether you are using ext2, ext3, ext4, fat, ntfs, btrfs, etc, and what settings were used when formatting the partition.
If you are using ext2, ext3, or ext4, sudo tune2fs -l /dev/sda1
can help understand where the space is going.
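The fields to look at in that output are Block count and Block size (their product is the filesystem size) and Reserved block count (the root reserve). A sketch that summarizes them; the sample output below is hypothetical, built from the question's block figures with a 5% reserve assumed:

```shell
# Summarize the relevant tune2fs -l fields. Against a real device:
#   sudo tune2fs -l /dev/sda1 | tune2fs_summary
tune2fs_summary() {
  awk -F: '
    /^Block count:/          { blocks   = $2 }
    /^Reserved block count:/ { reserved = $2 }
    /^Block size:/           { bsize    = $2 }
    END {
      printf "filesystem size:   %.0f bytes\n", blocks * bsize
      printf "reserved for root: %.0f bytes (%.1f%%)\n",
             reserved * bsize, 100 * reserved / blocks
    }'
}

# Hypothetical sample (block figures from the question, 5% reserve assumed):
printf '%s\n' 'Block count:              32668162' \
              'Reserved block count:     1633408' \
              'Block size:               4096' | tune2fs_summary
```

The reserved bytes are exactly the space df subtracts from "Avail" but tune2fs still counts.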
Solution 3
The missing space is probably used by inodes. Some amount may also be taken up by the MBR.
Solution 4
sda1 isn't your whole disk; it's the first primary partition on it. You might have created other partitions that don't show up in df output because they aren't mounted, or sda1 might not fill all usable space for some reason, or the filesystem might not be using all the space available in its partition.
fdisk -l
will tell you what your partition table looks like.
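fdisk reports sizes in 1K blocks. Converting the sda1 figure that appears later in this thread (150253568 blocks) to bytes shows the partition itself really is the 154 GB that Disk Utility reports:

```shell
# fdisk's Blocks column is in 1K units; sda1 value taken from this thread.
blocks=150253568
bytes=$(( blocks * 1024 ))
awk -v b="$bytes" 'BEGIN {
  printf "%.0f bytes = %.1f GB (decimal) = %.1f GiB\n",
         b, b / 1000^3, b / 1024^3
}'
```

So the gap is inside the partition (filesystem overhead and reserve), not in the partition table.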
Updated on September 18, 2022
Comments
-
xralf over 1 year
I have a partition /dev/sda1.
Disk utility shows it has the capacity of 154 GB.
df -h shows
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1       123G  104G   14G  89% /
devtmpfs       1006M  280K 1006M   1% /dev
none           1007M  276K 1006M   1% /dev/shm
none           1007M  216K 1006M   1% /var/run
none           1007M     0 1007M   0% /var/lock
none           1007M     0 1007M   0% /lib/init/rw
Why are the results different? Where are the missing 31 GB?
-
Gilles 'SO- stop being evil' about 13 years
Please post the output of
fdisk -l /dev/sda
(run as root).
-
penguin359 about 13 years
What filesystem are you using? If it's ext2/3/4 then you can use
tune2fs -l /dev/sda1
to examine it. Look at block count and block size and multiply them to get the filesystem size. Also,
fdisk -s /dev/sda1
to get the partition size in 1k-blocks. Multiply that by 1024 to get the size in bytes. That number should only be slightly larger than the filesystem. On my 40GB ext4 partition, it's 3072 bytes larger. If your filesystem is oddly smaller, you can try resizing it. For ext2/3/4, use
resize2fs /dev/sda1
. You can do this while using the computer normally.
-
xralf about 13 years
@Gilles sudo fdisk -l /dev/sda shows (I'm posting only the sda1 partition, because the others don't interest us).
Device Boot      Start      End      Blocks  Id  System
/dev/sda1            1    18706   150253568  83  Linux
-
xralf about 13 years
@penguin359 I have an ext4 filesystem. block count = 32668162, block size = 4096, 32668162 * 4096 = 133808791552; fdisk -s /dev/sda1 * 1024 = 153859653632. It seems to be oddly smaller. Can I resize it without loss of data? What caused it to be smaller?
-
Alen Milakovic about 13 years
@xralf: resizing should be safe; I've never lost data doing it. It might be slightly safer to do it on an unmounted partition, e.g. from a live CD. But still, get yourself a backup first. Always make a backup before doing major sysadmin work.
-
Alen Milakovic about 13 years
Ok, so some elementary arithmetic:
133808791552 / 153859653632 = 0.87
approx. However,
123 / 154 = 0.80
approx. Can anyone explain that discrepancy? Is it space reserved for root?
-
penguin359 about 13 years
A file system may reserve some space (4kB to 64kB) at the beginning for a bootloader. There may be space not accounted for in that equation, used by filesystem structures like inodes and superblocks. There is space reserved for root that does not show up in df, but does show up in the
block count * block size
formula. The default for mke2fs is 5% reserved for root. That's where your .87 vs. .80 comes from. With that being said, on my 40GB Ext4 partition I get: (10488436*4096)/(41953747*1024) = .9999999284. I bet after you resize you'll have .99 and .94 for the block count and df formulas.
-
penguin359 about 13 years
@Faheem My last comment should answer your question. As for resizing, I'd be a bad sysadmin if I failed to remind you to backup, Backup, BACKUP! Aside from that, I have resized my Ext3 partitions many times on-line with a live production system and never had any issues. Just do
resize2fs /dev/sda1
and watch the magic happen.
Alen Milakovic about 13 years
@penguin: Thanks for the clarification. And yes, backups are good. :-) Oh, and I'm not the OP, in case you thought that. :-)
-
xralf about 13 years
>> Does your filesystem extend over the whole partition? << I think so. It's ext4. What should I backup? I don't have any secondary disk to make a larger backup.
-
xralf about 13 years
Disk utility shows the exact capacity as 154 GB (153,859,653,632 bytes).
-
xralf about 13 years
>> sudo tune2fs -l /dev/sda1 << Which parameters tell me where the space is going?
-
xralf about 13 years
Of course. We're talking only about the /dev/sda1 partition, not the whole disk.
-
jlliagre about 13 years
That wasn't obvious from your question, as you refer to a disk utility without showing its output. You should edit it to make that clear. Telling us how you created the / file system would be useful too.
-
Alen Milakovic about 13 years
@xralf: (Ok, this is off-topic in terms of this question, but...) If you don't have a backup, set one up immediately. The only alternative to a good backup (and it is not a good alternative) is systematic use of a distributed version control system, pushing it to some remote location. But of course, you can't put everything under version control, e.g. media. Please excuse me if I am being a busybody.
-
Alen Milakovic about 13 years
@xralf: I would post your output results in the question itself; it is easier to see and read.
-
rvs about 13 years
resize2fs is quite a safe operation; there is no need to back up for it (if you have stable power and you use stable software). BTW, the default reserved block count is 5% for ext* filesystems.
-
xralf about 13 years
The relevant part of the output of disk utility is "capacity of sda1 partition = 154 GB".
-
Alen Milakovic about 13 years
@rvs: disagree re backup. You are right about the 5%, at least on Debian. I was misremembering. But I don't know if that is standard across all distributions.
-
rvs about 13 years
@Faheem Mitha: I agree that backup is required; I meant that there is no strong reason to do an extra backup just for resize2fs.
-
Alen Milakovic about 13 years
@rvs: Ok. I interpreted the poster as saying that he did not have any backup. Maybe I misunderstood.
-
jlliagre about 13 years
Please edit your original question to make that clear.
-
penguin359 about 13 years
@Faheem 5% is the default for mke2fs. *BSD used 8% as the default for their UFS1/2 filesystem. The BSD people claim that's partly so the filesystem driver can make good decisions about laying out data when the file system gets full, avoiding fragmentation. Running a filesystem more than 90% full can severely fragment it and degrade performance.
-
David Tonhofer almost 10 years
The MBR does not appear as it is in cylinder 0 (not even in the first partition, which often starts at sector 2048). However, there are copies of the superblock. Still, it's the inodes. See also: unix.stackexchange.com/questions/13547/… and the commands "lsblk /dev/sdX" and "dumpe2fs -h /dev/sdX"
-
r---------k about 9 years
resize2fs just got me 6GB on my home partition; I'm breathing! FYI, it is a kind of complicated partition table, and I moved a chunk of data a while ago from beginning to end, feeding said partition on the way, and never noticed the filesystem hadn't resized to fit the device size o/
-
Arkemlar about 5 years
resize2fs
helped me, thanks!