btrfs: not enough free disk space, but device not fully used
You're using data raid0, which means striped without parity. Once you fill ANY disk in a raid0 array, the array is full because you no longer have room on that disk to write its piece of a stripe.
That ~3TB device is simply too much larger than your other devices for btrfs raid0 to make full use of it. To force the system to use the whole disk, you would have to partition it and add both partitions as separate devices. DON'T DO THAT, by the way, as it will do weird and awful things to performance, which I would assume is pretty critical to you if you're using raid0...?
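The size math behind this can be sketched with made-up device sizes. This is a simplified model of the point above (a fixed-width stripe fills at the rate of the smallest device); real btrfs can narrow stripes as devices fill up, so in practice it does somewhat better, but the oversized disk still cannot be fully used:

```shell
# Simplified raid0 model: with a stripe across every device, the array
# is full as soon as the smallest device fills.  Sizes in GB are
# hypothetical, chosen to resemble the layout in the question.
sizes="303 303 325 2794"
n=0; smallest=""; total=0
for s in $sizes; do
    n=$((n + 1))
    total=$((total + s))
    if [ -z "$smallest" ] || [ "$s" -lt "$smallest" ]; then
        smallest=$s
    fi
done
usable=$((n * smallest))
echo "usable with full-width raid0 stripes: ${usable}GB of ${total}GB"
```

With the single profile instead, all of the raw capacity would be available for data allocation.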
Another note: 3.2 is a pretty ancient kernel to be running btrfs IMO. Btrfs is still in HEAVY development, and you really should be tracking much newer kernels if you're going to run btrfs.
Using Btrfs with Multiple Devices - Filesystem creation: When you have drives with differing sizes and want to use the full capacity of each drive, you have to use the single profile for the data blocks, rather than raid0:
# Use full capacity of multiple drives with different sizes (metadata mirrored, data not mirrored and not striped)
mkfs.btrfs -d single /dev/sdb /dev/sdc
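For a filesystem that already exists, it should also be possible to convert the data profile in place with a balance filter (balance filters exist since kernel 3.3; /home below is just a placeholder mount point, and the command is echoed rather than executed in this sketch):

```shell
# Build the in-place conversion command for data block groups
# (raid0 -> single).  Drop the echo to actually run it as root.
MOUNTPOINT=/home   # placeholder: your btrfs mount point
cmd="btrfs balance start -dconvert=single $MOUNTPOINT"
echo "$cmd"
```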
Guss
I'm a self-taught software developer, system administrator and all-round code guy. I've been doing software development, QA, developer support, system administration, and even some graphic design work for as long as I can remember (going back 30 years), on commercial projects as well as open source and free software, and I enjoy both. Coding is fun, that's why it is worth doing - I hope it never becomes a chore :-)
Updated on September 18, 2022

Comments
-
Guss over 1 year
I'm using btrfs for my home directory, which spans multiple devices. In total I should have around 7.3TB of space - and that's what df shows, but I ran out of space after using only 5.7TB of data:

# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdd3       7.3T  5.7T   63G  99% /home
btrfs has this to say for itself:
# btrfs fi df /home
Data, RAID0: total=5.59TB, used=5.59TB
System, RAID1: total=8.00MB, used=328.00KB
System: total=4.00MB, used=0.00
Metadata, RAID1: total=11.50GB, used=8.22GB
Which is weird, because there should have been enough partitions to support 7.3TB (also, the btrfs data configuration should have been "single" and not RAID0).
Here is what btrfs show says:

# btrfs fi show
Label: none  uuid: 2dd4a2b6-c672-49b1-856b-3abdc12d56a5
        Total devices 9 FS bytes used 5.59TB
        devid    2 size 303.22GB used 303.22GB path /dev/sdb1
        devid    3 size 303.22GB used 303.22GB path /dev/sdb2
        devid    4 size 325.07GB used 324.50GB path /dev/sdb3
        devid    1 size 2.73TB used 1.11TB path /dev/sdc1
        devid    5 size 603.62GB used 589.05GB path /dev/sdd1
        devid    6 size 632.22GB used 617.65GB path /dev/sdd2
        devid    7 size 627.18GB used 612.61GB path /dev/sdd3
        devid    8 size 931.51GB used 931.51GB path /dev/sde1
        devid    9 size 931.51GB used 931.51GB path /dev/sde2
As you can see, devid 1 (which is the last disk I added) has only 1.11TB used out of 2.73TB available in the partition (it's a supposedly 3TB drive, but only 2.7TB usable :-[ ).
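As a quick cross-check (not from the original thread), the per-device used column above does add up to roughly the reported 5.59TB of data plus two copies of the RAID1 metadata, so the accounting is self-consistent: the missing capacity is essentially the unallocated space on devid 1.

```shell
# Sum the per-device "used" figures from btrfs fi show (in GB; the
# 1.11TB entry is converted at 1TB = 1024GB) and compare against the
# data total plus both RAID1 metadata copies.
used=$(awk 'BEGIN { printf "%.2f", 303.22 + 303.22 + 324.50 + 1.11*1024 \
    + 589.05 + 617.65 + 612.61 + 931.51 + 931.51 }')
expected=$(awk 'BEGIN { printf "%.2f", 5.59*1024 + 2*11.50 }')
echo "allocated: ${used}GB  data+metadata: ${expected}GB"
```

The ~3GB gap between the two figures is the system chunks plus rounding in the reported sizes.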
I've searched far and wide but couldn't figure out how to make btrfs use more of the partition. What am I missing?
Notes:
- I'm using Ubuntu 12.04.2 with the current kernel 3.2.0-23.
- This is after I've run btrfs fi resize max /home and btrfs fi balance /home
-
Terry Wang about 11 years
1st, for btrfs, never trust plain df output. You are supposed to be using btrfs filesystem df /path. 2nd, it is important to let others know how the btrfs file system was created for your /home: for example, the number of block devices, and how metadata (RAID1, which is the default) and data (RAID0, from what I can see) span across the devices. 3rd, try to keep a minimum number of snapshots, because they silently consume your disk space (Copy-on-Write...).
-
Guss about 11 years
@TerryWang: 1st+2nd: you can see the output of btrfs fi df in the question. Also, the filesystem in question has no snapshots.
-
Terry Wang almost 11 years
Guss, I came across this kernel patch when using ksplice uptrack to patch my VPS. I think this may be related to your issue:
Install [3fyotdy2] Btrfs filesystem reports no free space when there is.
-
Terry Wang almost 11 years
I cannot find any further info either. That system was running 3.2.0-41-generic, and ksplice uptrack automatically applied the kernel patch to it (I set it to do so). If you are running 3.2.0-44-generic, it should already include the fix.
-
Guss almost 11 years
Then it's probably not relevant - I was running 3.5. Anyway, this question has become moot for me - it took me about a month, but I rebuilt the pool on Ubuntu 13.04 with kernel 3.8 and it currently works fine.
-
bain about 10 years
Duplicate of btrfs and missing free space
-
Guss about 10 years
@bain, while the scenario in #170044 looks similar, the output from btrfs fi df is completely different, so the answer in #170044 (which relies on that piece of data) is not applicable here. I was familiar with #170044 and still decided to ask this question.
-
bain about 10 years
Sorry, you are right, it is a different issue.
-
Guss about 10 years
Actually, I'm not using raid0 - I'm using "single". I'm not sure why btrfs df says raid0. Performance is not that important, but as I understand it the options for btrfs setup are "single" (like raid0 but without striping), raid0, raid1 (lose half of your storage to duplicated data) and raid5 (doesn't actually work). So the choice of raid0 isn't that surprising.
-
Guss about 10 years
BTW - I eventually rebuilt the file system on the current Ubuntu stable, which uses the 3.11 kernel - and now I no longer have this problem.
-
bain about 10 years
Perhaps you forgot to use mkfs.btrfs -d single to format each drive?
-
bain about 10 years
I think Jim got it right. To use multiple drives with different sizes you need to format them all together with mkfs.btrfs -d single /dev/sda /dev/sdb /dev/sdc /dev/sdd... . If you do this, btrfs fi df will show single and not RAID0. The fact that the output in the question shows the replication as RAID0 and not single indicates that this was likely the issue.