btrfs: not enough free disk space, but device not fully used

You're using data raid0, which means striped without parity. Once you fill ANY disk in a raid0 array, the array is full because you no longer have room on that disk to write its piece of a stripe.
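
For example, here is that arithmetic under the simplest model, using a hypothetical two-device array (the numbers are made up to roughly resemble the question's layout, not taken from it): every stripe writes an equal piece to each device, so capacity is bounded by the smallest member and the rest of the larger device is stranded.

# Hypothetical two-device raid0, sizes in GB
small=300    # the smallest device in the array
large=3000   # the ~3TB device
# Striping consumes equal space on both devices, so the array fills when 'small' does:
echo "usable: $(( 2 * small )) GB, stranded: $(( large - small )) GB"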

That ~3TB device is simply so much larger than your other devices that making full use of it in btrfs raid0 isn't practical. To force the system to use the whole disk, you'd have to partition it and then add both partitions as separate devices. DON'T DO THAT, by the way, as it will do weird and awful things to performance, which I would assume is pretty critical to you if you're using raid0...?

Another note: 3.2 is a pretty ancient kernel to be running btrfs IMO. Btrfs is still in HEAVY development, and you really should be tracking much newer kernels if you're going to run btrfs.

From Using Btrfs with Multiple Devices - Filesystem creation: when you have drives with differing sizes and want to use the full capacity of each drive, you have to use the single profile for the data blocks, rather than raid0:

# Use full capacity of multiple drives with different sizes (metadata mirrored, data not mirrored and not striped)  
mkfs.btrfs -d single /dev/sdb /dev/sdc
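
If the filesystem already exists with raid0 data chunks (as in this question), it should also be possible to convert the data profile in place with a balance filter instead of recreating everything. A minimal sketch, assuming a kernel new enough to have balance filters (they landed around 3.3, so the 3.2 kernel mentioned above would need an upgrade first):

# Rewrite all data chunks from raid0 to single, in place
# (this touches every data chunk, so expect it to take a long time)
btrfs balance start -dconvert=single /home

# Afterwards the Data line should say "single" rather than "RAID0"
btrfs fi df /home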

Comments

  • Guss
    Guss over 1 year

    I'm using btrfs for my home directory, which spans multiple devices. In total I should have around 7.3TB of space - and that's what df shows, but I ran out of space after using only 5.7TB of data:

    # df -h
    Filesystem      Size  Used Avail Use% Mounted on
    /dev/sdd3       7.3T  5.7T   63G  99% /home
    

    btrfs has this to say for itself:

    # btrfs fi df /home
    Data, RAID0: total=5.59TB, used=5.59TB
    System, RAID1: total=8.00MB, used=328.00KB
    System: total=4.00MB, used=0.00
    Metadata, RAID1: total=11.50GB, used=8.22GB
    

    Which is weird, because the partitions should add up to the full 7.3TB (also, the btrfs data configuration should have been "single" and not RAID0).

    Here is what btrfs show says:

    # btrfs fi show
    Label: none  uuid: 2dd4a2b6-c672-49b1-856b-3abdc12d56a5
        Total devices 9 FS bytes used 5.59TB
        devid    2 size 303.22GB used 303.22GB path /dev/sdb1
        devid    3 size 303.22GB used 303.22GB path /dev/sdb2
        devid    4 size 325.07GB used 324.50GB path /dev/sdb3
        devid    1 size 2.73TB used 1.11TB path /dev/sdc1
        devid    5 size 603.62GB used 589.05GB path /dev/sdd1
        devid    6 size 632.22GB used 617.65GB path /dev/sdd2
        devid    7 size 627.18GB used 612.61GB path /dev/sdd3
        devid    8 size 931.51GB used 931.51GB path /dev/sde1
        devid    9 size 931.51GB used 931.51GB path /dev/sde2
    

    As you can see, devid 1 (which is the last disk I added) has only 1.11TB used out of 2.73TB available in the partition (it's a supposedly 3TB drive, but only 2.7TB is usable :-[ ).

    I've searched far and wide but couldn't figure out how to make btrfs use more of the partition. What am I missing?

    Notes:

    1. I'm using Ubuntu 12.04.2 with the current kernel 3.2.0-23.
    2. This is after I ran btrfs fi resize max /home and btrfs fi balance /home
    • Terry Wang
      Terry Wang about 11 years
      1st, for btrfs, never trust df/du output. You are supposed to use btrfs filesystem df /path. 2nd, it is important to let others know how the btrfs file system was created for your /home - for example, the number of block devices, and how metadata (RAID1, which is the default) and data (RAID0, from what I can see) span across the devices. 3rd, try to keep a minimum number of snapshots, because they silently consume your disk space (Copy-on-Write...).
    • Guss
      Guss about 11 years
      @TerryWang: 1st+2nd: you can see the output of btrfs fi df in the question. Also, the filesystem in question has no snapshots.
    • Terry Wang
      Terry Wang almost 11 years
      Guss, I came across this kernel patch when using ksplice uptrack to patch my VPS. I think this may be related to your issue. Install [3fyotdy2] Btrfs filesystem reports no free space when there is.
    • Terry Wang
      Terry Wang almost 11 years
      I cannot find any further info either. On that system it was running 3.2.0-41-generic, ksplice uptrack automatically (I set it to be) applied the kernel patch to it. If you are running 3.2.0-44-generic it should have included the fix.
    • Guss
      Guss almost 11 years
      Then it's probably not relevant - I was running 3.5. Anyway, this question has become moot for me - it took me about a month, but I rebuilt the pool on Ubuntu 13.04 with kernel 3.8 and it currently works fine.
    • bain
      bain about 10 years
    • Guss
      Guss about 10 years
      @bain, while the scenario in #170044 looks similar, the output from btrfs fi df is completely different, so the answer in #170044 (which relies on that data) is not applicable here. I was familiar with #170044 and still decided to ask this question.
    • bain
      bain about 10 years
      Sorry, you are right it is a different issue.
  • Guss
    Guss about 10 years
    Actually, I'm not using raid0 - I'm using "single". I'm not sure why btrfs df says RAID0. Performance is not that important, but as I understand it the options for a btrfs setup are "single" (like raid0 but without striping), raid0, raid1 (lose half of your storage to duplicated data) and raid5 (doesn't actually work). So the choice of raid0 isn't that surprising.
  • Guss
    Guss about 10 years
    BTW - I eventually rebuilt the file system on the current Ubuntu stable, which uses the 3.11 kernel - and now I no longer have this problem.
  • bain
    bain about 10 years
    Perhaps you forgot to use mkfs.btrfs -d single to format each drive?
  • bain
    bain about 10 years
    I think Jim got it right. To use multiple drives with different sizes you need to format them all together with mkfs.btrfs -d single /dev/sda /dev/sdb /dev/sdc /dev/sdd.... If you do this, btrfs fi df will show single and not RAID0. The fact that the output in the question shows the replication as RAID0 rather than single indicates that this was likely the issue.
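
For reference, here is what verifying that would look like after creating the filesystem the way described above (device names and output are illustrative, not taken from the original thread):

# Create the filesystem with mirrored metadata and unstriped, unmirrored data
mkfs.btrfs -d single /dev/sdb /dev/sdc /dev/sdd
mount /dev/sdb /mnt

# The Data line should now report "single"; seeing RAID0 here would mean
# the data profile was not actually set to single
btrfs fi df /mnt
# Data, single: total=..., used=...
# Metadata, RAID1: total=..., used=...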