How to convert a btrfs file system in raid1 mode to raid0?


Solution 1

btrfs balance start -dconvert=raid0 /

That's all you need to do. Btrfs will busily move the existing data around into raid0 (striped, no parity), and any further data will be written that way as well.

Metadata will still be written in duplicate. If you want to live extra dangerously, feel free to tack a -mconvert=raid0 argument onto the above command as well; then both data AND metadata on the array at / will be converted to raid0.
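For reference, the whole sequence can be sketched as a small script. The mount point and the dry-run guard below are assumptions for illustration; the real commands need root on a mounted multi-device btrfs volume.

```shell
# Hedged sketch of Solution 1. RUN=echo makes this a dry run that only
# prints the commands it would execute; set RUN= (empty) to run them.
RUN="${RUN:-echo}"
MNT="${MNT:-/}"          # assumption: the btrfs volume is mounted at /

# Convert data chunks to raid0 (striped, no redundancy):
$RUN btrfs balance start -dconvert=raid0 "$MNT"

# Extra dangerous: convert metadata to raid0 as well; one failed disk
# then takes the whole filesystem with it.
$RUN btrfs balance start -mconvert=raid0 "$MNT"

# Watch the rebalance from another terminal:
$RUN btrfs balance status "$MNT"
```

With RUN left at its echo default the script only prints what it would do, which is a cheap way to double-check the filter syntax before running it for real as root.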

Solution 2

You read the right manual: you do need to run balance after adding the drive in order to restripe your data across the new disk. Balance also happens to convert any DUP chunks to RAID1, and the default mkfs options use RAID1 or DUP for metadata depending on whether you have one disk or several. There is not currently a supported way to convert back, but there are some restriper patches floating around on the btrfs mailing list that will eventually allow this sort of thing.
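As a follow-up note: the restriper patches mentioned above did later land in mainline as the balance "convert" filters, so on newer kernels the profiles can be changed back in place. A hedged sketch, with the same illustrative dry-run guard and mount point as assumptions:

```shell
# Dry run by default (RUN=echo prints commands); set RUN= to execute.
RUN="${RUN:-echo}"
MNT="${MNT:-/}"          # assumption: the btrfs mount point

# See which profiles are currently in use:
$RUN btrfs filesystem df "$MNT"

# Convert data and metadata back to mirrored raid1 in place:
$RUN btrfs balance start -dconvert=raid1 -mconvert=raid1 "$MNT"
```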



Author: Guss

I'm a self-taught software developer, system administrator and all-round code guy. I've been doing software development, QA, developer support, system work and even some graphic design for as long as I can remember (going back 30 years), both on commercial projects and on open source and free software, and I enjoy both. Coding is fun, and that's why it is worth doing - I hope it never becomes a chore :-)

Updated on September 18, 2022

Comments

  • Guss
    Guss almost 2 years

    I've installed Ubuntu 11.10 with btrfs as the / file system (it was a bit of a mess to do; I'll explain if people are interested) so I can expand the primary file system onto the second drive in the system (*).

    After installing the system, I ran btrfs device add /dev/sdb1 / and it added the new device and expanded the file system onto it, and all was good. But according to the (wrong) manual I was reading, I also had to run btrfs filesystem balance, and this apparently converted my filesystem to "raid1" mode, so everything is stored twice - once on each drive - and I can only use 50% of my total capacity:

    $ btrfs filesystem df /
    Data, RAID0: total=78.00GB, used=41.57GB
    System, RAID1: total=8.00MB, used=16.00KB
    System: total=4.00MB, used=0.00
    Metadata, RAID1: total=3.75GB, used=355.06MB
    

    It's a nice feature, but I was kind of wanting to use "raid0" (striping). I've tried to remove the new device so I can re-add it, but when I try that I get an error, and syslog has this:

    btrfs: unable to go below two devices on raid1
    

    So my question is: how can I convert my filesystem back to raid0 so I can use the total space of both disks?

    (*) Like what can be done with LVM, but with btrfs you can host multiple "partitions" on the same "file system", and space is allocated dynamically where you need it - unlike in LVM.

    • Admin
      Admin over 12 years
      Your data is raid0; the metadata is raid1. Read the output of 'btrfs filesystem df /' and 'btrfs filesystem show' carefully.
    • Admin
      Admin over 12 years
      I see - so you're saying that the file system uses RAID 0 and has no redundancy for the data. Then why does df look like this: Filesystem Size Used Avail Use% Mounted on /dev/sda3 443G 52G 240G 18% /? As you can see, only 240G are available on a 443G volume even though only 52G are actually in use.
    • Admin
      Admin over 10 years
      When using btrfs, do not use or trust df.
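As the last comment says, plain df is misleading on btrfs. The 50% worry in the question is just the arithmetic of mirroring, which can be sketched with made-up numbers (the disk sizes below are illustrative, not taken from the output above):

```shell
# Rough capacity arithmetic for two equal disks (illustrative sizes):
DISK1_GB=220
DISK2_GB=220
RAW_GB=$((DISK1_GB + DISK2_GB))   # 440 GB of raw space
RAID0_GB=$RAW_GB                  # striping: all of it is usable
RAID1_GB=$((RAW_GB / 2))          # mirroring: every chunk written twice
echo "raid0 usable: ${RAID0_GB} GB, raid1 usable: ${RAID1_GB} GB"
```

With data on raid0 and only the (small) metadata on raid1, as in the question's df output, actual usable space sits much closer to the raid0 figure than to half.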