Can RAID 1 have more than two drives?

Solution 1

You can use as many drives as you want in RAID 1. They will all be mirrored and written to at the same time, so they are exact copies of each other. The fact that a given controller card only supports a certain number of drives says nothing about the concept itself: RAID 1 is just mirroring your disks, and you can have as many mirrors as you want.
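The N-way mirroring described above can be sketched with a toy model (not real RAID code; the "drives" are just Python lists, and all names here are made up for illustration): every write goes to all members, and a read can be served by any surviving one.

```python
# Toy model of an N-way RAID 1 mirror: every write lands on all
# members, so any single surviving member can serve reads.

class Raid1:
    def __init__(self, n_drives, n_blocks):
        # Each "drive" is just a list of blocks (None = unwritten).
        self.drives = [[None] * n_blocks for _ in range(n_drives)]

    def write(self, block_no, data):
        for drive in self.drives:          # mirrored to every member
            drive[block_no] = data

    def read(self, block_no, failed=()):
        # Any drive not in the failed set can answer the read.
        for i, drive in enumerate(self.drives):
            if i not in failed:
                return drive[block_no]
        raise IOError("all mirrors failed")

array = Raid1(n_drives=5, n_blocks=4)
array.write(0, b"hello")
# The data survives as long as at least one of the five mirrors is alive:
print(array.read(0, failed={0, 1, 2, 3}))  # b'hello'
```

The point of the sketch is the fault-tolerance property: a 5-way mirror keeps serving reads even with four members failed.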

Also, your view of RAID 5/6 is mistaken. The parity is distributed across all the drives; there is no dedicated parity drive. Compared to RAID 5, RAID 6 adds a second parity block, which is also distributed.
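The single-parity idea behind RAID 5 can be sketched with XOR: the parity block of a stripe is the XOR of its data blocks, so any one missing block (data or parity) can be rebuilt from the others. A minimal sketch; RAID 6's second parity block uses a different code (typically Reed-Solomon) and is not shown here.

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR byte-wise across equal-length blocks."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# One stripe across four data drives; the parity block would live on
# whichever drive the RAID 5 rotation assigns for this stripe.
data = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]
parity = xor_blocks(data)

# Drive 2 dies: rebuild its block from the survivors plus parity.
survivors = [data[0], data[1], data[3], parity]
rebuilt = xor_blocks(survivors)
print(rebuilt == data[2])  # True
```

This also shows why losing parity is harmless by itself: the parity block can always be recomputed from the data blocks.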

You can find more information on Wikipedia.

Solution 2

There is a lot of misunderstanding of RAID levels.

JBOD stands for Just a Bunch of Drives: multiple independent drives presented from the same box. It is a non-RAID term, and one of the most commonly confused.

Years ago, some RAID manufacturers could not implement true JBOD in their RAID engines, so they labelled SPAN (BIG) as JBOD.

RAID 1 is a mirror RAID, and it needs two HDDs that mirror each other. CLONE, by contrast, produces multiple duplicate HDDs with the same volume, for example DAT Optic's eBOX and sBOX (hardware RAID). Hardware RAID boxes generally offer RAID 0, 1, 5, CLONE, Large, and Hot spare.

As for RAID 5/6, the parity space totals the capacity of one drive for RAID 5 and two drives for RAID 6.

The most common misconception is that the parity data lives on a dedicated drive (or drives). That is incorrect: the parity space is divided equally among the RAID member HDDs.

Example: in a RAID 5 of five HDDs, each drive has 1/5 of its space allocated to parity, whereas in a RAID 6 of five HDDs, each drive has 2/5 of its space allocated to parity.
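The arithmetic above works out the same as if whole drives were reserved for parity. A quick check, assuming n equal-size drives (the function name is made up for illustration):

```python
def usable_fraction(n_drives, parity_blocks):
    """Fraction of total raw space left for data when each stripe
    carries `parity_blocks` parity blocks (1 for RAID 5, 2 for RAID 6)."""
    return (n_drives - parity_blocks) / n_drives

# Five equal drives:
print(usable_fraction(5, 1))  # RAID 5: 0.8 -> 1/5 of each drive holds parity
print(usable_fraction(5, 2))  # RAID 6: 0.6 -> 2/5 of each drive holds parity
```

So for five 1 TB drives, RAID 5 yields 4 TB usable and RAID 6 yields 3 TB, whether the parity is distributed or concentrated.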

For those who want to argue that there is a dedicated parity drive: suppose there were one, then what happens to the RAID if that dedicated parity drive fails? On this view the RAID could not be rebuilt, because the data needed to rebuild it would no longer be there.

Solution 3

I've worked with some LenovoEMC PX4-series NAS units that had 4 or 12 disks. The first 50 GB of each drive was used as a RAID 1 for the OS, and the rest of each disk was for user data.

So it has a 4- or 12-way RAID 1 for the root drive, plus a small swap file on that drive. So yes, it's totally possible and workable, and used in production by commercial solutions.

As long as at least one disk still worked, it would boot and come up on the network. If you replaced all the disks, the NAS needed to boot off a USB drive to reinstall the base OS.

Here's the 4-bay NAS rebuilding after a disk swap (note there is no sdd):

root@px4-300r-THYAQ42E9:/nfs/# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md0 : active raid1 sde1[4] sdc1[1] sda1[3] sdb1[2]
      20964480 blocks super 1.1 [4/3] [UUU_]
      [===========>.........]  recovery = 58.1% (12188416/20964480) finish=7.2min speed=21337K/sec

md1 : active raid5 sde2[4] sdc2[1] sda2[3] sdb2[2]
      5797200384 blocks super 1.1 level 5, 512k chunk, algorithm 2 [4/3] [_UUU]
Author: Mad_piggy. Updated on September 18, 2022.

Comments

  • Mad_piggy
    Mad_piggy almost 2 years

    Recently I had a discussion with a teacher of mine. He was claiming that you could set up RAID 1 with five drives and that the data would be mirrored over all of these drives.

    I told him a RAID 1 with 5 drives wouldn't work like that. It would be a RAID 1 with two drives and would use the other three drives as hot spare.

    He also said that RAID 6 is identical to RAID 5 except that you can place all the parity checks on the same drive. I thought RAID 6 was a RAID 5-like solution where two drives' worth of space were used for parity.

    Who's right, then?

  • Mad_piggy
    Mad_piggy over 11 years
    I never had a RAID card that could handle RAID 1 with more than 2 drives, so... And what is wrong with my RAID 6? I was trying to say that RAID 5 has one drive's worth of space for its parity, and RAID 6 has two. As Wikipedia says: RAID 5: block-level striping with distributed parity. RAID 6: block-level striping with double distributed parity.
  • m4573r
    m4573r over 11 years
    I'll update my answer.
  • BeowulfNode42
    BeowulfNode42 over 10 years
    I've seen an example of mdadm (Linux software RAID) using 8 drives in a RAID 1, or rather the first small partition of 8 drives as a RAID 1. This stored the system drive. The big partition on each drive was then grouped into a RAID 6 array. I've not seen a Linux distro that will boot from a software RAID 5 or 6.
  • Criggie
    Criggie over 5 years
    The /proc/mdstat output was found in an old email. The devices are long gone to the hardware afterlife, so I can't easily run an hdparm or bonnie test, sorry.
  • Makyen
    Makyen over 5 years
    Note that your last comment saying that RAID5 with a dedicated parity drive could not recover from a drive failure is incorrect. Even if RAID5 was implemented with the parity information entirely on one drive, it would still be able to recover from the failure of any one drive. If your argument was true, then that would mean that with distributed parity, 1/5th of your data would be unrecoverable when any drive failed, because you lost the parity information that was on 1/5th of that drive. That argument is just wrong.
  • David Schwartz
    David Schwartz over 5 years
    "RAID5 with a dedicated parity drive" is RAID 4. The difference between RAID 4 and RAID 5 is that RAID 4 has a dedicated parity drive while RAID 5 has parity distributed across all disks. If the dedicated parity drive fails in a RAID 4 configuration, the parity can be reconstructed from the data, just as happens for all the parity lost on a failed drive of a RAID 5 array.
  • theking2
    theking2 over 3 years
    The reason that RAID 4 is not used is exactly the dedicated parity drive. Every write to the set results in a write to the parity drive, causing extra wear on that one drive. RAID 5 distributes the writes evenly over all devices.