Low-end hardware RAID vs Software RAID


Solution 1

A $10-20 "hardware" RAID card is nothing more than an opaque binary driver blob running a crap software-only RAID implementation. Stay well away from it.

A $200 RAID card offers proper hardware support (i.e. a RoC running another opaque binary blob, which is better in that it does not run on the main host CPU). I suggest staying away from these cards as well because, lacking a writeback cache, they do not provide any tangible benefit over a software RAID implementation.

A $300-400 RAID card offering a power-loss-protected writeback cache is worth buying, but not for a small, Atom-based PC/NAS.

In short: I strongly suggest you use Linux software RAID. Another option to seriously consider is a mirrored ZFS setup but, with an Atom CPU and only 4 GB of RAM, do not expect high performance.

For more information, read here
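
For reference, a minimal sketch of the software-RAID route on CentOS 7 with mdadm. The device names /dev/sdb and /dev/sdc and the mount point /srv/archive are assumptions; check lsblk and substitute your own.

    # Create a two-disk RAID 1 mirror (assumed device names)
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

    # Record the array so it is assembled automatically at boot
    mdadm --detail --scan >> /etc/mdadm.conf

    # Put a filesystem on it and mount it (XFS is the CentOS 7 default)
    mkfs.xfs /dev/md0
    mkdir -p /srv/archive
    mount /dev/md0 /srv/archive

    # Check sync progress and array health
    cat /proc/mdstat
    mdadm --detail /dev/md0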

Solution 2

Go ZFS. Seriously. It's so much better than hardware RAID, and the reason is simple: it uses variable-size stripes, so its parity modes (RAID-Z1 and RAID-Z2, the RAID 5 and RAID 6 equivalents) perform at RAID 10 levels while remaining extremely cost-efficient. On top of that, you can use flash caching (ZIL/SLOG, L2ARC, etc.) running on a dedicated set of PCIe lanes.

https://storagemojo.com/2006/08/15/zfs-performance-versus-hardware-raid/

There's ZFS on Linux, ZoL.

https://zfsonlinux.org/
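
To make that concrete for a two-disk mirror, here is a minimal ZFS-on-Linux sketch. The pool name tank, the device names, and the ARC cap value are all assumptions; the ARC cap is only a suggestion for a machine with 4 GB of RAM.

    # Create a mirrored pool from two whole disks (assumed device names)
    zpool create tank mirror /dev/sdb /dev/sdc

    # A dataset for the archive, with cheap transparent compression
    zfs set compression=lz4 tank
    zfs create tank/archive

    # Optionally cap the ARC at ~1 GiB on a low-RAM box (ZoL module option,
    # applied at the next module load/reboot)
    echo "options zfs zfs_arc_max=1073741824" > /etc/modprobe.d/zfs.conf

    # Health check and periodic scrub
    zpool status tank
    zpool scrub tank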

Solution 3

Here is another argument for software on a cheap system.

Stuff breaks. You know this; that is why you are using RAID. But RAID controllers also break, as do RAM, processors, power supplies and everything else, including software. In most failures it is simple enough to replace the damaged component with an equivalent or better one: blow a 100 W power supply, grab a 150 W one and get going. The same goes for most components. With hardware RAID, however, there are three exceptions to this pattern: the RAID controller, the hard drives, and the motherboard (or other upstream component, if the controller is not an expansion card).

Let's look at the RAID card. Most RAID cards are poorly documented and mutually incompatible: you cannot replace a card from company XYZ with one from company ABC, because they store data on disk differently (assuming you can even figure out who made the card to begin with). The solution is to keep a spare RAID card, exactly identical to the production one.

Hard drives are not as bad as RAID cards, but because the RAID card has physical connectors to the drives you must use compatible drives, and significantly larger drives may cause problems. Significant care is needed when ordering replacement drives.

Motherboards are typically more difficult than drives but less so than RAID cards. In most cases just verifying that compatible slots are available is sufficient, but bootable RAIDs can be no end of headaches. The way to avoid this problem is an external enclosure, but that is not cheap.

All these problems can be solved by throwing money at them, but for a cheap system that is not desirable. Software RAID, on the other hand, is immune to most (though not quite all) of these issues because it can use any block device.

The one drawback to software RAID on a cheap system is booting. As far as I know the only bootloader that supports RAID is GRUB, and it only supports RAID 1, which means your /boot must be stored on RAID 1. That is not a problem as long as you are only using RAID 1, and only a minor problem in most other cases. However, GRUB itself (specifically the first-stage boot block) cannot be stored on the array. This can be managed by putting a spare copy of it on each of the other drives.
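
To make that workaround concrete, here is a rough sketch for a GRUB 2 system such as CentOS 7. The partition layout is an assumption: sda1/sdb1 are small partitions reserved for /boot, and the rest of each disk holds the data array.

    # /boot on a RAID 1 mirror; metadata 1.0 keeps the md superblock at the end
    # of the partition so the bootloader can read it like a plain filesystem
    mdadm --create /dev/md0 --level=1 --raid-devices=2 --metadata=1.0 /dev/sda1 /dev/sdb1
    mkfs.ext4 /dev/md0        # this becomes /boot

    # Install the GRUB boot block on every member disk, not just the first one,
    # so the machine still boots if either drive dies
    grub2-install /dev/sda
    grub2-install /dev/sdb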

Solution 4

  1. As others have said, there's no benefit to hardware RAID here, and various downsides. My main reason for preferring software RAID is that it's simpler and more portable (and thus more likely to actually allow a successful recovery from various failure scenarios).

  2. (Also as others have said) 3-disk RAID 5 is a really bad RAID scheme -- it's almost the worst of all worlds, with very little benefit. It is sort of a compromise between RAID 0 and RAID 1, slightly better than either of those, but that's about the only good thing to say about it. RAID has moved on to much better schemes, like RAID 6.

  3. My advice (hardware):

    • Get a 4-port SATA card for that PCI slot, bringing you to six total SATA ports -- one for a boot drive, and five for data drives. I see one for ~$15, advertised as hardware RAID, but you can just ignore those features and use it as plain SATA.

    • Get a small SSD for the boot drive. I know there's still the perception that "SSDs are too expensive", but it's barely true anymore, and not at all on the small end -- 120GB is way more than you'll need for this boot drive, and you can get one for ~$25.

    • An optional but really nice addition (if your PC case has three 5.25" drive bays) is a drive bay converter: it turns three 5.25" (optical) drive bays into five hot-swappable, front-loading 3.5" (HDD) bays, so you won't have to take the machine apart (or even shut it down) to swap drives. (Search for "backplane 5 in 3".)

    • Use 5x whatever-size HDDs in RAID 6 (dual redundancy, 3x drive size of usable space); a rough mdadm sketch follows this list.

  4. My advice (software): Look at OpenMediaVault for the OS / file-server software. It's an "appliance distro" perfect for exactly this kind of use -- Debian-based (actually a Linux port of the BSD-based FreeNAS) with everything pre-configured for a NAS server. It makes setting up and managing software RAID (as well as LVM, network shares, etc.) really simple.
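
If you want to see what the five-disk RAID 6 from point 3 looks like under the hood (OpenMediaVault drives mdadm for you, so this is just illustrative), here is a minimal sketch with assumed device names:

    # Five data disks in RAID 6: any two can fail, usable space = 3x one drive
    mdadm --create /dev/md0 --level=6 --raid-devices=5 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf

    # Record the layout and put a filesystem on the array
    mdadm --detail --scan >> /etc/mdadm.conf
    mkfs.ext4 /dev/md0

    # Watch the initial resync
    watch cat /proc/mdstat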


Comments

  • Igor Z.
    Igor Z. over 1 year

    I want to build a low-end 6 TB RAID 1 archive on an old PC.

    MB: Intel d2500hn 64bit
    CPU: Intel Atom D2500
    RAM: 4GB DDR3 533 MHz
    PSU: Chinese 500W
    NO GPU
    1x Ethernet 1Gbps
    2x SATA2 ports
    1x PCI port
    4x USB 2.0
    

    I want to build a RAID 1 archive on Linux (CentOS 7, I think; then I will install everything I need, probably ownCloud or something similar). I will use it in my home local network.

    Is a $10-20 PCI RAID controller better, or software RAID?

    If software RAID is better, which should I choose on CentOS? Is it better to put the system on an external USB drive and use the two disks on the SATA connectors, or should I put the system on one disk and then create the RAID?

    If I were to do a 3-disk RAID 5, should I choose a hardware RAID PCI card or simply a PCI SATA controller?

    • Chopper3
      Chopper3 over 5 years
      Please don’t do R5, it’s dangerous
    • Tommiie
      Tommiie over 5 years
      Hasn't this question been answered before? E.g. serverfault.com/questions/214/raid-software-vs-hardware
    • Broco
      Broco over 5 years
      This is a question about opinions; you will find a lot of people rooting for software and a lot rooting for hardware. In my opinion it depends. Linux software RAID is well established and has proved its worth over and over again, but it does create a very slight overhead (which is negligible, especially in RAID 1). RAID 5 should not be used if you value your data, because of UREs; see youtube.com/watch?v=A2OxG2UjiV4. Rule of thumb: if you are using RAID 1 and have the choice between cheap hardware RAID and software RAID, go for software.
    • Lenniey
      Lenniey over 5 years
      @Tom These answers are ~9 years old and the HW/SW RAID situation has changed quite a bit, I think. OP: in your case I'd mirror the disks in software RAID 1, including the CentOS installation.
    • Igor Z.
      Igor Z. over 5 years
      @Lenniey Thank you for the answer, I'm going for software RAID 1, thank you
    • Mark
      Mark over 5 years
      A $20 card isn't hardware RAID. It's "hardware-assisted RAID" at best.
    • Guntram Blohm
      Guntram Blohm over 5 years
      If your hardware RAID controller emits some magic smoke and you can't get the exact same model again, the data on your disks is most probably unrecoverable. With software RAID, you just set up the same software RAID somewhere else and plug your drives into it (see the mdadm sketch after the comments).
    • Tobia Tesan
      Tobia Tesan over 5 years
      @Mark are you sure? Because there are $60-$80 external RAID enclosures that appear for all intents and purposes as a single USB class compliant drive.
    • Mark
      Mark over 5 years
      @TobiaTesan, an external RAID controller at that price is a small ARM board running software RAID (usually Linux). I could build one using a Raspberry Pi and a 3D printer for $50 or so; someone producing them in bulk could certainly get the price even lower.
    • Tobia Tesan
      Tobia Tesan over 5 years
      @Mark sure, so no writeback cache and all - but still, it makes me suspect that you can get a (crappy) self-sufficient silicon implementation that does not rely on the host CPU for that price.
    • Tobia Tesan
      Tobia Tesan over 5 years
      Googling a bit seems to suggest that nearly all of those RAID enclosures run on this SoC. Probably good news compatibility-wise?
    • usr
      usr over 5 years
      People always claim that hardware RAID saves on CPU usage. But the CPU usage required to copy data around is almost zero. I cannot imagine CPU usage being an issue in software RAID.
    • JoL
      JoL over 5 years
      Probable duplicate of serverfault.com/questions/685289/…, which has a very good answer in my opinion.
    • Strepsils
      Strepsils over 5 years
      In CentOS, the best alternative to hardware RAID (and actually better) in your particular case is ZFS. LVM or mdadm is better for virtualization, while ZFS is the best "software RAID" for archival data and file storage.
  • Igor Z.
    Igor Z. over 5 years
    Thanks, I will use mdadm. Do you advise putting the system on an external USB drive and using the two disks purely for storage, or should I install the system on one disk and then create the RAID by adding the disks? Thanks
  • Josh
    Josh over 5 years
    I would normally agree wholeheartedly here but he only has 4GiB of RAM so ZFS may not perform optimally...
  • shodanshok
    shodanshok over 5 years
    @IgorZ. It's not clear to me how you want to connect your drives. From your post, it seems you only have 2 SATA ports, so I would install the OS on a USB HDD or flash drive (if going the USB flash route, be sure to buy a pendrive with decent 4K random write performance).
  • shodanshok
    shodanshok over 5 years
    +1. Anyway, RAID-Z is known for low IOPS compared to mirroring+striping: basically, each top-level vdev has the IOPS performance of a single disk. Have a look here
  • nstenz
    nstenz over 5 years
    The way I set up my bootable RAID 1 was to create a /boot partition on each and a data partition on each for / (instead of dedicating the entire disk to the array). As long as you create a separate boot partition on each drive and run grub-install to each drive, they should all be bootable, and md should be able to mount the degraded array. I imagine it would work with flavors other than RAID 1 as well.
  • ilkkachu
    ilkkachu over 5 years
    RoC? SoC would be a system-on-a-chip, i.e. "a small computer", but what's an RoC?
  • BaronSamedi1958
    BaronSamedi1958 over 5 years
    POC. Proof of Concept?
  • Mark
    Mark over 5 years
    Last time I looked, ZFS required 1 GB of RAM per TB of RAID, so the OP doesn't have enough RAM. Has that changed?
  • shodanshok
    shodanshok over 5 years
    RoC means RAID-on-Chip. Basically, it's a marketing term for an embedded system running a RAID-oriented OS with hardware offload for parity calculation.
  • hildred
    hildred over 5 years
    @nstenz, you described my setup almost exactly. The data partition got raid6 and lvm and boot got raid 1.
  • Strepsils
    Strepsils over 5 years
    Agreed. ZFS is the best choice for archival data. The performance always depends on the stripe size and the size of the blocks written to disk, so it is complicated to calculate but quite easy to optimize :) Moreover, ZFS was not designed for virtualization or intensive I/O workloads.
  • BaronSamedi1958
    BaronSamedi1958 over 5 years
    @Mark: That's for deduplicated capacity.
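
As a footnote to Guntram Blohm's point about portability: moving a Linux software RAID to another machine is usually just a matter of connecting the disks and letting mdadm reassemble the array from the metadata stored on them. A minimal sketch (the config path is the CentOS default and is an assumption here):

    # Scan all attached block devices for md superblocks and assemble any arrays found
    mdadm --assemble --scan

    # Inspect what was assembled and record it for automatic assembly on the next boot
    cat /proc/mdstat
    mdadm --detail --scan >> /etc/mdadm.conf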