How long does it take to rebuild a drive in a RAID 6?


Solution 1

Rebuild times depend heavily on the hardware specifics (RAID level, interface, drive size, rotational speed, driver and firmware quality) as well as on system load and disk utilization. This makes it very difficult to give a useful estimate.

That said, with a RAID 6 array the performance hit while rebuilding a single drive should be minimal. (The impact would be greater when rebuilding a RAID 1 or RAID 5 array.)

I know that this doesn't really answer your question, but it's the best I have.
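For a very rough lower bound, though, you can treat the rebuild as one streaming pass over the failed drive's capacity at whatever sustained rate the controller manages. The rates below are assumed round numbers, not MD3000 measurements; this is only a minimal sketch in Python:

    # Back-of-the-envelope rebuild-time estimate; the rates are assumptions,
    # not measurements from any particular controller.

    def rebuild_hours(capacity_tb: float, rate_mb_s: float) -> float:
        """Hours to rebuild one drive at a sustained per-drive rebuild rate."""
        capacity_mb = capacity_tb * 1_000_000     # decimal units, as drives are sold
        return capacity_mb / rate_mb_s / 3600

    for rate in (30, 60, 100):                    # MB/s, assumed sustained rates
        print(f"2 TB drive at {rate:>3} MB/s: ~{rebuild_hours(2, rate):.0f} h")
    # 2 TB drive at  30 MB/s: ~19 h
    # 2 TB drive at  60 MB/s: ~9 h
    # 2 TB drive at 100 MB/s: ~6 h

In practice, load on the array and the controller's rebuild-priority setting can push the real figure well above that lower bound.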

Solution 2

Rebuild time will depend on the load on the box, the amount of resources given to the rebuild process, and a few other tunables. Instead of RAID 6, you might want to consider RAID 5 plus a hot spare: the rebuild will take less time (not by a great degree, but still), but you'll be limited to losing only one drive at a time. For fast rebuilds you are much better off with RAID 10.
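As a rough illustration of why (a simplified model with assumed 2 TB members, not vendor figures): a RAID 5 or RAID 6 rebuild has to read every surviving member to reconstruct the lost drive, while a RAID 10 rebuild only copies the failed drive's mirror partner. Note too that RAID 5 plus a hot spare has no redundancy left while the rebuild runs, whereas RAID 6 still tolerates one more failure.

    # Rough data read from surviving members to rebuild one failed drive
    # (simplified model; real controllers and layouts vary).

    drives, size_tb = 7, 2    # the 7-disk, 2 TB array from the question

    rebuild_reads_tb = {
        "RAID 5 / RAID 6": (drives - 1) * size_tb,  # parity rebuild reads every surviving member
        "RAID 10":         1 * size_tb,             # mirror rebuild copies only the partner drive
    }

    for layout, tb in rebuild_reads_tb.items():
        print(f"{layout}: ~{tb} TB read during a single-drive rebuild")
    # RAID 5 / RAID 6: ~12 TB read during a single-drive rebuild
    # RAID 10: ~2 TB read during a single-drive rebuild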

Solution 3

As an example, I have a NAS running:

G1830
8 GB RAM
Areca 1220, with the card set to 80% background task priority for the expansion
8 x 2 TB Samsung 5900 RPM disks

I just expanded from 7 to 8 drives, which took 21 hours. I imagine a rebuild of a 7-disk array would take about as long with similar specs.
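Working backwards from those numbers gives a feel for the order of magnitude (a simplification: it treats the expansion as streaming roughly one full 2 TB member in 21 hours):

    # Implied effective per-disk rate from the 21-hour expansion above.
    capacity_mb = 2 * 1_000_000          # 2 TB, decimal units
    hours = 21
    print(f"~{capacity_mb / (hours * 3600):.0f} MB/s effective")   # ~26 MB/s

Whether that carries over to a rebuild on other hardware is anyone's guess, but it suggests tens of hours rather than one.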

It really comes down to the speed of the CPU on the RAID card and the individual disk throughput. If you run weekly volume checks, long rebuilds are less of a worry, because you can be almost certain that none of the disks has latent errors.


Comments

  • Admin
    Admin almost 2 years

    I'm building a 7-disk RAID 6 array on a Dell MD3000 DAS box. My top priority is storage space, so I'd like to use 2 TB drives, but I'm worried about how long it will take to rebuild a failed disk.

    Is there a formula for estimating how long a drive rebuild will take when the array is offline? When it's online?

  • Posipiet
    Posipiet almost 14 years
    RAID 5 with a hot spare and automatic rebuild on an array of this size is a recipe for disaster. Once the first disk fails, the others are probably not tip-top anymore either, and the chances of another disk failing during rebuild are noticeable. The times of automatic rebuild are over. You will want to check your backup before the rebuild. To be safe with that, you need RAID 6 or RAID 10.
  • Posipiet
    Posipiet almost 14 years
    What is the connection between the file system and RAID rebuild time when a hardware RAID is used?
  • David Corsalini
    David Corsalini almost 14 years
    I'm just providing the possibilities here. If you read my message through, I do mention the danger of RAID 5, and the option of using RAID 10 for faster rebuilds without redundancy loss.
  • TomTom
    TomTom almost 14 years
    There is none. Clueless poster.
  • Admin
    Admin almost 14 years
    Thanks, gWaldo. My concern isn't the performance hit; it's the vulnerability to a second disk failure while the first is rebuilding. If the rebuild takes an hour, I feel OK, but if it takes 50 hours... (a rough sketch of that risk follows these comments).
  • Gilles 'SO- stop being evil'
    Gilles 'SO- stop being evil' almost 14 years
    @TomTom and @Posipiet: ZFS incorporates RAID features, so it could actually be relevant in this context.
  • TomTom
    TomTom almost 14 years
    No, ZFS may have RAID features, but the question is about a RAID rebuild at the disk level.
  • gWaldo
    gWaldo almost 14 years
    RAID 6 can take a 2-drive hit and still be functional; RAID 5 can only take 1 drive down. You would need to lose two more disks!
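Following up on the rebuild-window concern in the comments above, here is a crude way to put numbers on it. The 3% annualized failure rate is purely an assumed figure, and the independence assumption is optimistic: as Posipiet notes, drives in an aged array tend to fail together, and this also ignores unrecoverable read errors hit during the rebuild.

    # Crude estimate of at least one more member failing during the rebuild window,
    # assuming independent failures at a fixed annualized failure rate (AFR).

    def p_second_failure(surviving_drives: int, rebuild_hours: float, afr: float = 0.03) -> float:
        p_one = 1 - (1 - afr) ** (rebuild_hours / (365 * 24))   # one drive failing in the window
        return 1 - (1 - p_one) ** surviving_drives              # at least one survivor failing

    for hours in (1, 21, 50):
        print(f"{hours:>2} h rebuild, 6 surviving drives: {p_second_failure(6, hours):.3%}")
    # roughly 0.002%, 0.044% and 0.104% respectively

Even with these generous assumptions, a 50-hour window carries about fifty times the risk of a 1-hour one, which is why the second parity drive of RAID 6 matters for large, slow disks.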