iSCSI Distributed RAID?


Solution 1

This has many drawbacks and not a single advantage I can see, so I don't understand why you would want to do this.

  • Any outage of the iSCSI connection will likely require a full RAID rebuild. The RAID subsystem cannot tell that a disk reappearing after an outage is the same, more or less unaffected disk; and even if it could, it keeps no log of the write operations made since the failure that it could use to bring the drive back up to date. (Linux's mdadm addresses exactly this with a write-intent bitmap; see the sketch after this list.)

  • The network connection will be a serious bottleneck, especially during a rebuild. You will have a small number of network connections (likely just one, at 1 Gbit/s), compared with multiple SATA/SAS links at up to 6 Gbit/s each, attached via the PCIe bus.

  • This whole setup is really delicate and easy to bring to a complete halt.
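As a point of contrast for the first bullet, here is a minimal sketch of how Linux's mdadm handles a briefly absent member with a write-intent bitmap, catching it up with a partial resync instead of a full rebuild. The device names are placeholders, and Python is used only to wrap the commands:

```python
# Sketch: a write-intent bitmap on a Linux mdadm array, so a member that
# vanishes briefly is resynced incrementally rather than rebuilt in full.
# /dev/md0 and /dev/sdb1 are illustrative placeholders.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Add an internal write-intent bitmap to an existing array; from now on
# mdadm records which chunks were written while a member was missing.
run(["mdadm", "--grow", "--bitmap=internal", "/dev/md0"])

# Re-adding the same disk later triggers only a partial resync of the
# chunks dirtied during the outage, not a rebuild from scratch.
run(["mdadm", "/dev/md0", "--re-add", "/dev/sdb1"])
```

The answer above suggests Windows software RAID keeps no such log, which is what makes the outage scenario so punishing there.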

Solution 2

You cannot do what you intend to do with Microsoft's built-in software RAID. The reason is that in Windows the storage stack comes up before the network stack, so the iSCSI disks are not yet present when the array assembles, and your RAID will always start in a broken state and then go through long and painful resyncs, rebalances and rechecks.

You may, however, use third-party software to aggregate iSCSI volumes into a sort of central storage. That storage can itself be iSCSI, and you can connect to it in a loopback. Companies like FalconStor, DataCore and StarWind also do a business of building a unified storage pool from many, many separate SAN and NAS boxes. DataCore and FalconStor are expensive as F, and StarWind may do what you want even with the free version. Linux with LIO (or FreeBSD with its native CTL target) is another alternative if you care. Good luck!
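For the LIO route, here is a minimal sketch of publishing one local disk from a CentOS server as an iSCSI target, assuming the targetcli tool is installed and run as root. The IQNs and the /dev/sdb device path are illustrative placeholders, and Python is used only to wrap the shell commands:

```python
# Sketch: export a local block device from a CentOS server as an iSCSI LUN
# using LIO's targetcli. Run as root; IQNs and /dev/sdb are placeholders.
import subprocess

def targetcli(path, *args):
    subprocess.run(["targetcli", path, *args], check=True)

TARGET_IQN = "iqn.2003-01.org.linux-iscsi.centos1:disk1"  # this server
INITIATOR_IQN = "iqn.1991-05.com.microsoft:winserver"     # the Windows box

# Back the LUN with the raw disk.
targetcli("/backstores/block", "create", "name=disk1", "dev=/dev/sdb")
# Create the target and expose the backstore as a LUN.
targetcli("/iscsi", "create", TARGET_IQN)
targetcli(f"/iscsi/{TARGET_IQN}/tpg1/luns", "create", "/backstores/block/disk1")
# Allow only the Windows initiator to log in.
targetcli(f"/iscsi/{TARGET_IQN}/tpg1/acls", "create", INITIATOR_IQN)
# Persist the configuration across reboots.
subprocess.run(["targetcli", "saveconfig"], check=True)
```

The Windows side would then connect to each such target with the Microsoft iSCSI Initiator before any RAID layer is built on top, which is exactly where the boot-ordering problem described above bites.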

Solution 3

Just for your convenience, here is a performance analysis report on a distributed iSCSI RAID architecture.

http://www.ele.uri.edu/tcca/camera_ready/Ben_iRAID-SNAPI-cr.pdf

Solution 4

I wouldn't do it this way. There are plenty of off-the-shelf NAS devices with multi-disk arrays, for both home and business use, that have iSCSI built in. These devices are designed with this purpose in mind and will outperform the software configuration you are considering, at a comparable or lower cost.

Author: NickC

SysAdmin, Hardware Supply & Software Development - London/Essex, England.

Updated on September 18, 2022

Comments

  • NickC almost 2 years

    Thinking about setting up a distributed RAID array over iSCSI. Has anyone else tried this, and if so, what was your experience?

    To be more specific, I am thinking of a couple of CentOS servers, each with say four drives, all published as iSCSI targets. Then one Windows Server would access all of these via iSCSI and join those targets together into a software RAID array.

    One of my concerns is the rebuild time if one of those servers is offline for a short while. Would it then be necessary to rebuild the entire array from scratch, or is software RAID clever enough to rewrite only the sectors that have changed? My worry is that a small network glitch could otherwise cause a long rebuild process (see the rough time estimate at the end of the comments below).

    Thanks, Nick

  • NickC over 11 years
    The whole idea is to create a robust distributed RAID environment using iSCSI targets so that there is no single point of failure. By the sound of the comments here, iSCSI might not be the best way to do this. To me, the advantage of iSCSI is that it lets me use Linux servers to store NTFS-based data for our Windows users.
  • FooBee over 11 years
    This can be done via iSCSI, but doing it reliably will involve a lot more than your somewhat simplistic approach. If it needs good performance, it will be expensive as well (think multiple 10 Gbit/s Ethernet connections/switches, or even InfiniBand to reduce latency). Other approaches might be more appropriate and cheaper for reaching that goal.
  • JamesRyan over 8 years
    No, that is not valid. What you will have just done is build a really fragile system. The A in HA stands for Availability: something that fails often is not highly available, even if it fails safely.
  • a.atlam about 7 years
    For anyone reading: this is theoretically valid in a lab setup (though no one experiments with 72 TB). When experience comes into it, I will safely assume that storage in the 100 TB range is not a home setup, nor exactly entry-level enterprise storage, and whoever funds it must have something important to store. In that case, all technical opinions aside, GET THE RIGHT TOOL. Whether vendor or open source, just get something designed and sized correctly. Once you cross the 100 TB range, you might also want to drop iSCSI and go with FC, as the cost/GB will start to become relatively close.
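To put the rebuild-time concern from the question into numbers, here is a back-of-the-envelope sketch. The 4 TB member size and the ~110 MB/s of usable throughput on a single 1 Gbit/s link are illustrative assumptions, not measurements:

```python
# Back-of-the-envelope: time to rebuild one member disk over a single
# 1 Gbit/s link. The throughput and disk size are assumed, not measured.
GBE_THROUGHPUT_MB_S = 110          # ~1 Gbit/s minus protocol overhead
member_size_tb = 4                 # hypothetical member disk size

member_size_mb = member_size_tb * 1024 * 1024
hours = member_size_mb / GBE_THROUGHPUT_MB_S / 3600
print(f"Full rebuild of a {member_size_tb} TB member: ~{hours:.1f} hours")
# ~10.6 hours for 4 TB, during which the array runs degraded and the same
# link still has to carry normal I/O.
```

This is the best case with the link fully dedicated to the rebuild; in practice the window is longer, which is why the answers above lean so hard on avoiding full rebuilds in the first place.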