Is Ceph possible to handle hardware RAID arrays (LUNs) as OSD drives?
"You can" doesn't mean you should. Mapping RAID LUNs to Ceph is possible, but it injects an extra layer of abstraction and renders at least part of Ceph's functionality useless.
Similar thread on their mailing list:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2017-September/021159.html
cilap
Updated on September 18, 2022

Comments
-
cilap almost 2 years
I am pretty new to Ceph and am trying to find out whether Ceph supports hardware-level RAID HBAs.
Sadly, I could not find any information. What I did find is that it is recommended to use plain disks for OSDs. But that pushes the bandwidth requirements on the PCIe bus and the disk interfaces very high, and the CPU requirements are very high as well.
Hardware RAID controllers have already solved these requirements, and they provide high redundancy, depending on the setup, without eating my PCIe, CPU or any other resources.
So my desired setup would be to have local RAID controller(s) which handle my in-disk redundancy at the controller level (RAID 5, RAID 6, whatever RAID level I need). On top of those RAID LUNs, I would like to use Ceph to do the higher level of replication between host, chassis, rack, row, datacenter, or whatever is possible or plannable in CRUSH.
- Any experiences with that setup?
- Is it a recommended setup?
- Any in-depth documentation for this hardware RAID integration?
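For the CRUSH side of this, a replicated rule with a rack-level failure domain looks roughly like the sketch below. This is a hedged illustration, not a definitive recipe: the rule name and id are made up, `default` is the usual root bucket name, and the exact rule syntax varies slightly between Ceph releases.

```
rule replicated_rack {
    id 1
    type replicated
    step take default                    # start at the root of the CRUSH hierarchy
    step chooseleaf firstn 0 type rack   # place each replica under a different rack
    step emit
}
```

Note that whether each leaf device is a plain disk or a RAID LUN is invisible at this level: CRUSH only sees OSDs, which is part of why the RAID layer buys you little here.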
-
cilap over 6 years: Do you have facts with real CPU, memory and disk benchmarks, compared to hardware RAID benchmarks? With hardware RAID arrays I have low requirements on CPU and memory, since the hardware controller takes care of it.
-
cilap over 6 years: Could you elaborate on "render at least part of Ceph functionality useless" a bit more? I don't get the point.
-
BaronSamedi1958 over 6 years: The whole idea of Ceph (OK, one of its main ideas!) is to avoid managing "islands of storage", which is exactly what RAID LUNs are.
-
John Mahowald over 6 years: I don't. And you really would want to do your own benchmark anyway. Just note that CPUs do billions of cycles per second, and interconnects (PCIe) do billions of transfers per second. You are free to use a RAID controller; it just isn't necessary in a distributed storage node.
-
cilap over 6 years: I understand Ceph's requirements, but one major question is still unanswered. What are the requirements for a 36-drive chassis? AFAIK, from the Ceph documentation, you need 36 cores for it. Also, what configuration would you suggest for your example? What is the replication effort, and what is its benchmark?
-
cilap over 6 years: I just forgot: AFAIK your setup needs more instances, or maybe even more servers, for management.
-
wazoox over 6 years: @cilap it really depends upon the needed performance. You generally don't need 1 core per OSD; using about half of the cores is enough. Performance of erasure coding is inferior to full replication.
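The half-a-core rule of thumb in the comment above can be turned into a back-of-the-envelope core budget. The per-OSD figures below are that rule of thumb plus an assumed higher figure for erasure coding, not official Ceph sizing numbers:

```python
import math

# Rough CPU budget for an OSD node. The per-OSD core figures are
# assumptions taken from the discussion, not official Ceph guidance.
def cores_needed(num_osds: int, cores_per_osd: float = 0.5) -> int:
    """Return the number of CPU cores to budget, rounded up."""
    return math.ceil(num_osds * cores_per_osd)

# A 36-drive chassis with one OSD per plain disk, replicated pools:
print(cores_needed(36))        # 18 cores at ~0.5 core per OSD
# Erasure-coded pools cost more CPU; budgeting a full core per OSD:
print(cores_needed(36, 1.0))   # 36 cores
```

So the "36 cores for 36 drives" figure is a worst case; with replication and half a core per OSD, roughly 18 cores would do.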
-
wazoox over 6 years: I didn't mention MDS as you'll need them either way. Depending upon your cluster load, you may use the storage nodes as MDS and MON servers.