Can Ceph handle hardware RAID arrays (LUNs) as OSD drives?

Just because you can doesn't mean you should. Mapping RAID LUNs to Ceph is possible, but you inject one extra layer of abstraction and render at least part of Ceph's functionality useless.

Similar thread on the Ceph mailing list:

http://lists.ceph.com/pipermail/ceph-users-ceph.com/2017-September/021159.html
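
For completeness, here is a rough sketch of what "possible" looks like in practice. From Ceph's point of view a RAID LUN is just another block device, so an OSD can be created on it the same way as on a plain disk; the device paths below are placeholders, not a recommendation:

    # OSD on a plain disk (the recommended layout: one OSD per physical drive)
    ceph-volume lvm create --data /dev/sdb

    # OSD on a hardware RAID LUN (works, but Ceph now sees only one opaque device
    # and can no longer monitor or manage the individual drives behind it)
    ceph-volume lvm create --data /dev/sdc   # /dev/sdc = the controller's virtual drive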

Comments

  • cilap almost 2 years

    I am pretty new to Ceph and am trying to find out whether Ceph supports hardware-level RAID HBAs.

    Sadly I could not find any information. What I found is that it is recommended to use plain disks for OSDs. But that pushes the requirements on PCIe and the disk interfaces to high bandwidths, and the CPU requirements are very high.

    Hardware RAID controllers have already solved these requirements, and they provide high redundancy, depending on the setup, without eating my PCIe, CPU or any other resources.

    So my desired setup would be to have local RAID controller(s) that handle my in-disk redundancy at controller level (RAID 5, RAID 6 or whatever RAID level I need). On top of those RAID LUNs I would like to use Ceph to do the higher level of replication between host, chassis, rack, row, datacenter or whatever is possible or plannable in CRUSH (a rough sketch of what that could look like follows after these comments).

    1. Any experience with that setup?
    2. Is it a recommended setup?
    3. Any in-depth documentation for this hardware RAID integration?
  • cilap over 6 years
    Do you have facts with real CPU, memory and disk benchmarks compared to hardware RAID benchmarks? With hardware RAID arrays I have low requirements on CPU and memory, since the hardware controller is taking care of it.
  • cilap over 6 years
    Could you elaborate on "render at least part of Ceph functionality useless" a bit more? I don't get the point.
  • BaronSamedi1958 over 6 years
    The whole idea of Ceph... OK, one of its main ideas! ... is to avoid managing "islands of storage", which is exactly what RAID LUNs are.
  • John Mahowald over 6 years
    I don't. And you really would want to do your own benchmark anyway. Just note that CPUs do billions of cycles per second, and interconnects (PCIe) do billions of transfers per second. You're free to use a RAID controller, it just doesn't seem necessary in a distributed storage node.
  • cilap over 6 years
    I am getting the requirements of Ceph, but one major question is still not answered: what are the requirements for the 36-drive chassis? AFAIK you need 36 cores for it according to the Ceph description. Also, what config would you suggest for your example? What are the replication efforts, and what is the benchmark for it?
  • cilap over 6 years
    One thing I just forgot: AFAIK your setup needs more instances, or maybe even more servers, for the management.
  • wazoox over 6 years
    @cilap It depends on the performance you need, really. You generally don't need 1 core per OSD; using about half the cores is enough (rough numbers sketched after these comments). Performance of erasure coding is inferior to full replication.
  • wazoox over 6 years
    I didn't mention MDS as you'll need them either way. Depending upon your cluster load, you may use the storage nodes as MDS and MON servers.
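
To make the CRUSH side of the question more concrete, here is a minimal sketch of the "controller handles in-box redundancy, Ceph handles replication across failure domains" idea, assuming a rack failure domain; the bucket names, pool name and PG counts are placeholders, not recommendations:

    # describe the physical topology in the CRUSH map
    ceph osd crush add-bucket rack1 rack
    ceph osd crush move rack1 root=default
    ceph osd crush move node1 rack=rack1

    # replication rule that places each copy in a different rack
    ceph osd crush rule create-replicated replicated_rack default rack

    # pool with 3 copies using that rule
    ceph osd pool create mypool 128 128 replicated replicated_rack
    ceph osd pool set mypool size 3

    # verify the resulting layout
    ceph osd tree

CRUSH does not care whether the devices underneath are plain disks or RAID LUNs; it only distributes copies across whatever failure domain the rule names (host, chassis, rack, row, datacenter and so on).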
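And a back-of-the-envelope version of the core-count discussion above, purely illustrative and assuming roughly half a core per HDD-backed OSD (actual needs depend on drive type, replication vs. erasure coding and workload):

    36-drive chassis, one OSD per drive:
      36 OSDs x ~0.5 core    = ~18 cores for the OSD daemons
      + a few cores headroom   for MON/MGR (and MDS if CephFS is used)
      => well under 36 cores; "1 core per OSD" is a rule of thumb, not a hard requirement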