InfiniBand Storage
Solution 1
Although it is possible to run iSCSI over InfiniBand via IPoIB, the iSER and SRP protocols yield significantly better performance on an InfiniBand network. An iSER target implementation for Linux is available via the tgt project, and an SRP target implementation for Linux is available via the SCST project. Regarding Windows support: at the time of writing there is no iSER initiator driver for Windows, but an SRP initiator driver for Windows is available in the winOFED software package (see also the openfabrics.org website).
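As a rough illustration of the tgt route, an iSER target can be exported with tgtadm along the following lines. This is only a sketch: the IQN, backing device, and open ACL are placeholders, not a recommended production setup.

```shell
# With tgtd running, create an iSER-capable target
# (the IQN and backing store below are made up -- adjust for your setup)
tgtadm --lld iser --mode target --op new --tid 1 \
       --targetname iqn.2012-01.org.example:storage.lun1

# Attach a backing block device as LUN 1
tgtadm --lld iser --mode logicalunit --op new --tid 1 --lun 1 \
       --backing-store /dev/sdb

# Allow any initiator to connect (tighten this in production)
tgtadm --lld iser --mode target --op bind --tid 1 --initiator-address ALL
```

On the initiator side, iSER targets are reached through the normal open-iscsi tools once the InfiniBand stack is loaded.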
Solution 2
So... the thing that most people don't really think about is how Ethernet and InfiniBand actually deliver packets. Ethernet is easy, and it's everywhere, but packet delivery is neither automatic nor guaranteed. Granted, modern switching is excellent, and packet loss is no longer the problem it once was. However, if you really push an Ethernet network, you will start to see packets bouncing around between switches, as if they don't know where to go. The packets eventually arrive, but the latency from all that detouring has already been paid, and there is no way to force packets down the path they are supposed to take.
InfiniBand uses guaranteed delivery: packets and packet delivery are actively managed. What you will see is that IB peaks in performance and then occasionally drops like a square wave. The drop is over in milliseconds, and then performance peaks again.
Ethernet peaks out as well, but struggles when utilization is high. Instead of a square wave, it drops off and then takes a while to step back up to peak performance: the graph looks like a staircase on the left side and a straight drop on the right.
That's a problem in large data centers where engineers choose Ethernet over IB because it's easy. Then the database admins and storage engineers blame each other for performance problems, and when they turn to the network team for answers, the problem gets skirted, because most tools show that "average" network utilization is nowhere near peak. You have to be watching the packets themselves to see this behavior.
Oh! There is one other reason to pick IB over Ethernet: bandwidth. Each FDR InfiniBand port runs at 56 Gb/s, so you would have to bond six 10GbE ports to match a single IB port. That means a lot less cabling.
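As a quick back-of-the-envelope check of that port count (pure arithmetic, using the figures above):

```shell
# Figures from the paragraph above: FDR IB = 56 Gb/s, 10GbE = 10 Gb/s
ib_gbps=56
eth_gbps=10

# Round up: bonded 10GbE links needed to match one FDR port
ports=$(( (ib_gbps + eth_gbps - 1) / eth_gbps ))
echo "$ports"   # 6
```

So it really is a 6:1 ratio in cables and switch ports, before you even count the bonding configuration.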
By the way... when you're building financial, data-warehouse, bioinformatics, or other large data systems, you need IOPS, bandwidth, low latency, memory, and CPU, all together. Take any one of them away and your performance will suffer. I've been able to push as much as 7 GB/s from Oracle to all-flash storage. My fastest full table scan was 6 billion rows in 13 seconds.
Transactional systems can scale back on total bandwidth, but they still need all of the other components mentioned in the previous paragraph. Ideally, you would use 10GbE for public networks and IB for storage and interconnects.
Just my thoughts... John
Solution 3
I've just had to deal with an IB SAN using Mellanox HCAs; it works out of the box on RHEL.
Solution 4
Do you need IB's latency benefits, or are you just looking for some form of combined networking and storage? If the former, then you have no choice: IB is great, but it can be hard to manage. FC works great and is nice and fast, but feels a bit 'old hat' sometimes, and iSCSI can be a great solution if you consider all the implications. If I were you, I'd go for FC storage over FCoE via Cisco Nexus LAN switches and converged network adapters.
Solution 5
What about 10Gb Ethernet? The more exotic the interface, the harder a time you're going to have finding drivers and chasing away bugs, and the more expensive everything is going to be.
Okay -- here is a cheap rundown, given that everything is within CX4 cable distance (15 meters):
(I'm using US dollars and list prices found on web pages, and assuming the vendor prices are in USD as well.)
- Switch: $5222
- 10GbE card with CX4 interface: $495 x 12
- CX4 cables: $82 x 6 + $165 x 6
- Grand total: about $12,700
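Sanity-checking the list above, the items actually sum to $12,644, so the quoted grand total is rounded up slightly:

```shell
# Sum the list prices quoted above (USD)
switch=5222
cards=$(( 495 * 12 ))            # twelve 10GbE CX4 cards
cables=$(( 82 * 6 + 165 * 6 ))   # six short + six long CX4 cables

total=$(( switch + cards + cables ))
echo "$total"   # 12644
```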
Is infiniband that much cheaper?
(Please note: I've never actually used any of this gear; I'm only going by whatever pops up on Google after 30 seconds of searching. I'm certainly not endorsing it or claiming it will do anything good or bad.)
Javier
Love to learn, unfortunately that leaves very little time to use what's learned :-)
Updated on September 18, 2022
Comments
-
Javier, almost 2 years ago
I'm contemplating the next restructuring of my medium-size storage. It's currently about 30TB, shared via AoE. My main options are:
- Keep as is. It can still grow for a while.
- Go iSCSI. Currently it's a little slower, but there are more options.
- Fibre Channel.
- InfiniBand.
Personally, I like the price/performance of InfiniBand host adapters, and most of the offerings at Supermicro (my preferred hardware brand) have IB as an option.
Linux has had IPoIB drivers for a while, but I don't know whether there's a well-known way to use them for storage. Most comments about iSCSI over IB talk about iSER, and how it's not supported by some iSCSI stacks.
So, does anybody have some pointers about how to use IB for shared storage for Linux servers? Is there any initiator/target project out there? Can I simply use iSCSI over IPoIB?
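(For reference, iSCSI over IPoIB is just ordinary iSCSI once the IPoIB interface has an address. A minimal sketch with open-iscsi, where the addresses and IQN are made up:)

```shell
# Give the IPoIB interface (ib0) an address and bring it up
ip addr add 192.168.100.10/24 dev ib0
ip link set ib0 up

# Discover and log in to an iSCSI target over that network
iscsiadm -m discovery -t sendtargets -p 192.168.100.1
iscsiadm -m node -T iqn.2012-01.org.example:storage.lun1 \
         -p 192.168.100.1 --login
```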
-
chris, almost 15 years ago: Are all the hosts physically close to each other?
-
Javier, almost 15 years ago: What did you use for the storage target?
-
Chopper3, almost 15 years ago: If you just want the bandwidth then IB is great, but most IB storage is actually just FC storage dressed up as IB, and iSCSI is slow for most things without 10Gb and some form of QoS.
-
David Corsalini, almost 15 years ago: Can't remember - too many systems come my way :)
-
Javier, almost 15 years ago: And what about software storage targets? IET works great for iSCSI, but I don't know if IB would need anything else, or if it's simply a matter of getting IPoIB up and running iSCSI over that.
-
Travis, over 13 years ago: Also, the qpid messaging server sees ping times of 20 µs. Yes, that is 20 microseconds. Unreal.
-
Javier, over 13 years ago: So it's NFS over IPoIB. Glad to hear that works so well, but I'm not interested in NFS, as I do like sharing block devices for this. What about iSCSI on IPoIB?
-
Javier, over 12 years ago: Thanks for the pointer. Do you (or anybody here) have first-hand experience with those packages? What are the pros and cons of SRP relative to iSER (besides Windows compatibility, which is a total non-issue for me)?
-
user251384, over 12 years ago: An advantage of iSER is that it is possible to define multiple targets on one iSER server, and an iSER initiator can choose which targets to log in to. SRP, on the other hand, is a host-to-host protocol: all LUNs defined on the target become available to every initiator, unless LUN masking has been configured on the target. Another advantage of iSER is that password-based authentication can be configured. A big advantage of SRP is significantly lower latency, because the SRP target implementation runs in the kernel while the iSER implementation runs in user space.
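(For context, an SCST SRP target is declared in /etc/scst.conf. A minimal sketch, assuming SCST 2.x config syntax; the device name, file path, and target name below are placeholders:)

```shell
# /etc/scst.conf -- minimal SRP target sketch (names are placeholders)
HANDLER vdisk_fileio {
    DEVICE disk01 {
        filename /srv/scst/disk01.img
    }
}

TARGET_DRIVER ib_srpt {
    TARGET ib_srpt_target_0 {
        LUN 0 disk01
        enabled 1
    }
}
```

This is where the LUN masking mentioned above would be configured, by restricting LUNs to specific initiator groups on the target.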
-
Lucas Kauffman, about 12 years ago: Whitespace... please.
-
Chopper3, over 9 years ago: I also love IB, but do bear in mind that FCoE over DCB/CEE at 40 Gbps exists and is surprisingly close to IB in many ways, plus it doesn't scare most IT people :)