HP D2700 enclosure and SSDs. Will any SSD work?

Solution 1

Well, I use a D2700 for ZFS storage and put in a bit of work to get the LED and sesctl features working on it. I also have SAS MPxIO multipathing running well.

I've done quite a bit of SSD testing on ZFS and with this enclosure.

Here's the lowdown.

  • The D2700 is a perfectly-fine JBOD for ZFS.
  • You will want to have an HP Smart Array controller handy to update the enclosure firmware to the latest revision.
  • LSI controllers are recommended here. I use a pair of LSI 9205-8e for this.
  • I have a pile of HP drive caddies and have tested Intel, OCZ, OWC (sandforce), HP (Sandisk/Pliant), Pliant, STEC and Seagate SAS and SATA SSDs for ZFS use.
  • I would reserve the D2700 for dual-ported 6G disks, assuming you will use multipathing (see the MPxIO sketch after this list). If not, you're possibly taking a bandwidth hit due to the oversubscription of the SAS link to the host.
  • I tend to leave the SSDs meant for ZIL and L2ARC inside the storage head. Coupled with an LSI 9211-8i, it seems safer.
  • The Intel and Sandforce-based SATA SSDs were fine in the chassis. No temperature probe issues or anything.
  • The HP SAS SSDs (Sandisk/Pliant) require a deep queue that ZFS really can't take advantage of. They are not good pool or cache disks.
  • STEC is great with LSI controllers and ZFS... except for price... They are also incompatible with Smart Array P410 controllers. Weird. I have an open ticket with STEC for that.
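
For reference, enabling MPxIO on Solaris 11 with LSI HBAs on the mpt_sas driver looks roughly like this. This is a minimal sketch; the disk WWN in the last command is a placeholder, so substitute one from your own system.

    # Enable MPxIO for the mpt_sas driver; stmsboot will ask for a reboot.
    stmsboot -D mpt_sas -e

    # After rebooting, each dual-ported SAS disk should show up as a single
    # logical unit with two operational paths.
    mpathadm list lu
    mpathadm show lu /dev/rdsk/c0t5000C5001234ABCDd0s2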

Which controllers are you using? I probably have detailed data for the combination you have.

Solution 2

Any drive should "work" but you will need to carefully weigh the pros and cons of using unsupported components in a production system. Companies like Dell and HP can get away with demanding 300-400% profit margins on server drives because they have you over a barrel if you need warranty/contract support and they find unsupported hardware in your array. Are you prepared to be the final point of escalation when something goes wrong?

If you are already using ZFS, take a long look at the possibility of deploying SSDs as L2ARC and ZIL instead of as a separate zpool. Properly configured, this type of caching can deliver SSD-like performance on a spindle-based array, at a fraction of the cost of exclusively solid state storage.

Built on an array of 2TB 7200rpm SAS drives, with even the old Intel X25-E drives for ZIL and X25-M drives for L2ARC, a properly configured ZFS SAN will run circles around name-brand proprietary SAN appliances.

Be sure that your ZIL device is SLC flash. It doesn't have to be big; a 20GB SLC drive like the Intel 313 series, which happens to be designed for use as cache, would work great. L2ARC can be MLC.
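
As a rough sketch of that layout (the pool name "tank" and the device names are placeholders; check yours with zpool status and format):

    # Small SLC SSD as a dedicated log (ZIL) device; low write latency
    # matters far more than capacity here.
    zpool add tank log c5t2d0

    # MLC SSD as a cache (L2ARC) device; losing a cache device never
    # endangers pool data, so cheaper flash is tolerable in this role.
    zpool add tank cache c5t3d0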

Any time you use MLC flash in an enterprise application, consider selecting a drive that will allow you to track wear percentage via SMART, such as the Intel 320 series. Note that these drives also have a 5-year warranty if you buy the retail box version, so think twice about buying the OEM version just to save five bucks. The warranty is void if you exceed the design write endurance, which is part of why we normally use these for L2ARC but not ZIL.
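
Assuming smartmontools is installed, checking wear looks something like this (the device path is a placeholder, and SATA disks behind a SAS HBA may need -d sat):

    # Intel SSDs report wear as SMART attribute 233 (Media_Wearout_Indicator),
    # a normalized value that counts down from 100 as the flash wears out.
    smartctl -A /dev/rdsk/c5t3d0 | grep -i wearout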

Solution 3

If it's not on the list of supported drives (configuration information, step 4), don't install it. It may or may not work, but it would be a fairly expensive experiment if it failed in a way that broke something.

They have five SSDs listed for this box: two SLC and three MLC. SLC drives last longer but tend to be more expensive.

Solution 4

First, the enclosure firmware may (and surely will) notice non-HP-branded disks, but in practice that won't affect you much. I doubt the HP hardware will reject your drives (I've never seen that happen on HP gear), so I'd give it a try.

But when it comes to updates (mainly new enclosure firmware), HP will fix issues with its own branded hardware, not with any no-name drives.

Despite the price, HP-labeled hardware is much more robust (I've seen several non-enterprise SSDs die after being put under enterprise load; decide whether you want to pay for the extra risk, or at least ALWAYS keep backups), so it may be worth overpaying.

You may also want to consider FusionIO cards, since SATA bandwidth may limit you (not only the disk-to-controller path; keep in mind the controller-to-bus-to-CPU path as well), while PCIe cards can be faster.

Comments

  • growse
    growse almost 2 years

    I've got an HP D2700 enclosure that I'm looking to shove some 2.5" SSD drives in. Looking at the prices of HP's SSD drives vs something like an Intel 710 and even something less 'enterprisey', there's quite a difference in price.

    I know the HP SSDs will obviously work, but I've heard rumours that buying an Intel/Crucial/whatever SATA SSD, bunging it in an HP 2.5" caddy and putting it in a D2700 won't work.

    Is there an enclosure / disk compatibility issue I should watch out for here?

    On the one hand, they're all just SATA devices, so the enclosure should treat them all the same. On the other, I'm not particularly well-versed in the various different SSD flavours to know whether there's a good technical reason why one type of drive would work, yet another one wouldn't. I can also imagine that HP are annoying enough to do firmware checks on any disks and have the controller reject those it doesn't like.

    For background, the D2700 already has 12x 300GB 10k SAS drives in it, and I was planning on getting 8x 500GB (or thereabouts) SSDs to create another zpool. Whole thing is connected to an HP X1600 running Solaris 11.

  • growse
    growse about 12 years
    I take your point, but I'd have a hard time believing that I can break a SATA/SAS host using a regular off-the-shelf SATA disk. That would indicate a broken host to me :(
  • growse
    growse about 12 years
    I'll take a look at FusionIO, thanks. My original idea was to use SSDs as a not-much-more-expensive-but-faster version of 10k 2.5" SAS drives. With HP pricing, I think that spindles come in at a much better price/performance point for my needs.
  • Alexander
    Alexander about 12 years
    By the way, you won't need a separate SSD zpool for performance.
  • Alexander
    Alexander about 12 years
    You can simply add an inexpensive SSD to your ZFS setup as cache; you'll see a nice performance boost without risking your data.
  • growse
    growse about 12 years
    I'm going to get some spindles and one of the cheap SSDs and see if they (a) work and (b) are viable as ZFS cache devices.
  • Hecter
    Hecter about 12 years
    I think @Basil means to say that, if you buy thousands of dollars in SSDs and they subsequently turn out to be unreliable or they don't play well with the RAID controller, you're back to square one with a hit to your reputation and no way to un-spend the money. It is critically important to involve business decision makers in choices that involve saving money at the possible expense of operational reliability. If your boss is a cheapskate and he tells you not to buy what you need to make a system reliable, that's one thing. If you voluntarily design around cheap stuff that fails, you're fired.
  • SiXoS
    SiXoS about 12 years
    How does the I/O latency of synchronous writes to an SSD (as ZIL) compare with the RAM of a BBU hardware RAID controller?
  • growse
    growse about 12 years
    Of course, I'm not suggesting just one SSD going forward, I meant just one to test compatibility. If it works and gives decent performance in testing, I'll up that number. For L2ARC, would I be better off with SLC?
  • growse
    growse about 12 years
    Agreed. It's about managing the risk/performance/budget triumvirate. I came into this question thinking that the cost/performance for SSDs was a lot better than it actually appears to be (cheap SSDs are worse than I thought, good SSDs are more expensive than I thought). Management wouldn't agree that the performance benefit of using lots of expensive SSDs as a zpool is worth the cost. However, adding caching is an easier sell.
  • growse
    growse about 12 years
    Thanks for the specifics on MLC/SLC - ignore my question above asking you the same thing :) Do the newer Intel MLC drives track wear level in their firmware, or does this require specific OS support? Need to read up on how well Solaris 11 plays with them. I also have two zpools, one 7200rpm SATA and one 10k 2.5" SAS, so I'll need to figure out which would benefit from caching most first.
  • growse
    growse about 12 years
    Controller I believe is a SmartArray P212 (will double-check), which is also potentially on the cards for an upgrade. I'm not using multipathing (at the moment), and I'm conscious of the bus limits of the single SAS cable between the D2700 and the X1600. Would multipathing require another, separate controller, or could I up the bandwidth by upgrading to a single P812 (for example)? I appreciate there's a redundancy argument here as well, but leave that aside for a moment....
  • ewwhite
    ewwhite about 12 years
    I use MLC for L2ARC. But at this point, I'll only use SAS SSDs. Maybe SATA SSDs for pure SSD zpool scenarios, but it's worth trying to use enterprise disks where you can.
  • Hecter
    Hecter about 12 years
    Of course the SATA zpool will benefit more from caching, but you also have the option of dividing your L2ARC and ZIL devices between the two arrays. If you buy a 20GB SLC SSD for your ZIL, format it into two slices and assign them as 10GB ZIL devices, one for each zpool (see the sketch at the end of this thread). Remember that RAID5 and RAIDZ1 are not a particularly good idea with large SATA drives; for vdevs made up of SATA drives 500GB and larger, I would suggest mirrors or RAIDZ2.
  • ewwhite
    ewwhite about 12 years
    So you should redesign. The SA P212 is not a good ZFS controller. You'd be better off with an LSI SAS HBA for compatibility and performance reasons. You don't need multipath, but if you have a D2700 unit, it probably has two internal controllers. If so, multipath isn't difficult to achieve. For ZFS, basic SAS controllers are preferred. You will have problems with low-end SSDs and the HP controllers.
  • growse
    growse about 12 years
    Interesting - any specific suggestions? Going purely on internal / external connectors (An X1600 has 12 internal SATA bays) it looks like there's a few that might do the trick. The D2700 I assumed does have dual controllers as there's two ports on the back. Be good to chat with you at some point about your experiences with this kit, multipath and Solaris.
  • Hecter
    Hecter about 12 years
    @3molo Highly subjective. Which SSD? Which RAID controller? How much RAM in the ZFS host? The short answer is that ZIL on a separate physical device solves the synchronous write problem almost entirely, and that the I/O latency for synchronous writes ought to be very small, on the same order of magnitude as the write latency for sequential writes to the ZIL device itself.
  • ewwhite
    ewwhite about 12 years
    And that's why we test. There are certain solutions that work well. Others that simply don't. A pool of cheap SSDs is okay. Cheap SSDs in L2ARC or ZIL are bad. I tend to use PCIe ZIL and MLC SAS SSD for L2ARC. This is after breaking lots of lower-cost SATA units...
  • Alexander
    Alexander about 12 years
    By the way, it's unclear to me how you'll add SSDs to your enclosure without the right caddies. It looks like it would be better to install the SSDs in your server itself; that way you don't need to worry about the enclosure controller and/or ports (if I recall correctly, you'll find some free SATA ports there).
  • Basil
    Basil about 12 years
    If your box is under support (which you paid for), then there are no situations where it's worth installing anything that's not supported.
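
Following up on Hecter's slicing suggestion above, here is a minimal sketch of sharing one SLC SSD between two pools' ZILs. The pool and device names are placeholders; create the two roughly 10GB slices with format(1M) first.

    # One ~20GB SLC SSD sliced into s0 and s1, one log device per pool:
    zpool add satapool log c6t0d0s0
    zpool add saspool log c6t0d0s1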