how many blade enclosures can you fit in a rack?


Solution 1

With a 10U chassis and 42U of rack space, you can fit 4 of them in a rack with 2U to spare.

Is this a good idea? That depends a lot on your infrastructure. If you're using pass-through ports instead of integrated switches, that could be up to 512 cables that'll need to be run to another rack for connecting to things, as well as the power to drive it all. Using the integrated switches makes it easier, though getting power to it will still take some creative routing.
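
The arithmetic behind this answer can be sketched out. The 8 bays x 16 pass-through ports per enclosure figure is an assumption for illustration; substitute your chassis's actual interconnect bay and port counts:

```python
# Back-of-the-envelope rack fit and cabling for 10U enclosures
# in a 42U rack. The bay/port counts are assumptions, not specs.

RACK_U = 42
ENCLOSURE_U = 10

enclosures = RACK_U // ENCLOSURE_U   # 4 enclosures
spare_u = RACK_U % ENCLOSURE_U       # 2U to spare

# With pass-through modules, every port becomes a cable to run.
# Assume 8 interconnect bays x 16 pass-through ports per enclosure.
cables_per_enclosure = 8 * 16                     # 128
total_cables = enclosures * cables_per_enclosure  # 512

print(enclosures, spare_u, total_cables)  # 4 2 512
```

Integrated switches collapse those 128 per-enclosure cables down to a handful of uplinks, which is why they make the cabling so much more manageable.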

Solution 2

I have some 50U racks with 5 x HP C7000s in them; as long as you can power and cool them, there's no issue.

Solution 3

We have 12 C7000s fully loaded with BL465 G6 servers and using Virtual Connect modules, installed in three 42U cabinets located in a colo facility. We have two 50A 3-phase power circuits going to each rack. Each circuit connects to an HP 0U PDU. The PDUs then feed power to the enclosures. We were lucky that this was a new installation and we didn't have existing racks which needed to be utilized. This allowed us to order the enclosures pre-racked and cabled to the PDUs in HP cabinets. We got HP wide racks with cable channels on each side. This has worked out great because the enclosures otherwise leave little space for cabling in the cabinet. With multiple enclosures connecting to redundant Ethernet and Fibre Channel switches, the cabling adds up fast.

The colo facility's cooling is pretty traditional underfloor cooling with alternating cold and hot aisles. We have 4 rows of 5 racks in our cage. We also have a 3PAR T800 SAN which throws off a good amount of heat, so we had to work closely with the colo facility regarding placement of the equipment to optimize cooling. Basically, we have a C7000 rack in 3 of the 4 rows and the 3PAR in the 4th.
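
To put a rough number on what two 50A 3-phase circuits buy you per rack: assuming 208V line-to-line and the usual 80% continuous-load derating (both assumptions; use your facility's actual voltage and local code), the usable budget works out to roughly 28-29 kW per rack:

```python
import math

# Rough per-rack power budget for two 50A 3-phase circuits.
# 208V line-to-line and the 80% derate are assumptions.
VOLTS_LL = 208
AMPS = 50
DERATE = 0.8
CIRCUITS = 2

# 3-phase power: P = sqrt(3) * V_line-to-line * I
watts_per_circuit = math.sqrt(3) * VOLTS_LL * AMPS * DERATE
rack_budget_kw = CIRCUITS * watts_per_circuit / 1000

print(round(rack_budget_kw, 1))  # 28.8 (kW usable per rack)
```

That's comfortably more than four fully loaded enclosures typically draw, which is why the power feed, not the math, is usually the constraint: not every facility will deliver two 50A 3-phase circuits to a single rack.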

William Lee
Updated on September 18, 2022

Comments

  • William Lee
    William Lee almost 2 years

    Considering the power and cooling requirements of HP's or IBM's newest blade (10U) chassis fully stacked, how many of them can you fit in a standard-size rack (42U)?

    • Admin
      Admin about 13 years
      Let's see - 42/10 = 4, with a couple of gaps for cooling if required.
  • William Lee
    William Lee about 13 years
    Thanks a lot for your answer. To be honest, my main concern was the cooling part; I don't have a lot of experience with how much heat these blade beasts create. Regarding the cabling, we're going with Virtual Connect from HP, so I'm hoping things will be pretty straightforward. I'm not sure I understand why powering them up would be so difficult. Can you please shed some light on that? Is it that different from trying to power up 20 x 2U servers sitting in the same rack?
  • William Lee
    William Lee about 13 years
    Thanks a lot for your answer. One thing, though: can you please elaborate on how exactly you power and cool them efficiently?
  • Chopper3
    Chopper3 about 13 years
    Power is easy: each enclosure has separate IEC C19 cables from the room PDU, on the same phase but on different spurs. Cooling in this case is via semi-sealed hot-air extraction from the rear rather than via front-facing chillers. The hard bit is how to get the top two enclosures into the racks; we used hydraulic scissor-jacks, but I would have hated to have had them man-handled into place.
  • Chopper3
    Chopper3 about 13 years
    Virtual Connect or FlexFabric? I like and trust the former but have fallen in love with the latter recently :)
  • MrGigu
    MrGigu about 13 years
    @Jimmy - powering that many blades can be difficult. We've done a rack full of Dell M1000e's, and each one has six power supplies. That's the same number of supplies you'd have to power if you had 2U servers. We had to run 5 kVA of power into the rack (which you can't always get, depending on the capacity of the datacentre), and as each power cable is very fat, you need a lot of big velcro ties, since all six of those power cables terminate in a fairly small space (when you're using 2U servers it's all spread out).
  • Rob Moir
    Rob Moir about 13 years
    Fully populated Blade server chassis generate a fair amount of heat and use a considerable amount of power in the art of doing so. Once you get into the realms of an entire rack full of them then making sure you have both areas covered takes a bit of thought -- HP & IBM can tell you how much power each chassis requires and how much heat it will generate and from there both power and cooling requirements for each rack are easy to work out.
  • Rob Moir
    Rob Moir about 13 years
    ...This isn't what I would call a problem as such, just a lot of detail that you have to work out beforehand. You can't pull the requirements for this level of equipment out of your ear and hope for the best. Speaking of ears, if you have several racks populated like this then you may also want to consider ear defenders. No, I'm not kidding, not entirely anyway.
  • Deb
    Deb about 13 years
    Ear defenders are a must with those things. They're LOUD.
  • MrGigu
    MrGigu about 13 years
    We didn't have any such luxury. Without anything in them, ours weighed about 30kg. Thankfully my brothers-in-law are all over 6'2" and work as labourers, so they made pretty light work of it ;)
  • learnningprogramming
    learnningprogramming about 13 years
    You really need to have this discussion with whomever runs your data center to determine if one rack can handle the necessary power and cooling for a full rack's worth of blade enclosures. I've been in several different colocation facilities where we only put 2 or 3 blade enclosures in a rack because they either weren't built to support that much cooling density or that much power density. This will vary depending on the age of the facility and how it is set up.
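
To put numbers on the heat the comments keep coming back to: essentially every watt a rack draws comes back out as heat the facility has to remove. A minimal sketch of the conversion, assuming a hypothetical 6 kW fully loaded draw per enclosure (use HP's or IBM's power calculator for real figures):

```python
# Converting a rack's electrical draw into cooling load.
# The 6 kW-per-enclosure draw is a placeholder assumption;
# get real numbers from the vendor's power calculator.

WATTS_TO_BTU_HR = 3.412
BTU_PER_TON = 12000  # 1 ton of cooling = 12,000 BTU/hr

enclosure_watts = 6000   # assumed fully loaded draw
enclosures_in_rack = 4

rack_watts = enclosure_watts * enclosures_in_rack
btu_hr = rack_watts * WATTS_TO_BTU_HR
tons = btu_hr / BTU_PER_TON

print(round(btu_hr), round(tons, 1))  # 81888 6.8
```

Nearly 7 tons of cooling concentrated in one rack footprint is exactly the density problem the colo facilities above couldn't always handle, and it's the number to bring to that conversation with whoever runs your data center.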