Rack layout recommendations

Solution 1

UPS location
Bottom. Really. All that lead acid outweighs a solid steel server any day, and you want that weight at the bottom. Unlike servers, a UPS gets pulled out next to never, and it provides stability at the base of the rack. Also, by not putting all that weight at the top, your rack is less likely to act like a metronome in the case of some heavy rocking.
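
To make the stability point concrete, here's a rough back-of-the-envelope sketch (not from the original answer; all weights and U positions are made-up example numbers) showing how the rack's centre of gravity shifts depending on where a heavy UPS sits:

    # Illustrative only: hypothetical weights (kg) and bottom U positions
    # for a 24U lab rack, comparing a UPS at the bottom vs. near the top.
    def centre_of_gravity(items):
        """Weighted average height (in U) of the installed equipment."""
        total = sum(weight for _, weight in items)
        return sum(u * weight for u, weight in items) / total

    ups_low  = [(1, 60), (5, 25), (9, 25), (13, 15)]   # UPS starting at U1
    ups_high = [(21, 60), (5, 25), (9, 25), (13, 15)]  # same UPS at U21

    print(centre_of_gravity(ups_low))   # ~4.8 U  -> low and stable
    print(centre_of_gravity(ups_high))  # ~14.4 U -> top-heavy, rocks easily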

Switches vs. Patch Panels
Depends on your config, but for a 24U I'd lean towards switches. Or if possible, external patch-panels that then feed cables into your rack.

Cable Management Arms, or, what length of cable do you need
For 1U servers, I've stopped using the arms. I'll undress the back of the server if I need to pull it out (label your cables!). For 2U servers I have no firm opinion, but for larger servers the arm makes sense.

Solution 2

  1. Heaviest things at the bottom. This is almost always your UPS, which also means that larger servers need to go near the bottom. This makes installation MUCH easier: instead of having three people hold a server up while a fourth screws it in, it usually takes only one person to help you get the first couple of screws in, and after that you're home free.
  2. Don't use cable management arms; they block airflow. I was a big fan of them until someone pointed that out. Use Velcro or zip ties to secure your power cables to your power supplies so that things don't jiggle loose or get knocked out while someone is working on another system.
  3. For a single rack (or even two), mount a switch central to the other devices, facing the rear. Even with a 42U rack you won't fill a single 48-port switch unless you have iLO/DRAC/etc. devices in everything, and in that case you're going to spend more rack space on patching equipment anyway.
  4. Longer cables bundled with Velcro give you enough slack to pull things out without disconnecting everything, while still keeping the rack on the neater side.

Solution 3

Our data centre has a raised floor, with the void used for cold air; we also run the cabling in traywork under the floor. (I consider traywork to be the oft-neglected OSI Layer 0).

Although I'd put UPS and heavy kit near the bottom, I usually leave the bottom 2-3U empty, to make it easier to pass cables up, and to allow cold air to rise in racks where kit doesn't blow front-to-back properly (e.g. side-to-side). Don't forget to use blanking plates though to keep that hot/cold aisle separation working properly if the rack has mesh doors.

In terms of cabling, if I'm expecting a lot of switchports to be used, I'd consider running a panel-to-plug loom direct to the switch card, with each of the 24/48-ports numbered individually and the bundle labelled clearly, too. Then you can present that as a patch panel somewhere central and reduce the number of joins along the cable path.
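
As a small illustration of that numbering (purely a hypothetical sketch; the panel and switch names below are assumptions, not anything from this answer), matching labels for both ends of the loom could be generated with something like:

    # Hypothetical example: print matching labels for a 24-port
    # panel-to-switch loom. "PP01" and "SW01" are made-up identifiers.
    PANEL, SWITCH, PORTS = "PP01", "SW01", 24

    for port in range(1, PORTS + 1):
        print(f"{PANEL} port {port:02d}  <->  {SWITCH} port {port:02d}")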

I prefer cable management bars used 1:1 with patch panels; I'd usually place two 24-port panels together, with one cable management bar above and one below. Patch panels go at the top of the rack, since they're light.

All my labelling is as clear as I can make it, as I may not be on-site during an incident at 2am and want to reduce chances of problems after a random vendor engineer swaps hardware out.

Solution 4

Ziptie every cable where it shouldn't move, both at the server and at the rack. Use Velcro to bundle the extra cable length needed to run the server when it's pulled out.

The UPS goes at the bottom, and so do battery packs. Heavier servers go toward the bottom; fill the remaining space toward the top with blanking panels.

Given increasing server densities, I would add a switch to the rack. Depending on your requirements, VLANs or separate switches for the management connections might be appropriate. Color-code your network segments.

Cable management arms tend to get in the way. Go with Velcro.

Cables should be just long enough to run the server when it is pulled out; anything longer becomes a problem. If necessary, provide zones to take up the extra length: near the power distribution panel for power cords, and near the switch for data cables. Wider cabinets with wiring channels in the sides are also an option.
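
As a rough way of sizing those cables (a sketch with assumed numbers, not a rule from this answer), the required length is roughly the in-rack run plus the rail travel plus a little slack for the Velcro-bundled service loop:

    # Assumed example values in metres; adjust for your own rack and rails.
    path_in_rack = 1.2    # switch/PDU to the server's rear connectors
    rail_travel  = 0.75   # how far the server slides out on its rails
    slack        = 0.3    # service loop bundled with Velcro

    print(f"order cables of at least {path_in_rack + rail_travel + slack:.2f} m")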

Comments

  • Sam Go almost 2 years

    We have a 24U rack in our lab and I'm going to completely redesign it over the next few weeks due to a scheduled major hardware upgrade.

    I have a couple of questions about the layout of the server components:

    • UPSes. At the moment all UPSes (we use non-rackmount units) are installed on a shelf near the top of the rack. Some people suggest always putting UPSes at the bottom, but I'm afraid of what happens if the nearest server falls onto them. I've never had a server fall while racking it, but never say never; it's one thing if it just drops onto the rack frame, quite another if it crushes equipment below.

    • Switches. People usually suggest putting them at the top of the rack, but the only practical reason I can see for doing this is that when I open the rackmount KVM it makes the switch front panel inaccessible, and that's fine, since a switch is about the only piece of equipment it's acceptable to block. On the other hand, all the cables come in from the bottom, so you have to stretch them through the whole rack. If you change cables frequently (and in a lab/development setup you do), that can be a headache.

    • Patch panels: to use or not to use, and if so, how? Usually people terminate all incoming cables on a patch panel and then patch from its RJ45 sockets to equipment inside the rack; I agree that's useful for large installations. But we only have about 8 cables coming in/out, so why not connect them directly to the switches?

    • Should cables be just long enough to connect the equipment, or longer so that servers can be pulled out without disconnecting anything? The first choice will never turn into cable hell, but it means servers can't be pulled out without powering them off. Remember, this is a development lab, but taking some of the equipment offline (e.g. the SAN) can take the whole lab down for up to an hour.

    Probably enough for one question. Thanks for any answers.

  • GregD almost 14 years
    +1 You really do want the UPSes at the bottom. I'm beginning to think all the cable management arms on my 1U servers aren't worth the hassle.
  • Antoine Benkemoun almost 14 years
    Putting a lot of weight at the top of the rack will make it easier for the rack to tip over and injure you...
  • Sam Go almost 14 years
    Thanks for the answer. Yes, "label everything" is probably the best hardware advice I've ever received. I do this even for cables in stock, always labelling the type and the length. Can you point me to an external patch panel? I've never seen one.
  • Deb almost 14 years
    @disserman Bolt a short 2-post rack to the top of your 24U and mount your switches and patch panels there.
  • joeqwerty almost 14 years
    Personally I've never found that cable management arms make anything less messy or easier to manage so I don't use them either.
  • Aashraya Singal almost 14 years
    @joe the only benefit I've found is the ability to slide the server out while powered on. That said, it's still not risk-free, and the risks seem to outweigh the benefits by a good margin. There aren't that many instances where you need to slide out a server while powered on, either.
  • Mitch Miller almost 14 years
    Zip ties are easy to use, but can be dangerous if over-tightened: they can kink fibre and Cat6.