Blade servers for VMs?


Solution 1

Blade centres are a way of achieving a high density of computing power in a small, well-defined and contained unit. Other than their ability to reduce cabling mess, power draw, space requirements, etc. compared to traditional 1U servers, there's nothing special about them; each blade is a separate server and you still need to plan how to provision, manage, etc. each blade (though some high-end blade centres come with tools to make this easier, too...).

They have potential disadvantages too: a blade centre with just a few blades might cost more than the same number of equally configured 1U servers, the blades generally can't carry much local storage, and network/comms might be an issue, both in terms of the number of ports available and the flexibility of configuration.

Having said that, we use Dell's current blade offerings for some of our ESXi servers and we've been very pleased with them, and I know plenty of other places that are just as happy with blades from other vendors.

Solution 2

Different vendors sell different types of blades, so the answer depends on the type of blade system you get. HP BladeSystem is one popular example.

is the whole enclosure seen as one big machine and the more blades you add the more computing power/resources you get in VMware - or are they just x number of separate servers?

Each blade is its own server. Each blade has its own CPU and memory, and usually its own hard drives, remote access, etc. The blades plug into a blade chassis, and the chassis handles power and cooling. Networking tends to be handled on both the blade and the chassis.

Remote access exists on each blade via something like IPMI or iLO ("Integrated Lights-Out"), but the chassis tends to have a management interface that organizes the individual iLO interfaces.

The chassis handles interconnects, the switching layer, and perhaps does some routing. The power cables, network cables, keyboard/video/mouse, etc. are plugged into the chassis.

Think of blades like two dozen small servers inside of a single chassis.

VMware can be installed just as it can on a traditional rackmount server. ESXi can be installed to the local disks, loaded from an SD card or USB stick (so the disks are never touched), loaded from the network, etc.
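
To illustrate the "x number of separate servers" point: once ESXi is on each blade and the blades are joined to vCenter, every blade simply appears as another individual host in the inventory, not as one pooled machine. A minimal pyVmomi sketch of listing them (the vCenter hostname, credentials and unverified-SSL context are placeholder assumptions):

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # Placeholder vCenter details; lab-style unverified SSL for brevity.
    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="vcenter.example.com",
                      user="administrator@vsphere.local",
                      pwd="secret", sslContext=ctx)
    content = si.RetrieveContent()

    # Each blade running ESXi shows up as its own HostSystem object,
    # with its own CPU, memory and local disks.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        hw = host.summary.hardware
        print(f"{host.name}: {hw.vendor} {hw.model}, "
              f"{hw.numCpuCores} cores, {hw.memorySize // 1024**3} GiB RAM")

    view.DestroyView()
    Disconnect(si)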

Solution 3

Using blade servers with VMs is a great setup, depending on your budget.

Especially when you combine that with SAN storage. The individual blades themselves would be the ESX hosts (cluster them), and you would have shared SAN storage (iSCSI, virtual iSCSI, etc.) that all the ESX hosts could store the VMs on. The blade enclosure then acts as centralized management for all the physical blades and provides the uplink(s) into the physical network.
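
As a sanity check on that shared-storage layout, you can ask vCenter whether every blade in the cluster actually sees the same shared datastores. A rough pyVmomi sketch, where the cluster name and connection details are assumptions:

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()  # placeholder lab setup
    si = SmartConnect(host="vcenter.example.com",
                      user="administrator@vsphere.local",
                      pwd="secret", sslContext=ctx)
    content = si.RetrieveContent()

    # Find the (hypothetical) cluster that the blades were added to.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    cluster = next(c for c in view.view if c.name == "BladeCluster")

    # Every host should report the same SAN-backed datastores;
    # multipleHostAccess flags datastores mounted by more than one host.
    for host in cluster.host:
        shared = sorted(ds.name for ds in host.datastore
                        if ds.summary.multipleHostAccess)
        print(f"{host.name}: {shared}")

    view.DestroyView()
    Disconnect(si)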

If your ESX hosts are clustered, you can take advantage of HA features that will automatically restart your VMs on another ESX blade if one of them has issues. Additionally, when clustered, VMs can be distributed across ESX servers based on resource usage (automatically, or with policies). Another benefit of a setup like this is that you can use SAN replication to back up your VMs.

Keep in mind that these additional ESX features require vCenter and the appropriate licensing.
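
For reference, this is roughly how those cluster features get switched on through the vCenter API. A hedged pyVmomi sketch; the cluster name, credentials and automation level are assumptions, and as noted above it only works against vCenter with the appropriate licensing:

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVim.task import WaitForTask
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()  # placeholder lab setup
    si = SmartConnect(host="vcenter.example.com",
                      user="administrator@vsphere.local",
                      pwd="secret", sslContext=ctx)
    content = si.RetrieveContent()

    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    cluster = next(c for c in view.view if c.name == "BladeCluster")

    # Enable HA (restart VMs on another blade after a host failure) and
    # DRS (balance VMs across blades based on resource usage).
    spec = vim.cluster.ConfigSpecEx(
        dasConfig=vim.cluster.DasConfigInfo(enabled=True),
        drsConfig=vim.cluster.DrsConfigInfo(enabled=True,
                                            defaultVmBehavior="fullyAutomated"))
    WaitForTask(cluster.ReconfigureComputeResource_Task(spec=spec, modify=True))

    view.DestroyView()
    Disconnect(si)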

Solution 4

Well, a blade CENTER is a cage for blades, which are individual servers. They share some hardware (power supplies, central routing, controller subsystem and KVM), but at the end of the day every BLADE is a SERVER. The advantage is central administration. The negative is the price of the center/chassis.

Whether it makes sense for you is your decision. I tried to like them, but I realized they are stupidly expensive. SuperMicro has "Twin" cages with special motherboards that fit 2 servers per rack unit and may be cheaper (at least for me).


Comments

  • jason clark
    jason clark almost 2 years

    We're using VMware on a normal rack-mounted server and are considering getting another. I see that these days I can pick up blade enclosures and servers for relatively low cost, and I see the terms blade and VM mentioned together quite a lot.

    So my question - how would a blade setup work with VMware? Is the whole enclosure seen as one big machine, and the more blades you add the more computing power/resources you get in VMware - or are they just x number of separate servers?


  • MDMarra
    MDMarra over 11 years
    "Stupidly expensive" is relative. For HP kit, when you factor in the interconnects you start breaking even around 5 blades.
  • pauska
    pauska over 11 years
    Sorry, I know your intentions are good, but every single feature you describe here is achieved with normal servers as well. The only things blades bring to the table are a higher density of cores per rack and simplified management/cabling. They don't give you any extra features with vSphere (or any other product, for that matter).
  • pauska
    pauska over 11 years
    @MDMarra Dell told me their break-even point is 7 servers, so I guess it's vendor-dependent.
  • MDMarra
    MDMarra over 11 years
    Yeah it really depends. Plus, you can usually get a chassis or two for free if you make enough noise.
  • TomTom
    TomTom over 11 years
    On top of that, it also brings a SIGNIFICANT limitation in flexibility and a SIGNIFICANT cost compared to normal servers. And most densities are achievable without blades these days.
  • TomTom
    TomTom over 11 years
    Seriously? I ran the numbers and came out at a break-even point of INFINITE servers. The servers are NOT cheaper than what I get from SuperMicro, plus the blade center (7500). That means it NEVER works.
  • MDMarra
    MDMarra over 11 years
    @TomTom I was comparing HP rackmount servers to HP blades w/ Flex10 interconnects vs FC HBAs and 10GbE cards. I don't have any price comparison data for HP blades vs Supermicro servers.
  • TomTom
    TomTom over 11 years
    I was using Dell servers on my end. ZERO sense. There are sometimes little marketing campaigns where you get the blade center for free when you take 2-3 not-too-cheap blades - I MAY jump in there next time, because if I get the blade center for free it MAY make sense, but otherwise it is RIDICULOUS.
  • Massimo
    Massimo over 11 years
    More power in less space = more cost. That's the same for everything. Just think of 2.5" disks as opposed to 3.5" ones.
  • TomTom
    TomTom over 11 years
    @Massimo Problems reading? I don't care about space, I was comparing the cost of 16 blades in a SuperMicro setup with all the other stuff (PDU, switch) to the cost of 16 blades + center from Dell. Dell lost - higher cost per blade + a ridiculously expensive center on top. Yes, 16 blades otherwise use more space (8U, compared to the 7U the Dell case takes), but when you go that dense, space is plentiful (cooling per rack is the limit). Blades are just not worth it - you lose a lot of flexibility (network side) and every item is more expensive than from other suppliers.
  • mfinni
    mfinni over 11 years
    TomTom - some people pay for space, like when their servers are in a colo. So, that's part of the cost. Maybe not for you, but you shouldn't treat people like idiots, particularly when they've demonstrated on this site that they're not.
  • TomTom
    TomTom over 11 years
    @mfinni - no, in this case it is irrelevant too. Really. Do the math - 1U (7 or 8) makes hardly any difference when you start counting in the power cost. A full Dell blade center is a 6 kW load or something. The highest-density rack in a data center I have found so far was 20 kW - that is 3 of them; you cannot fill up the rack with blades, it gets too hot. The moment you talk blade centers, 1U for a center is a non-issue. Not when you pay around 10,000 USD MORE (!) for a blade setup than for a comparable SuperMicro setup. No, I am not joking. I ran the numbers. Now waiting for a "free blade center" promo.
  • TomTom
    TomTom over 11 years
    It is. You have a LOT of additional costs when you put more than 5-6 kW of heat into a rack, and above 20 kW it gets REALLY bad cost-wise. That is - with a blade setup - not even half a rack full of equipment. For Dell, 19 kW is around 21U - you basically have to leave the rest of the rack empty. SuperMicro FatTwin setups would use 24U for that density. A little less open space. Otherwise, look at the price of equipment for more than 20 kW per rack power density and just realize this is EXPENSIVE. More than 20k per rack in extra costs just for cooling equipment.