Running 100 virtual machines on a single VMWare host server
Solution 1
Yes, you can. Even some Windows 2003 workloads run in as little as 384MiB, so 512MiB is a reasonable estimate, if a little high. RAM should not be a problem, and neither should CPU.
100 VMs is a bit steep, but it is doable, especially if the VMs are not going to be very busy. We easily run 60 servers (Windows 2003 and RHEL) on a single ESX server.
Assuming you are talking about VMware ESX, you should also know that it is able to overcommit memory. VMs hardly ever use their full allotted memory, so ESX can commit more RAM to VMs than the host physically has and run more VMs than it 'officially' has RAM for.
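As a back-of-the-envelope sketch of that commitment arithmetic (the 64GB and 512MB figures come from the question; the function name is made up for illustration):

```python
# Sketch: RAM promised to guests vs. RAM physically installed.
# Figures below are the questioner's; the helper is illustrative.

def overcommit_ratio(physical_ram_gib, vm_count, ram_per_vm_gib):
    """Ratio of RAM committed to guests over RAM actually installed."""
    committed = vm_count * ram_per_vm_gib
    return committed / physical_ram_gib

# 100 VMs at 0.5 GiB each on a 64 GiB host stays under 1.0,
# so no overcommitment is even required here.
ratio = overcommit_ratio(64, 100, 0.5)
print(f"commitment ratio: {ratio:.2f}")  # 0.78
```

Anything above 1.0 means ESX is relying on techniques like page sharing and ballooning to make up the difference.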
Most likely your bottleneck will not be CPU or RAM, but IO. VMware boasts huge IOPS numbers in its marketing, but when push comes to shove, SCSI reservation conflicts and limited bandwidth will stop you dead well before you come close to the IOPS VMware brags about.
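A rough sketch of why the disk subsystem gives out first. The per-VM and per-disk IOPS figures below are assumptions for illustration, not measurements from the post:

```python
# Rough estimate of spindles needed to satisfy aggregate random IOPS.
# 25 IOPS per VM and 180 IOPS per 15k RPM SAS disk are assumed figures.

def spindles_needed(vm_count, iops_per_vm, iops_per_disk):
    """Disks required to meet the aggregate random-IOPS demand."""
    total_iops = vm_count * iops_per_vm
    # Ceiling division: a partial disk's worth of IOPS still needs a disk.
    return -(-total_iops // iops_per_disk)

# 100 modestly busy VMs already want a double-digit spindle count,
# before any RAID write penalty is taken into account.
print(spindles_needed(100, 25, 180))  # 14
```

Four SAS mirrors (eight spindles) would fall well short of this if all the VMs turned active at once.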
Anyway, we are not experiencing the 20-VM performance degradation you describe. What version of ESX are you using?
Solution 2
One major problem with a large environment like that would be disaster prevention and data protection. If the server dies, then 100 VMs die with it.
You need to plan for some sort of failover of the VMs, and for some sort of "extra-VM" management that will protect your VMs in case of failure. Of course, this sort of redundancy means increased cost, which is probably why such an outlay is often not approved until its benefits have been demonstrated in practice (by its absence).
Remember, too, that the VM host is only one of several single points of failure:
- Network - what if the VM host's networking card goes down?
- Memory - what if a chunk of the VM host's memory goes bad?
- CPU - if a CPU core dies, then what happens to the VMs?
- Power - is there only one - or two - power cables?
- Management port - suppose you can't get to the VM host's management interface?
These are just a few: a massive VM infrastructure requires careful attention to preventing data loss and preventing VM loss.
Solution 3
No statement on the viability of this in production, but there is a very interesting NetApp demo where they provision 5440 XP desktops on 32 ESX hosts (that's 170 per host) in about 30 minutes, using very little disk space thanks to deduplication against the common VM images:
http://www.youtube.com/watch?v=ekoiJX8ye38
My guess is your limitations come from the disk subsystem. You seem to have accounted for memory and CPU usage accordingly.
Solution 4
Never done it - but I promise you'll spend much more on storage to get enough IOPS to support that many VMs than you will on the server hardware. You'll need a lot of IOPS if all 100 of those are active at the same time. Not to sound negative, but have you also considered that you're putting a lot of eggs in one basket (it sounds like you're after a single-server solution)?
Solution 5
If you're going to do that then I'd strongly urge you to use the new Intel 'Nehalem' Xeon 55xx series processors - they're designed to run VMs, and their extra memory bandwidth will help enormously too. Oh, and if you can, use more, smaller disks rather than a few big ones - that'll help a lot. Use ESX v4 over 3.5U4 too, if you can.
user9517
Updated on September 17, 2022
Comments
-
user9517 over 1 year
I've been using VMWare for many years, running dozens of production servers with very few issues. But I never tried hosting more than 20 VMs on a single physical host. Here is the idea:
- A stripped-down version of Windows XP can live with 512MB of RAM and 4GB of disk space.
- $5,000 gets me an 8-core server class machine with 64GB of RAM and four SAS mirrors.
- Since 100 of the above-mentioned VMs fit into this server, my hardware cost is only $50 per VM, which is super nice (cheaper than renting VMs at GoDaddy or any other hosting shops).
I'd like to see if anybody has been able to achieve this kind of scalability with VMWare. I've done a few tests and bumped into a weird issue: VM performance starts degrading dramatically once you start up 20 VMs. At the same time, the host server does not show any resource bottlenecks (the disks are 99% idle, CPU utilization is under 15%, and there is plenty of free RAM).
I'd appreciate it if you could share your success stories about scaling VMWare or any other virtualization technology!
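The question's cost and capacity arithmetic, written out (all figures are taken from the question itself; the variable names are just for illustration):

```python
# Per-VM cost and resource fit for the proposed server.
# $5,000, 64 GiB RAM, 512 MB and 4 GB per VM are the question's figures.

server_cost = 5000       # USD
vm_count = 100
ram_per_vm_gib = 0.5     # stripped-down XP footprint
disk_per_vm_gib = 4

print(server_cost / vm_count)     # 50.0  USD per VM
print(vm_count * ram_per_vm_gib)  # 50.0  GiB of the 64 GiB used
print(vm_count * disk_per_vm_gib) # 400   GiB of disk needed
```

So on paper RAM and disk capacity both fit; as the answers note, the contested question is whether the IOPS and the hypervisor hold up.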
-
Govindarajulu almost 15 years
What VMware product are you planning on using? ESX? ESXi? Server?
-
warren almost 15 years
Where are you buying your servers from? I want one :)
-
Taras Chuhay over 14 years
Only 5000 USD? Can you sell me two? :)
-
Olivier Dulac over 8 years
You have "this amount of CPU" in your hosting server, and each VM will get a share of it. Plus, ESXi will have overhead: "switch to this VM, manage it, switch to the next, etc.", many times per second. It means each VM will get only a fraction of the total CPU. The more VMs, the more you divide your CPU (and the more overhead you add, which means that instead of having 100 VMs, the load is effectively quite a bit more).
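That point can be sketched as a toy model. The 0.2%-of-total-CPU overhead per VM is an assumed figure purely for illustration, not a measured ESXi number:

```python
# Toy model: each VM's effective CPU slice shrinks both because the
# pie is divided further and because hypervisor overhead grows.
# overhead_frac_per_vm (0.2% of total host CPU per VM) is an assumption.

def cores_per_vm(cores, vm_count, overhead_frac_per_vm=0.002):
    """Effective cores per VM after assumed hypervisor scheduling overhead."""
    usable = cores * (1 - overhead_frac_per_vm * vm_count)
    return usable / vm_count

# 8 cores shared by 100 VMs: each gets well under a tenth of a core.
print(f"{cores_per_vm(8, 100):.3f}")  # 0.064
```

For mostly idle VMs that slice may be plenty; the model only shows why the slice shrinks faster than linearly as VMs are added.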
-
Cracoras almost 15 years
Thanks Wzzrd! I am currently using VMWare Server 2.0, but planning to try ESX very soon. I've been watching I/O on all host arrays very carefully, and the only way I was able to max it out was by rebooting multiple guests at a time. When the guests are doing light work or staying idle, the host disks are 99% idle. So I suspect that something other than CPU and IO is causing all the VMs to slow down. By the way, they slow down dramatically: it takes 20 seconds to open the Start menu, and if I run Task Manager inside a VM, the Task Manager itself takes 90% CPU - weird!
-
Cracoras almost 15 years
I would definitely create multiple "baskets" and set up some automated backups. I/O bottlenecks can be easily solved with SSD drives these days. I've been using 160GB Intel MLC drives in production and they are spectacular. You basically get 5 times better random I/O performance than top-of-the-line SAS drives (in simple RAID configurations).
-
Cracoras almost 15 years
My VMs stay idle most of the time. People connect maybe a few times a day to run some lightweight software. I have confirmed that these VMs create very little CPU overhead on the host when they are idle (20 VMs add up to 9% CPU utilization on a dual quad-core system). Would you be able to recall how the four-VMs-per-CPU limit is justified? Are they thinking about web servers or desktop OS instances?
-
Govindarajulu almost 15 years
That would be because you are using VMware Server. VMware Server is a virtualization platform that runs on top of another platform (Linux, most often), while ESX is a bare-metal virtualization platform. Very different, both in concept and in the way it performs.
-
Jason Pearce about 13 years
Listen to David. You will want an N+1 configuration, meaning you need at least one spare idle machine capable of absorbing the entire workload of another machine should it fail. My recommendation is a two-server cluster that distributes the load evenly but could independently handle all of the workload should one machine fail.
-
Govindarajulu almost 12 years
@dresende. No, it isn't. Trust me.
-
dresende almost 12 years
I'll trust you if you explain the host.system.kernel.kmanaged.LinuxTaskMemPool here: d.pr/i/q4vG