Back to VMware

October 13, 2007

Over the past two months I have been working with Microsoft Virtual Server. While running the numbers this year, VMware's licensing costs forced us to look at possible alternatives. One of these was Microsoft Virtual Server, with a possible move to Windows Virtualization when it is finally released.

Most of the experience with the Microsoft solution was very positive. Once the Virtual Additions were loaded in the Windows guest VMs, I was able to get very good performance out of them. I ran a variety of servers on the host (which was running Windows Server 2003 R2 SP2 and the most recent release of Microsoft Virtual Server) and could run several VMs at once without performance suffering.

I was down to two final things to test when I ran into what we consider a showstopper for any move to the Microsoft solution. The first was to test a Linux guest VM and the second was to test hot backups. Linux guests are now supported in the Microsoft solution, and hot backups can supposedly be done through Microsoft Data Protection Manager using VSS. I never made it to testing hot backups, as the Linux experience proved to be the proverbial straw that broke the camel's back.

I installed a CentOS 4.x guest. The install went with no issues and it was easy to get set up. Wanting to use a SCSI disk, I managed to get the Linux Virtual Additions to install. I wasn't very happy with this install, though, mainly because they require you to use the --force switch with RPM to get the X portion of the extensions to install. In my opinion, if you ever have to use the --force switch with RPM, something is wrong.

To get the SCSI support to start at boot I needed to integrate it into the initrd. This was not too bad, and soon I had vmadd-SCSI starting with no errors. Part of my testing was to measure IO performance. I did this by running iostat in the background and using dd to create a 2GB file on the virtual disk. The first time I did this the guest hit numerous disk errors, rendering the Linux VM unusable. My other Windows VMs continued to run and respond to requests. However, when I powered off the Linux VM it hung on shutdown. The only way I could find to power it off was to restart the host server. Luckily this was a test environment, but had this occurred in production it would have been unacceptable.
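If you want to generate a similar load without dd, the sketch below is a rough Python stand-in for the write half of that test: it writes a 2GB file in 1MB chunks, forces the data out to disk, and reports throughput. The target path is just a placeholder for a file on the virtual SCSI disk; in my actual testing I simply pointed dd at the disk and watched iostat in another session.

#!/usr/bin/env python
"""Rough stand-in for the dd test: write a large file in chunks and
report throughput. The path and sizes below are placeholders, not the
exact values from my test."""

import os
import time

TARGET = "/mnt/scsi-disk/testfile"   # placeholder path on the virtual SCSI disk
TOTAL_BYTES = 2 * 1024 ** 3          # 2GB, the same size as the dd test
CHUNK = 1024 * 1024                  # write in 1MB chunks

def write_test(path, total_bytes, chunk_size):
    buf = b"\0" * chunk_size
    written = 0
    start = time.time()
    with open(path, "wb") as f:
        while written < total_bytes:
            f.write(buf)
            written += chunk_size
        f.flush()
        os.fsync(f.fileno())         # make sure the data actually hits the disk
    elapsed = time.time() - start
    print("wrote %d MB in %.1f s (%.1f MB/s)"
          % (written // (1024 * 1024), elapsed,
             written / (1024 * 1024) / elapsed))

if __name__ == "__main__":
    write_test(TARGET, TOTAL_BYTES, CHUNK)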

I tried more that afternoon to get the Linux VM to fail after I brought the host server back up, but I could not get it to fail. When I came in the next morning, though, the Linux guest had died again with the same errors seen the previous day. It had died at 8 PM the previous night, and the only thing going on at that time was the iostat command I had left running, refreshing every 2 seconds. Again, it required a reboot of the Virtual Server host to get the Linux guest back up.

Now, it is possible the actual issue within the guest could have been fixed. But I had seen enough. There is apparently some condition that can crash a guest VM badly enough that the whole host server needs to be restarted. I cannot risk going into production when such a condition turns up in testing.

I tested VMware ESX Server similarly when we considered it for production. I was never able to crash a guest VM to the point that the entire ESX server needed to be rebooted to resolve the issue. Powering off the guest always worked. Perhaps I just haven't run into the issue with ESX, and there is some condition out there that will force me to reboot a production ESX box to fix one broken guest, but with the Microsoft solution I have actually hit this issue in testing.

So we are back on track to continue our VMware ESX implementation. It feels good to move forward again. It also helps that VMware's new Foundation packs, coming in December, will make the ESX products affordable for smaller businesses like the one I work for.
