
VMware piles up next virtual stack for servers

Lots of new code coming in 2009

While VMware still has the lion's share of the money and installed base in the server virtualization software racket on x64 platforms, 2009 is shaping up to be a year when various contenders ratchet up the pressure on the company and try to steal away some business. But VMware has plenty of its own smart techies, and a marketing machine that can - and will - compete against the likes of Microsoft, Citrix Systems, Red Hat, and others.

The hypervisor - the key piece of software that virtualizes a server or a PC so it can support multiple, concurrent operating systems - has been rapidly commoditized over the past several years, with a consequent drop in hypervisor prices. That is why vendors have shifted their development and marketing focus to the add-ons that help manage virtual machines and make networks of VMs more resilient. Still, none of this means that hypervisors do not need to be improved and tweaked. They do, particularly as the underlying hardware - which still matters, after all - keeps changing underneath them.

Right now, VMware has two variants of its core hypervisor, ESX Server 3.5 (which weighs in at around 2 GB and which includes an embedded service console) and ESX Server 3i (the embedded version of the hypervisor that removes this service console and assumes this work is done by an external management mechanism like VirtualCenter). It is reasonable to expect that eventually VMware will deliver only one hypervisor - the skinny embedded one - and not support two versions, but thus far VMware has been mum on this.

But VMware is willing to give some hints about what is coming down the pike in 2009.

According to Bogomil Balkansky, senior director of product marketing at VMware, the naming convention for the future ESX Server (or ESX Servers, as the case may be) has not been set, even if people are referring to this future product as ESX Server 4.0. He says that there will be some product rebranding in 2009. The VirtualCenter management tool for ESX Server hypervisors will become vCenter; software that rides atop the hypervisor will be referred to as vServices, and virtual file systems and other storage-related code will be called vStorage. This probably means the hypervisor itself will be called vServer 4i, but Balkansky did not say that.

The current ESX Server hypervisors allow up to 64 GB of main memory to be allocated to a single virtual machine on a server, and the VirtualSMP feature of the hypervisor (which allows a VM to span multiple cores or processor sockets) can only span up to four x64 cores right now. Balkansky says that in 2009, the hypervisor will double VirtualSMP capability to eight cores and quadruple the maximum memory per virtual machine to 256 GB. By the way, that is cores, not threads. ESX Server can certainly see and use multiple threads in a processor core, if they are there - some Intel chips use simultaneous multithreading to present extra virtual threads to software and boost efficiency, while Advanced Micro Devices' Opterons do not have simultaneous multithreading (which is silly, really) - but the hypervisor does not count a thread as a core, even if applications riding atop the operating system treat threads as if they were separate cores.
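For a concrete sense of those ceilings, here is a minimal sketch using pyVmomi, the open source Python bindings for the vSphere API; the eight-core and 256 GB figures come from Balkansky, while the bindings, the VM lookup, and the rest of the plumbing are assumptions for illustration:

```python
# Sketch only: assumes pyVmomi is installed and that `vm` is a powered-off
# vim.VirtualMachine object already fetched from vCenter.
from pyVmomi import vim

def grow_vm_to_new_limits(vm: vim.VirtualMachine) -> vim.Task:
    """Reconfigure a VM up to the new per-VM maximums cited in the article."""
    spec = vim.vm.ConfigSpec()
    spec.numCPUs = 8               # VirtualSMP doubled: eight cores per VM
    spec.memoryMB = 256 * 1024     # memory quadrupled: 256 GB per VM
    return vm.ReconfigVM_Task(spec=spec)
```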

Balkansky is expecting that virtualization-driven server consolidation will accelerate in 2009, and for an interesting reason: to boost performance on a single piece of iron. "A lot of applications running on servers today cannot take advantage of the extra cores chip makers are delivering," Balkansky explains. "They end up wasting the extra horsepower."

But such monolithic applications can be run side-by-side on a single, big box and get a lot more work done. As an example, Balkansky cites a benchmark test VMware ran with IBM on some x64 servers. Eight copies of Exchange running on eight x64 servers supported a total of 8,000 mailboxes in the test, but consolidating onto a single server using ESX Server not only eliminated seven footprints, it allowed those eight instances of Exchange to support 16,000 mailboxes with decent throughput and response time.

"This is a harbinger of things to come. A lot of applications will behave a lot worse than Exchange when it comes to harnessing multicore servers. It is my guess that it will take more than a decade before every application will be written to really take advantage of cores," says Balkansky.

In the 2009 release, ESX Server will also get the capability to add virtual memory or CPUs to a VM instance without having to shut down and reboot the VM. Up until now, if you tweaked a VM's underlying virtual hardware, you had to restart the VM, which meant shutting down its applications for a short time. Virtual machines can't escape the problems of real machines so easily, it would seem; hot-add memory and hot-add CPU features are still missing from x64 servers and have only recently been added to RISC, Itanium, and proprietary platforms.
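Mechanically, the no-reboot trick should amount to a reconfigure call issued against a running VM that was flagged for hot-add ahead of time. A hedged sketch, again in pyVmomi terms (the two flag names are real vSphere API properties; the surrounding code is illustrative):

```python
from pyVmomi import vim

def allow_hot_add(vm: vim.VirtualMachine) -> vim.Task:
    # One-time setup while the VM is powered OFF: mark it as willing to
    # accept hot-added CPUs and memory.
    spec = vim.vm.ConfigSpec(cpuHotAddEnabled=True, memoryHotAddEnabled=True)
    return vm.ReconfigVM_Task(spec=spec)

def hot_add(vm: vim.VirtualMachine, cpus: int, mem_gb: int) -> vim.Task:
    # With the flags set, the same reconfigure call now works on a RUNNING
    # VM: no shutdown, no application outage.
    spec = vim.vm.ConfigSpec(numCPUs=cpus, memoryMB=mem_gb * 1024)
    return vm.ReconfigVM_Task(spec=spec)
```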

Next year's hypervisor update from VMware will also include a less-virtual feature called VM Direct Path, an I/O passthrough that allows a virtual machine to be tied directly to a physical I/O device, such as a disk controller or network interface card.

VMware wants to do this for two reasons, according to Balkansky. For one thing, by going physical with the link to an I/O device, the performance gets closer to native speeds. Moreover, if you have a peripheral that is not supported by the virtual I/O created inside ESX Server, you can use VM Direct Path to support that peripheral. You don't have to wait for a driver from VMware or the peripheral supplier to come along. The only issue with VM Direct Path is that supporting peripherals in this manner sacrifices their virtual mobility. So VMotion, which allows a VM to teleport around a network of servers using shared storage, and DRS, the distributed resource scheduling software in the VMware Infrastructure stack, won't work on VMs that use the VM Direct Path feature.
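In API terms, the passthrough boils down to adding a virtual PCI device whose backing is a physical device on the host. A rough pyVmomi sketch follows, with the PCI identifiers left as placeholder parameters:

```python
from pyVmomi import vim

def attach_physical_device(vm: vim.VirtualMachine, pci_id: str,
                           device_id: str, system_id: str,
                           vendor_id: int) -> vim.Task:
    # Back a virtual PCI passthrough device with a real device on the host.
    # Per the article's caveat: a VM configured this way loses VMotion/DRS.
    backing = vim.vm.device.VirtualPCIPassthrough.DeviceBackingInfo(
        id=pci_id, deviceId=device_id, systemId=system_id, vendorId=vendor_id)
    dev = vim.vm.device.VirtualPCIPassthrough(
        key=-100,  # temporary negative key; the server assigns the real one
        backing=backing)
    change = vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.add, device=dev)
    return vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[change]))
```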

Balkansky says that VMware is also cooking up its own fault-tolerant VM capability, not code imported from partners Stratus Technologies or SteelEye Technology. While not divulging a lot of details, Balkansky says the code will keep two virtual machines running on two distinct physical servers in lock-step, with one VM active and the other operating in passive mode until a failure. This code was demonstrated at VMworld last summer, in fact.
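If the shipping feature follows the pattern VMware uses elsewhere in its API, protecting a VM ought to be a single call against the primary. A speculative pyVmomi sketch - the fault-tolerance call shown does exist in the vSphere API, but shared storage, compatible hosts, and the rest of the setup are assumed:

```python
from pyVmomi import vim

def protect_vm(vm: vim.VirtualMachine) -> vim.Task:
    # Ask vCenter to spin up the passive secondary on another host; the
    # hypervisor then keeps the pair in lock-step and promotes the secondary
    # if the primary's host fails. Assumes shared storage and FT-capable
    # hosts are already in place.
    return vm.CreateSecondaryVM_Task()
```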

On the networking front, the vNetwork Distributed Switch, a software switch that VMware created in conjunction with Cisco Systems to interface network links to VMs, will be delivered as a virtual appliance running atop ESX Server (or whatever the future product is called). Cisco will apparently sell its own version of the product, too, called the Nexus 1000V.

The vStorage APIs are going to be opened up so storage administration tools, which think in terms of LUNs and arrays, can see where VMs and their affiliated VMDK files are located on the disk arrays. Right now, VirtualCenter can't easily tell you where the VMDKs are physically located on the storage, which means storage admins can't easily figure out what is running where before they tweak an array in some way.
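The VM-side half of that mapping is easy enough to sketch: walk a VM's virtual disks and read off the datastore paths of their VMDK files. A pyVmomi illustration, assuming the vm objects have already been fetched from vCenter:

```python
from pyVmomi import vim

def vmdk_locations(vm: vim.VirtualMachine) -> list:
    # Walk the VM's virtual disks and return the datastore path of each
    # VMDK, e.g. "[datastore1] myvm/myvm.vmdk" - the VM-to-array mapping
    # storage admins need before they rejigger a LUN.
    paths = []
    for dev in vm.config.hardware.device:
        if isinstance(dev, vim.vm.device.VirtualDisk):
            if hasattr(dev.backing, "fileName"):
                paths.append(dev.backing.fileName)
    return paths
```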

VMware is also cooking up a variant of thin provisioning for VMDKs that will allow a virtual machine to overcommit its storage capacity, much as VMware has done for years with memory capacity. Such overcommitment may make some people jumpy, but allocating memory and disk to a VM and then only using a fraction of it is inefficient. As long as the allocated capacity can be made available when a VM needs it, no harm done.
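Mechanically, thin provisioning is a flag on the disk's backing at creation time; here is a hedged pyVmomi sketch, with the controller key and unit number as placeholder assumptions:

```python
from pyVmomi import vim

def add_thin_disk(vm: vim.VirtualMachine, size_gb: int) -> vim.Task:
    # The guest is promised the full capacity up front, but blocks on the
    # array are only consumed as the guest actually writes to them.
    backing = vim.vm.device.VirtualDisk.FlatVer2BackingInfo(
        diskMode="persistent", thinProvisioned=True)
    disk = vim.vm.device.VirtualDisk(
        key=-101,                         # temporary key; server assigns it
        backing=backing,
        capacityInKB=size_gb * 1024 * 1024,
        controllerKey=1000,               # assumed: first SCSI controller
        unitNumber=1)                     # assumed: free slot on it
    change = vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
        fileOperation=vim.vm.device.VirtualDeviceSpec.FileOperation.create,
        device=disk)
    return vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[change]))
```

The flag lives on the individual disk's backing rather than on the VM, so thin and fully allocated disks can be mixed on the same machine. ®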
