Virtualisation - a hot topic in IT

We see it gaining momentum in hosting and server environments, and also as a useful mechanism for shipping enterprise demos and beta software. Running multiple server platforms on a single machine has many advantages – most notably hardware utilisation and management, the associated cost savings and, more recently, the pressure to be green.

At Intergen, over the past three to four years, we have been using an array of virtual technologies across the organisation and realising the benefits they provide. This blog post discusses our more recent focus: using virtual technologies to conduct development locally on each developer’s computer, utilising Microsoft Virtual PC (VPC) and virtual machines (VMs).

For a while now we have been using Microsoft Virtual Server to host enterprise server products on a single server. This has been highly successful, but performance suffers when a large number of VMs across several projects run concurrently. Until recently, the performance of our developer desktop machines largely prevented the development group from utilising virtual environments locally. Over the last year, though, the price of high-specification desktops has fallen to the point where price/performance and hardware optimisations now enable developers to host and develop within VMs locally.

Last year we decided to run a couple of enterprise-scale projects using VPC to determine the pros and cons. These projects included many key Microsoft building blocks such as Active Directory, Exchange, IIS, SharePoint and CRM, to name a few, and the development tools supporting them – Visual Studio and Team Foundation Server. I thought I would present our experiences and observations in the form of pros and cons. First, the pros:

  • Portability: As a VM is predominantly contained within a single file, moving a full VM is as simple as copying the file.
  • Versioning: VMs are easily backed up, and versioning releases and configurations is a simple process of backing up VMs regularly and cataloguing them.
  • Hardware abstraction: VMs are abstracted from the underlying hardware platform and therefore easy to share across different desktop makes and models.
  • Used as an effective testing platform: The overhead of building a full UAT environment, configuring it and loading the necessary components for each release is significantly reduced. As a solution reaches stabilisation, the lead developer builds the latest version, copies the VM to the test environment and testing commences – as each release is a separate VM, managing release versions is a simple exercise. Another benefit of using VMs for testing is the ability to save a VM part-way through a test execution and restore to that state for subsequent testing iterations, saving a lot of time. Similarly, rolling back data, files and configuration has always been a headache for the testing team – with VMs, a fresh base VM can be reinstated and started within minutes.
  • Training: As is the case with the test team, once the training environment is configured and saved, setting up a clean training environment is a simple task of copying the base training VM – no rolling back of changes to prepare for the next session.
  • Clean development environments: Developers no longer need to manage a single development environment for the many different client systems developed and supported.
  • Isolated environments: Beta software and patches/service packs can be installed and tested on separate VMs with the confidence that the install will have no effect on the existing projects.
  • Simplicity of PC configuration: Developer PCs now only need the base O/S and a few business applications such as Microsoft Office. Amongst other benefits is the performance gain from running only those components necessary to do the job, not the superset to cover a range of project requirements.
  • Adding additional team members or rebuilding a developer’s PC: Adding an additional developer to a team is as easy as copying over a VM; in the past, adding developers to a project team carried significant overhead. Similarly, rebuilding a developer’s computer now requires only reimaging the PC and copying over the VMs – perhaps two to three hours – a significant reduction in downtime.
  • Support: As a consultancy, Intergen is required to deploy and support many projects per year, and supporting projects at this scale is a challenge. VMs enable us to deploy the environment directly to the support team: on a support request, the support person locates the VM, loads it on their machine and immediately has a fully working environment.
And the cons:

  • Impact on the network infrastructure: VMs are not small, commonly reaching 8–10GB and more. Our network required upgrading to gigabit to improve the performance of copying VMs around, and disk environments needed upgrading to support the increasing demands of versioning VMs.
  • Impact on the development machines: Many of our developer machines required upgrading to run the full range of server products, plus the overhead of the VMs themselves, effectively. Luckily, high-performing hardware is now becoming affordable, with dual/quad-core CPUs, cheap RAM (4GB), affordable large disks and new CPU instruction sets optimised for virtualisation. Still, this is a significant reinvestment if a large number of developers’ machines are not of a standard to run the full product environment. Another option, if the project permits, is to mix developing against local VMs with Virtual Server VMs – lessening the impact on developers’ PCs.
  • Patches: As each VM is a totally separate instance of an O/S and its software components, patches need to be applied to each VM. This does provide a greater level of control, but with a large number of VMs it can be time consuming.
  • File corruption: As each VM file contains the entire file system, any corruption of the VM file corrupts the entire VM. Back up regularly and build fault tolerance into your disk subsystem!
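Several of the points above – portability, versioning and guarding against file corruption – come down to treating the VM file as the unit of release. As a rough sketch of what that cataloguing discipline might look like (the function name and the JSON catalogue layout here are illustrative, not anything we actually run), a timestamped copy plus a checksum record could be scripted like this:

```python
import hashlib
import json
import shutil
from datetime import datetime
from pathlib import Path

def snapshot_vm(vm_path, archive_dir, catalog_path):
    """Copy a VM image into an archive under a timestamped name and
    record its SHA-256 checksum in a JSON catalogue, so a corrupted
    copy can be detected before it is restored."""
    vm_path, archive_dir = Path(vm_path), Path(archive_dir)
    catalog_path = Path(catalog_path)
    archive_dir.mkdir(parents=True, exist_ok=True)

    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    dest = archive_dir / f"{vm_path.stem}-{stamp}{vm_path.suffix}"
    shutil.copy2(vm_path, dest)  # preserves timestamps as well as data

    # Hash in chunks: real VM images run to many gigabytes.
    sha = hashlib.sha256()
    with open(dest, "rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            sha.update(chunk)

    entries = json.loads(catalog_path.read_text()) if catalog_path.exists() else []
    entries.append({"file": dest.name, "sha256": sha.hexdigest(), "taken": stamp})
    catalog_path.write_text(json.dumps(entries, indent=2))
    return dest
```

Re-hashing an archived copy and comparing it against the catalogue entry then gives a cheap integrity check before a VM is handed to the test or support teams.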

From the observations above, you can see there are a number of considerations to think through. The greatest impact for us was developer hardware and LAN performance – we underestimated both and endured lost productivity until the machines and network were upgraded. Since moving to virtualised environments, I don’t see us reverting to developing in a non-virtualised way. The impact on the team is significant in terms of gained productivity and efficiency – something we ultimately pass directly on to our clients, and that must be a good thing!

Posted by: Tim Mole | 23 January 2008

Tags: Microsoft Virtual Server, Virtualisation
