Building the solution for Tech Ed Hands on Labs 2009

Over the past few years Intergen has sponsored the Hands on Labs at Tech Ed. And each year, a lot of work goes on behind the scenes to build a solution that meets the expectations of this tech-savvy audience.

Each year the labs are shipped over from the United States Tech Ed event, and each year the number of labs increases, as does their complexity and size. In 2007 our solution was comparatively simple: a one-click-install lab client running on a laptop, with the labs served over a Windows file share. In 2008 we replaced the printed manuals we had been providing with a second screen and displayed the manuals on that.

We knew from last year’s feedback that this year’s solution needed to focus on improved performance of the labs. Server-based virtualisation was an obvious choice, but moving from essentially a client solution to a server solution presents a number of issues, not least needing some powerful server hardware. HP liked the idea and committed to providing appropriate hardware, which included:

  • 4TB of storage from a 36-drive array of 15K SAS disks, presented to the hosts from an EVA6400 over 8Gb Fibre Channel (using two FC switches with MPIO)
  • A C7000 blade enclosure with seven BL490c blade servers, with 16 quad-core X5570 CPUs and 520GB of RAM
  • Two clustered LeftHand SANs with 2.4TB of storage
  • 80 thin clients (HP5730W)

One of the most interesting aspects of preparing the solution was that we didn’t know for certain, until they arrived, how many labs we would get from Microsoft, whether the labs would all use Hyper-V, or how many machines would make up each lab. What we did know: time frames are always tight, and we had to get a number of technologies working together that we had not used before.

Understanding the anatomy of a lab and the requirements of the machines within it was key to working out how to provision the labs. Each lab would be built from a number of read-only base images with differencing disks, would use 4GB of RAM, and had to be isolated from the main LAN to prevent naming and IP conflicts.

Anatomy of a Lab
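
A rough sketch of how a differencing disk can be created on top of a read-only base image is below. It assumes the Hyper-V WMI provider of that era (the root\virtualization namespace on Windows Server 2008) and its Msvm_ImageManagementService class; the wrapper shape, host name and paths are purely illustrative, not the actual code.

```csharp
using System.Management;

// Sketch only: create a differencing VHD whose parent is a read-only base image.
// Assumes the Hyper-V (v1) WMI provider in the root\virtualization namespace.
public static class DifferencingDiskSketch
{
    public static void CreateDifferencingDisk(string hostName, string childVhdPath, string parentVhdPath)
    {
        var scope = new ManagementScope(@"\\" + hostName + @"\root\virtualization");
        scope.Connect();

        using (var imageServiceClass = new ManagementClass(
            scope, new ManagementPath("Msvm_ImageManagementService"), null))
        {
            // There is a single image management service instance per host.
            foreach (ManagementObject service in imageServiceClass.GetInstances())
            {
                var inParams = service.GetMethodParameters("CreateDifferencingVirtualHardDisk");
                inParams["Path"] = childVhdPath;        // the new differencing disk
                inParams["ParentPath"] = parentVhdPath; // the read-only base image
                service.InvokeMethod("CreateDifferencingVirtualHardDisk", inParams, null);
                break;
            }
        }
    }
}
```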

Due to this complexity we revisited the design of this year’s solution four times in as many weeks.

The first design was based around Citrix Lab Manager, but for various reasons this became too complicated to implement in the time available. We also looked at using Microsoft System Center Virtual Machine Manager to manage the labs, but determined it was too complex to implement and to integrate with the Intergen Lab Manager presented to the users. We also looked into using PowerShell to control Hyper-V directly; however, this presented issues when several clients created labs at the same time, and error handling was also a problem.

The final solution was to write our own Lab Manager provisioning service, implemented in .NET 3.5 and hosted as a Windows Service. Communication between the Lab Manager front-end and the Lab Manager service used Windows Communication Foundation (WCF), which allowed us to keep the traffic between the service and the front-end client minimal and also acted as the security gatekeeper between the client and the servers themselves.
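
To give an idea of the shape of this, a minimal sketch of what the WCF contract and the Windows Service hosting could look like follows; every type, member and operation name here is illustrative rather than taken from the actual solution.

```csharp
using System;
using System.Runtime.Serialization;
using System.ServiceModel;
using System.ServiceProcess;

// Illustrative contract only; the real operations and types are assumptions.
[ServiceContract]
public interface ILabProvisioningService
{
    [OperationContract]
    LabSession StartLab(string labId, string delegateId);

    [OperationContract]
    void ReleaseLab(Guid sessionId);
}

[DataContract]
public class LabSession
{
    [DataMember] public Guid SessionId { get; set; }
    [DataMember] public string HostName { get; set; }
    [DataMember] public string[] VirtualMachineNames { get; set; }
}

// Stub implementation; the real service called into the Hyper-V layer.
public class LabProvisioningService : ILabProvisioningService
{
    public LabSession StartLab(string labId, string delegateId)
    {
        // Read the lab configuration, pick a host, create the machines...
        return new LabSession { SessionId = Guid.NewGuid() };
    }

    public void ReleaseLab(Guid sessionId)
    {
        // Tear down the lab's virtual machines and private network.
    }
}

// Hosting the WCF endpoint inside a Windows Service.
public class LabManagerWindowsService : ServiceBase
{
    private ServiceHost _host;

    protected override void OnStart(string[] args)
    {
        // Endpoints and bindings would come from the service's app.config.
        _host = new ServiceHost(typeof(LabProvisioningService));
        _host.Open();
    }

    protected override void OnStop()
    {
        if (_host != null)
        {
            _host.Close();
        }
    }
}
```

A narrow contract along these lines is what keeps the client/server chatter minimal: the front-end only ever asks for a lab or hands one back, and everything else stays server-side.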

The Lab Manager service made WMI calls, using a custom-written C# wrapper on top of the Microsoft SDK for Hyper-V, directly to one of the seven Hyper-V hosts, based on a configuration for each lab stored in SQL. The delegate-facing Lab Manager was rewritten to request labs from the lab service and launch VDI sessions.
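
As a rough illustration of the kind of WMI call involved, the sketch below starts a named virtual machine on a remote host through the Hyper-V WMI provider (root\virtualization on Windows Server 2008). The class and method come from that provider; the wrapper shape around them is an assumption.

```csharp
using System.Management;

// Sketch only: ask a remote Hyper-V host (Server 2008 WMI provider) to start a VM.
public static class HyperVSketch
{
    public static void StartVirtualMachine(string hostName, string vmName)
    {
        var scope = new ManagementScope(@"\\" + hostName + @"\root\virtualization");
        scope.Connect();

        // Find the virtual machine by its display name.
        var query = new ObjectQuery(
            "SELECT * FROM Msvm_ComputerSystem WHERE ElementName = '" + vmName + "'");

        using (var searcher = new ManagementObjectSearcher(scope, query))
        {
            foreach (ManagementObject vm in searcher.Get())
            {
                // 2 = Enabled (running) in the Msvm_ComputerSystem state model.
                var inParams = vm.GetMethodParameters("RequestStateChange");
                inParams["RequestedState"] = (ushort)2;
                vm.InvokeMethod("RequestStateChange", inParams, null);
            }
        }
    }
}
```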

This Lab Manager was written using Windows Presentation Foundation (WPF), which provided a snazzy-looking user interface for the delegates. When a lab is selected, a request is sent to the lab service and an active Hyper-V host is chosen using a round-robin algorithm. The service then asks Hyper-V to create a private network and the virtual machines. As well as the lab machines, a small open source router is started from a 1.4MB floppy image and a read-only ISO; the lab machines are connected to the private network, and the router is connected to both the private network and the main LAN.
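
The round-robin part is simple enough to sketch; something along these lines, with the list of active hosts coming from the SQL configuration (the class below is illustrative, not the actual code):

```csharp
using System.Collections.Generic;

// Illustrative round-robin selector over the currently active Hyper-V hosts.
public class HostSelector
{
    private readonly object _sync = new object();
    private int _next;

    public string NextHost(IList<string> activeHosts)
    {
        lock (_sync)
        {
            // Wrap around the active host list so load spreads evenly.
            var host = activeHosts[_next % activeHosts.Count];
            _next = (_next + 1) % activeHosts.Count;
            return host;
        }
    }
}
```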

To keep life simple we used Microsoft Access to manage the configuration in SQL.

One important goal was the ability to change the configuration of the labs dynamically without having to redeploy, which we achieved by updating the lab configuration in the SQL database. Every request made to the provisioning service interrogates the database for the most up-to-date lab information and creates new labs based on that configuration. Using a similar approach, we were also able to control which Hyper-V hosts were active in case of failure. In fact, during the event, one of the Hyper-V servers was taken offline and rebuilt with no interruption to service!
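
In practice that just means the service never caches the configuration. A simplified sketch of the query-on-every-request approach, using plain ADO.NET, is below; the table and column names are invented for illustration.

```csharp
using System.Collections.Generic;
using System.Data.SqlClient;

// Sketch only: re-read the lab's machine definitions from SQL on every request.
public static class LabConfigSketch
{
    public static List<string> GetBaseImagePaths(string connectionString, string labId)
    {
        var paths = new List<string>();

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(
            "SELECT BaseImagePath FROM LabMachine WHERE LabId = @labId", connection))
        {
            command.Parameters.AddWithValue("@labId", labId);
            connection.Open();

            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    paths.Add(reader.GetString(0));
                }
            }
        }

        return paths;
    }
}
```

Because nothing is cached, changing a lab or marking a host inactive is just a row update, which is what made the mid-event rebuild possible.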

Solution Architecture

The Windows 7 clients running on Citrix XenServer were built using Citrix XenDesktop and Provisioning Services (PVS). The Windows 7 machines were presented to the thin clients over ICA, with each client starting the Lab Manager at startup, which subsequently updated the client from a network share.

In closing, I would like to acknowledge the huge amount of time and energy the team put into this.

A big thank you to everyone who helped, particularly the Hands on Labs proctors for their work before and during the event, including many evenings, and to Stephanie O'Keefe for organising the team and the logistics around the event.

Lastly, I want to thank the core team: Dan Horwood, Duncan Smith and me, with support from Darren Wood (HP), Gavin Bennet (Citrix), Kurt Mudford (Intergen) and Ben Yu (Intergen). This was a very challenging solution to build in just over seven days.

Posted by: Tim Epps | 18 September 2009

Tags: Hands on Labs, Hyper-V, Tech Ed, WCF

