
Windows Azure auto scale: what you need to consider

One of the most newsworthy announcements from the Azure team at Build was auto scale. Cloud Services, Web Sites and Mobile Services now support automatically scaling the number of hosts up and down based on metrics such as CPU usage or queue length. This, in theory at least, has the potential to deliver lower costs for your application during quiet periods and near-limitless capacity when demand increases.


Sound too good to be true? It could be. If all your application needs to scale is more CPU or memory, then auto scale is a no-brainer; however, most modern cloud applications are composed of a variety of Azure services and third-party dependencies, and it's critical to make sure these scale along with your Cloud Service or Web Site. For example, a single Azure SQL Database is unlikely to have the capacity to serve hundreds of web servers; likewise, if your table or queue partitioning scheme isn't implemented correctly, it will become a bottleneck. Unless your application is designed to scale as a whole, auto scale may just deliver a fast web tier in exchange for an annihilated database!
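To make the partitioning point concrete, here's a minimal Python sketch (the `partition_key` helper and the `order-` ids are illustrative assumptions, not part of any Azure SDK): a key based on the current date funnels an entire day's traffic into one hot partition, while hashing a per-entity id spreads the same load evenly.

```python
import hashlib
from collections import Counter

def partition_key(entity_id: str, partitions: int) -> int:
    """Spread entities across partitions by hashing their id."""
    digest = hashlib.md5(entity_id.encode()).hexdigest()
    return int(digest, 16) % partitions

# Anti-pattern: keying on the date sends all 1,000 of today's writes
# to a single partition, which becomes the bottleneck under load.
hot = Counter("2013-08-12" for _ in range(1000))

# Hashing a per-entity id distributes the same 1,000 writes
# across all 8 partitions, so throughput scales with partition count.
spread = Counter(partition_key(f"order-{i}", 8) for i in range(1000))
```

The same trade-off applies whether the partitions are Azure Table Storage partition keys, queue shards, or federated databases: the scheme only helps if writes actually land on different partitions.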

Once you're confident in your application's ability to scale end-to-end, ask yourself: what happens if my database throttles due to a platform issue? What if my third-party email service has an outage? What if I briefly lose connectivity between machines? Transient failures are a reality in the cloud, and applications at scale need to be smart about handling them. It takes much more than retrying on failure. Consider, for a moment, that your database throttles your application. Hundreds of requests simply backing up in a retry loop can trigger throttling again the moment the database comes back online, not to mention the impact of longer-running requests on an already loaded web tier. It's the same principle as before: make sure the solution to one problem doesn't cause a new problem elsewhere in your application.
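As a sketch of what "smarter than retry on failure" might look like, here's exponential backoff with jitter in Python (the `TransientError` type and the injectable `sleep` parameter are illustrative assumptions, not an Azure API): the randomised, growing delay staggers retries so that hundreds of backed-up requests don't all hammer the recovering database at the same instant.

```python
import random
import time

class TransientError(Exception):
    """Stand-in for a throttling or connectivity error from a dependency."""

def retry_with_backoff(operation, max_attempts=5, base_delay=0.5, sleep=time.sleep):
    """Retry an operation with exponential backoff and full jitter.

    The delay doubles on each attempt, and the random jitter spreads
    concurrent retries over time instead of synchronising them into
    a fresh spike of load (a "retry storm").
    """
    for attempt in range(max_attempts):
        try:
            return operation()
        except TransientError:
            if attempt == max_attempts - 1:
                raise  # Out of attempts: surface the failure to the caller.
            delay = base_delay * (2 ** attempt)
            sleep(random.uniform(0, delay))
```

A production version would also cap the number of requests retrying at once (a circuit breaker), so a throttled dependency isn't met with a wall of queued retries the moment it recovers.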

Chances are your application will surprise you at scale. So what's the solution? If you've separated your interfaces from your implementations, you're halfway there. For example, a change in your ORM or data access technology shouldn't cause a cascading change throughout your application. If your application relies on a set of abstract functionality rather than a specific implementation, there is likely a more scalable alternative that can be swapped in. Data access too slow? Look at in-memory caching, micro-ORMs for 'hot' operations, or perhaps table storage for non-relational data. Latency too high? Consider consolidating your n-tier architecture or opting for a low-latency queuing solution. Even if you don't want to invest in designing and testing a highly scalable application up front, remembering this principle will ensure you have the flexibility to grow in the future.
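The interface/implementation split above can be sketched in Python (the `ProductStore` names are hypothetical, invented for illustration): callers depend only on the abstract store, so when data access turns out to be too slow, a caching implementation can be swapped in without a cascading change.

```python
from abc import ABC, abstractmethod

class ProductStore(ABC):
    """Abstract data access: callers depend on this, not on a concrete ORM."""

    @abstractmethod
    def get(self, product_id: str) -> dict: ...

class SqlProductStore(ProductStore):
    """Stub standing in for a slow, ORM-backed relational implementation."""

    def __init__(self):
        self._rows = {"p1": {"id": "p1", "name": "Widget"}}

    def get(self, product_id):
        return self._rows[product_id]

class CachedProductStore(ProductStore):
    """Swapped-in alternative: an in-memory cache in front of the slow store."""

    def __init__(self, inner: ProductStore):
        self._inner = inner
        self._cache = {}

    def get(self, product_id):
        if product_id not in self._cache:
            self._cache[product_id] = self._inner.get(product_id)
        return self._cache[product_id]
```

Because both classes honour the same interface, the rest of the application never knows which one it's talking to; the same trick works for swapping a micro-ORM or table storage in behind the abstraction.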

Now, if your application currently sits on two hosts and you're thinking about auto scaling up to three, then most of this probably doesn't apply to you. But if you're designing your application for great success, remember that auto scale isn't magic and isn't a substitute for cloud know-how. In either case, before you rush to enable auto scale, make sure you understand the performance characteristics of your application at every level of scale. And remember: there's nothing like production, so be prepared for the unexpected!


Posted by: James Carpinter, Senior Developer, Enterprise Applications | 12 August 2013

Tags: Azure, Microsoft, Intergen, SQL, Build

