Azure Scalability Guide: How to Smartly Scale Your Solutions

2012-05-07 by Marina Astapchik

For every fast-growing project there comes a time when it requires more resources than an ordinary desktop server can provide. That is the right time to think about scalability.

Before diving into the article, take a look at the two charts showing how the price and performance of different hosting solutions depend on the number of users a project serves.

In this article we will talk about the public cloud, specifically Microsoft Azure, as the most price/performance-balanced solution for all types of projects. But before choosing it, it is worth reviewing which hosting solution best fits your project's needs.

So let's go back to scaling.

There are two techniques for scaling hardware resources: scale-up and scale-out.

  • Scale-up means upgrading the hardware of the current server or moving to a more powerful one. For example, you could replace an ordinary desktop PC with a professional server with 8 CPUs. This technique is preferable for smaller projects.
  • Scale-out means splitting the load across several server instances. All huge projects, like Facebook or Google, use scale-out techniques and distribute the request queue across all the instances in their networks (clouds).

Scaling out has a lot of advantages for big projects. First of all, the size of the cloud is not limited: you can start with only a few units and then add more and more. But remember: you should foresee this at the architecture phase of the project and design for parallelism, so that requests can be easily balanced between hardware instances. If you move an application to the cloud that can use only one core of your CPU, it will give you no performance benefit at all. The other advantage of scale-out is money: a 64-CPU server is much more expensive than 64 units with one CPU each.

Scale-out can also be implemented in different ways. You can split tasks between instances according to their roles, or you can group roles and set up several units with the same responsibilities to achieve a mirroring effect (see the picture).

In the Windows Azure cloud, the Virtual Machine (VM) is the unit of measurement. VMs come in different sizes, as you can see in the table below.

You can easily change the VM size in the service definition file (*.csdef) by setting the vmsize attribute of a role.
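For illustration, here is a minimal sketch of what such a service definition might look like. The role names are hypothetical; ExtraSmall, Small, Medium, Large and ExtraLarge were the vmsize values accepted at the time of writing:

    <!-- ServiceDefinition.csdef: the vmsize attribute picks the instance size per role -->
    <ServiceDefinition name="MyCloudService"
        xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
      <!-- Web front end on small instances -->
      <WebRole name="WebFrontend" vmsize="Small">
        <Sites>
          <Site name="Web">
            <Bindings>
              <Binding name="HttpIn" endpointName="HttpIn" />
            </Bindings>
          </Site>
        </Sites>
        <Endpoints>
          <InputEndpoint name="HttpIn" protocol="http" port="80" />
        </Endpoints>
      </WebRole>
      <!-- CPU-heavy background work gets a larger size -->
      <WorkerRole name="EncodingWorker" vmsize="Large" />
    </ServiceDefinition>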

Before setting the VM size for production, it is useful to test different configurations. There is no point in choosing Extra Large instances for every task type: different tasks require different resources. Video encoding, for example, needs a lot of CPU, while a database server needs more bandwidth.

Caching is a good pattern for reducing resource requirements and, as a result, saving money. A cache minimizes traffic, the number of storage transactions and database requests. For example, it can prevent repeated loading of static UI images or repeated collection of RSS data, two common sources of unneeded load that grows with client activity.

There are two different ways to implement caching in Windows Azure:

  • Client-side caching;
  • Static content generation.

Client-side caching stores content in the user's browser. There are two ways to implement it:

  • Using ETags (validation tokens sent in HTTP response headers). This method is natively supported by Azure Storage.
  • Using the Cache-Control header, which lets the browser keep content without re-requesting it for a set period (for example, up to 30 days). It works well for big static files (see the sketch after this list).
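For illustration, a minimal sketch of setting the Cache-Control header on a blob with the Windows Azure storage client library of that time; the connection string, container and blob names are hypothetical:

    // Requires the Microsoft.WindowsAzure.StorageClient assembly (Azure SDK 1.x)
    using Microsoft.WindowsAzure;
    using Microsoft.WindowsAzure.StorageClient;

    class CacheControlSample
    {
        static void Main()
        {
            var account = CloudStorageAccount.Parse(
                "DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=...");
            var blob = account.CreateCloudBlobClient()
                              .GetContainerReference("static")
                              .GetBlobReference("images/logo.png");

            // Let browsers cache this file for 30 days without re-requesting it
            blob.Properties.CacheControl = "public, max-age=2592000";
            blob.SetProperties();
        }
    }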

Static content generation can be applied to full pages, CSS files or any generated resource files. Dedicated workers can be assigned to generate the content and upload it to blob storage. In this case static content is generated only once, which saves a lot of money on Azure Storage transactions and bandwidth.
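A minimal sketch of such a dedicated worker; the container name, the configuration setting name, the refresh period and the BuildHomePage() rendering helper are all hypothetical:

    // Requires the Azure SDK 1.x assemblies
    using System;
    using System.Threading;
    using Microsoft.WindowsAzure;
    using Microsoft.WindowsAzure.ServiceRuntime;
    using Microsoft.WindowsAzure.StorageClient;

    public class StaticContentWorker : RoleEntryPoint
    {
        public override void Run()
        {
            var account = CloudStorageAccount.Parse(
                RoleEnvironment.GetConfigurationSettingValue("StorageConnectionString"));
            var container = account.CreateCloudBlobClient()
                                   .GetContainerReference("static-pages");
            container.CreateIfNotExist();

            while (true)
            {
                // Generate the page once; clients then read it from blob storage,
                // which costs far less than rendering it on every request
                var blob = container.GetBlobReference("home.html");
                blob.Properties.ContentType = "text/html";
                blob.UploadText(BuildHomePage());

                Thread.Sleep(TimeSpan.FromMinutes(10));
            }
        }

        private static string BuildHomePage()
        {
            return "<html><body>Generated at " + DateTime.UtcNow + "</body></html>";
        }
    }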

Elastic scale-out is required when you don't know how many resources you need, or when the load changes according to predictable rules (for example, the day/night load of a news portal, or the higher load of an online store during a sale). You can see the typical load patterns on the graphs below.

You can deal with the variable load in two different ways, depending on your budget and needs:

  • Maintaining excess capacity: you keep the maximum required hardware resources deployed and stop some of them when they are not needed. In this case you can bring capacity back online almost immediately, but you won't save much money, because you pay even for the stopped instances.
  • Removing and re-adding instances: this saves money, but newly added instances need time to spin up and get ready.

You can also set up rule-based scaling, using metrics to predict the server load, or manage hardware resources automatically on a schedule.

Metrics used to calculate the server load can be organized into three groups (a small sketch of a rule built on a derivative metric follows the list):

  • Primary metrics: requests per second, queue messages processed.
  • Secondary metrics: CPU utilization, request queue length, server response time.
  • Derivative metrics: rate of change of the queue length and historical load statistics for prediction.
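To make the derivative metrics concrete, here is a minimal sketch of a scaling rule driven by the rate of change of the queue length; the thresholds are hypothetical and would need tuning against your real load:

    using System;

    // Decides whether to add or remove an instance based on how fast
    // the work queue grows or shrinks (a derivative metric)
    public class QueueRateRule
    {
        private int lastLength;
        private DateTime lastSample = DateTime.UtcNow;

        // Returns +1 to scale out, -1 to scale in, 0 to do nothing
        public int Evaluate(int currentQueueLength)
        {
            DateTime now = DateTime.UtcNow;
            double seconds = (now - lastSample).TotalSeconds;
            double rate = seconds > 0
                ? (currentQueueLength - lastLength) / seconds
                : 0;

            lastLength = currentQueueLength;
            lastSample = now;

            if (rate > 5.0) return +1;                               // queue growing fast
            if (rate < -5.0 && currentQueueLength < 100) return -1;  // queue draining
            return 0;
        }
    }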

You can collect statistics from .NET code using the Windows Azure Diagnostics tools (the Microsoft.WindowsAzure.Diagnostics namespace), or use other sources such as the Event Log, IIS logs, infrastructure diagnostic logs or built-in performance counters. And always think about the price of measuring: measuring everything may cancel out your performance achievements.
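A minimal sketch of switching on a performance counter with the classic Windows Azure Diagnostics API of that era, usually placed in a role's OnStart; the counter choice and transfer period are just examples, and shorter transfer periods mean more storage transactions to pay for:

    using System;
    using Microsoft.WindowsAzure.Diagnostics;
    using Microsoft.WindowsAzure.ServiceRuntime;

    public class WebRole : RoleEntryPoint
    {
        public override bool OnStart()
        {
            var config = DiagnosticMonitor.GetDefaultInitialConfiguration();

            // Sample total CPU utilization every 30 seconds
            config.PerformanceCounters.DataSources.Add(new PerformanceCounterConfiguration
            {
                CounterSpecifier = @"\Processor(_Total)\% Processor Time",
                SampleRate = TimeSpan.FromSeconds(30)
            });

            // Push collected samples to storage every 5 minutes
            config.PerformanceCounters.ScheduledTransferPeriod = TimeSpan.FromMinutes(5);

            DiagnosticMonitor.Start(
                "Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", config);

            return base.OnStart();
        }
    }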

So before you start, think about your business needs and answer the following questions:

  • Do requests take too long to process?
  • Could the performance problems be resolved by refactoring the project, or does it really need scaling?
  • How much money can you spend on scaling?

Only after that should you start implementing automated scaling, either through the Service Management API or by changing the instance count in the service configuration file (*.cscfg). You could also create a mechanism for performance-problem notifications or set up rules for scheduled scaling.
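The instance count itself lives in the service configuration, for example:

    <!-- ServiceConfiguration.cscfg (fragment) -->
    <Role name="WebFrontend">
      <Instances count="2" />
    </Role>

And here is a minimal sketch of a scheduled scaling rule; the setInstanceCount delegate is hypothetical and in a real solution would push an updated configuration through the Service Management API:

    using System;

    public class ScheduledScaler
    {
        // Hypothetical callback that changes the Instances count
        // of the deployment (e.g. via the Service Management API)
        private readonly Action<int> setInstanceCount;

        public ScheduledScaler(Action<int> setInstanceCount)
        {
            this.setInstanceCount = setInstanceCount;
        }

        // A simple timetable: more instances during business hours
        public void ApplySchedule(DateTime utcNow)
        {
            int target = (utcNow.Hour >= 8 && utcNow.Hour < 20) ? 8 : 2;
            setInstanceCount(target);
        }
    }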

And remember: it's better to exhaust the code-level possibilities, such as caching and code optimization, before adding new instances to your system. Only smart scaling will increase your project's performance and make every cent you spend on it pay off.

