IMSA MOBILE APPS – 3 – SCALING WITH AZURE

When taking your presence to mobile, a scalability conversation always comes up quickly. This is especially true when the systems you need to access are on-premises. Your on-premises systems may never have been designed for the user load that mobile apps would add. Additionally, they may not even be exposed to the internet, which introduces a whole set of security complexities that need to be solved. In the case of IMSA, we are relying on services already exposed to the internet, so that is one less set of issues to manage.

Through our experience building Azure projects such as CNN, we knew several considerations would apply. The services referenced below are those supplied directly by IMSA:

  • How many users would concurrently be accessing the services through the mobile apps?
  • What is the latency for the service calls?
  • How much effort is it for the service to generate the data?
  • How often are the services taken down for maintenance? For how long?
  • Will the services change over time as backend systems change?

These are relatively simple questions, but they shape the approach you take to scaling. To provide the best possible mobile experience, we envisioned a brokering capability served by Azure. All of the mobile apps across iOS, Android, and Universal Apps would go through this brokering layer for data access, and the brokering layer caches data from the IMSA services so it can be served quickly.

There is immense flexibility in how you shape solutions in Azure for scale, particularly around caching. Ultimately, the purpose of data caching is to minimize the number of trips to the backend services. There are cases where the backend services are so expensive in time and resources to call that the architecture must do everything possible to keep the user from paying the price of waiting for that call to complete. In that situation, Azure can be set up to actively keep its cache fresh and minimize the number of calls to the backend services. The mobile apps then always have a fast and fluid experience and never feel slow, and the company does not have to pour massive resources into scaling up its backend services.
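As a rough illustration of that proactive pattern, and not the exact setup we used for IMSA, here is a minimal sketch assuming a Node-based broker written in TypeScript with the node-redis client; the backend URL, cache key, and TTL are placeholders. A background job re-fetches the data on a schedule and rewrites it into Redis, so user requests are always served from cache and never wait on the backend.

    // A hypothetical proactive refresh job: Azure-hosted code keeps the cache
    // warm so user-facing requests are always served from Redis and never wait
    // on the backend service. The URL, key, and TTL below are placeholders.
    import { createClient } from 'redis';

    const redis = createClient({ url: process.env.REDIS_URL });
    const BACKEND_URL = 'https://backend.example.com/schedule'; // placeholder endpoint
    const CACHE_KEY = 'schedule';
    const TTL_SECONDS = 60;

    async function refreshCache(): Promise<void> {
      const response = await fetch(BACKEND_URL);
      if (!response.ok) return; // keep serving the last good copy on failure
      const body = await response.text();
      // Use a TTL longer than the refresh interval so the cache never goes
      // empty between refreshes.
      await redis.set(CACHE_KEY, body, { EX: TTL_SECONDS });
    }

    async function main(): Promise<void> {
      await redis.connect();
      await refreshCache();
      setInterval(() => {
        refreshCache().catch((err) => console.error('refresh failed', err));
      }, (TTL_SECONDS / 2) * 1000);
    }

    main().catch(console.error);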

Fortunately, this was not the case for us and the IMSA backend services. The backend services are responsive, the data returned per call is small, and it is not expensive for the backend to produce it. Even so, there is benefit to leveraging Azure. IMSA race events happen at key moments in time, and traffic spikes heavily around each event. It is not worthwhile to have hardware sitting mostly idle 90%+ of the time waiting for that spike in usage. Additionally, the IMSA services could be taken down briefly for maintenance. Using Azure to broker calls still has merit because capacity can be scaled up and down around the IMSA events, and minimal additional load is put on the backend services because Azure does most of the work of serving data to the mobile apps.

The approach we took for IMSA relied on a combination of HTTP output caching (via ETags) and Azure Redis Cache, all within Azure Mobile Services. When a mobile app makes a request to an Azure service for the first time, no ETag is present because our services have not yet generated one. However, we have the URL and the parameters passed in, which together form a unique key for the requested data. Redis Cache is checked to see if the data is present. If the data is present and has not expired, the cached data from Redis is returned. If the data is not present or has expired, Azure makes the request to the backend IMSA services, puts the response into the cache, and returns it to the calling mobile app. An ETag is generated with each response, so if the mobile app requests the same data again, that ETag is supplied. This tells our Azure services that the calling mobile app already has the data but is not sure whether it is still valid. The benefit of supplying the ETag is that we can check whether it has expired, meaning the related data in the cache has expired. If it has not, an HTTP 304 is returned, which is a much lighter-weight response than returning the cached data again.
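To make that flow concrete, here is a minimal sketch of the request path, assuming an Express-style Node handler in TypeScript, the node-redis client, and an MD5 hash of the cached body as the ETag. The endpoint, cache key scheme, and TTL are illustrative rather than the actual IMSA implementation.

    // Minimal sketch of the request flow described above, assuming an
    // Express-style handler and node-redis. Key names and TTLs are illustrative.
    import express from 'express';
    import { createHash } from 'crypto';
    import { createClient } from 'redis';

    const app = express();
    const redis = createClient({ url: process.env.REDIS_URL });
    const IMSA_BASE = 'https://services.example.com'; // placeholder for the backend endpoint
    const TTL_SECONDS = 30;

    app.get('/api/:resource', async (req, res) => {
      // The URL plus its query string uniquely identifies the requested data.
      const cacheKey = req.originalUrl;
      const cached = await redis.get(cacheKey);

      if (cached !== null) {
        // Data is in cache and not expired. If the caller's ETag matches,
        // a 304 is enough -- no body needs to travel back.
        const etag = createHash('md5').update(cached).digest('hex');
        if (req.headers['if-none-match'] === etag) {
          res.status(304).end();
          return;
        }
        res.setHeader('ETag', etag);
        res.type('application/json').send(cached);
        return;
      }

      // Cache miss (or expired): go to the backend service, cache the result,
      // and hand it back to the mobile app with a fresh ETag.
      const backendResponse = await fetch(`${IMSA_BASE}${req.originalUrl}`);
      const body = await backendResponse.text();
      await redis.set(cacheKey, body, { EX: TTL_SECONDS });

      const etag = createHash('md5').update(body).digest('hex');
      res.setHeader('ETag', etag);
      res.type('application/json').send(body);
    });

    redis.connect().then(() => app.listen(3000));

The key property is that the URL plus its parameters acts as the cache key, so identical requests from any device resolve to the same cached entry, and a matching ETag short-circuits to a 304 with no body.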

There is a downside to this approach. When simultaneous requests are made for the exact same data (based on the URL and the parameters passed in) at the exact same moment, each request could make the full trip to the backend IMSA services. If IMSA had millions of users during each event, we would prevent this by locking around the cache refresh in Redis, but they do not, so the extra engineering to prevent it is not warranted.
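If that protection ever did become worthwhile, a simple guard could look like the following sketch, again assuming the node-redis client; the request that wins a short-lived lock key refreshes the cache while the others wait briefly and re-read it. The key names and timings are hypothetical.

    // Hypothetical stampede guard: only the request that wins the lock goes to
    // the backend; the others wait briefly and re-read the cache.
    import { createClient } from 'redis';

    const redis = createClient({ url: process.env.REDIS_URL });
    const LOCK_TTL_SECONDS = 10;

    async function getWithLock(
      cacheKey: string,
      fetchFromBackend: () => Promise<string>
    ): Promise<string> {
      const cached = await redis.get(cacheKey);
      if (cached !== null) return cached;

      // Try to take a short-lived lock for this key. SET with NX returns null
      // when another request already holds the lock.
      const lockKey = `lock:${cacheKey}`;
      const gotLock = await redis.set(lockKey, '1', { NX: true, EX: LOCK_TTL_SECONDS });

      if (gotLock === 'OK') {
        try {
          const fresh = await fetchFromBackend();
          await redis.set(cacheKey, fresh, { EX: 30 });
          return fresh;
        } finally {
          await redis.del(lockKey);
        }
      }

      // Someone else is refreshing: wait a moment and retry the cache.
      await new Promise((resolve) => setTimeout(resolve, 250));
      return getWithLock(cacheKey, fetchFromBackend);
    }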

Through this technique, we have set ourselves up to handle tens of thousands of new users at each event without bringing the IMSA services to their knees.
