IMSA Mobile Apps – 3 – Scaling with Azure

When taking your presence to mobile, a scalability conversation quickly occurs. This is especially true when the systems you need to access are on-premises. Your on-premises systems may never have been designed for the user load mobile apps would add. Additionally, your on-premises systems may not even be exposed to the internet, introducing a whole set of security complexities that need to be solved. In the case of IMSA, we are relying on services already exposed to the internet, so that is one less set of issues to manage.

Through our build experiences with Azure projects such as CNN, we knew several considerations would apply. The services referenced below are those supplied directly by IMSA:

  • How many users would concurrently be accessing the services through the mobile apps?
  • What is the latency for the service calls?
  • How much effort is it for the service to generate the data?
  • How often are the services taken down for maintenance? For how long?
  • Will the services change over time as backend systems change?

These are relatively simple questions, but they serve to shape the approach you take to scale. To provide the best possible mobile experience, we envisioned a brokering capability served by Azure. All mobile apps across iOS, Android, and Universal Apps would access this brokering layer for data access. This brokering layer caches data from the IMSA services for fast access.

There is immense flexibility in how you shape solutions in Azure for scale, particularly around caching. Ultimately, the purpose of data caching is to minimize the number of trips to the backend services. There can be instances where the backend services are so expensive in time and resources to call that the architecture must do everything possible to keep the user from paying the price of waiting for that call to complete. In this case, Azure can be set up to actively keep its cache fresh and minimize the number of calls to the backend services. Mobile apps would then always have a fast and fluid experience and never feel slow, and a company would not have to worry about pouring a massive amount of resources into scaling up its backend services.
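To make that proactive option concrete, here is a minimal sketch of a background cache warmer: a scheduled job that re-queries expensive backend endpoints on a timer and rewrites the Redis entries so user requests always hit warm cache. This is an illustration of the pattern rather than anything we built for IMSA; it assumes the StackExchange.Redis client, and the endpoint list, key prefix, and refresh interval are all placeholders.

```csharp
using System;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;
using StackExchange.Redis;

public class CacheWarmer
{
    private static readonly HttpClient Backend = new HttpClient();
    private readonly IDatabase _cache;
    private readonly string[] _endpoints;   // hypothetical expensive backend calls to keep warm

    public CacheWarmer(IConnectionMultiplexer redis, string[] endpoints)
    {
        _cache = redis.GetDatabase();
        _endpoints = endpoints;
    }

    // Run as a scheduled background job; users never pay for the backend trip
    // because the cache is refreshed before entries go stale.
    public async Task RefreshLoopAsync(CancellationToken token)
    {
        while (!token.IsCancellationRequested)
        {
            foreach (var endpoint in _endpoints)
            {
                string body = await Backend.GetStringAsync(endpoint);
                await _cache.StringSetAsync("warm:" + endpoint, body); // no expiry: always fresh
            }
            await Task.Delay(TimeSpan.FromSeconds(30), token);         // illustrative interval
        }
    }
}
```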

Fortunately, this was not the case for us and the IMSA backend services. The backend services are responsive, and the data is small per service call. Also, it is not expensive for the backend services to produce the data. Even in this case, there is benefit to leveraging Azure. IMSA race events happen at key moments in time, and traffic spikes heavily around each event. It is not beneficial to have hardware lying around mostly idle 90%+ of the time waiting for the spike in usage. Additionally, the IMSA services could be taken down briefly for maintenance. Using Azure to broker calls still has merit because capacity can be scaled up and down around the IMSA events, and there will be minimal additional load on the backend services because Azure is doing most of the work of serving data to the mobile apps.

The approach we took for IMSA relied on a combination of HTTP output caching (via ETags) and Azure Redis Cache, all within Azure Mobile Services. Basically, when a mobile app makes a request to an Azure service for the first time, no ETag is present because our services have not yet generated one. However, we have the URL and the parameters passed in, which together form a unique key for the requested data. Redis is checked to see if the data is present. If the data is present and has not expired, the cached data from Redis is returned. If the data is not present in Redis or has expired, Azure makes the request into the backend IMSA services, puts the response into the cache, and returns it to the calling mobile app.

An ETag is generated with each response, so if the mobile app requests the same data again, that ETag is supplied. This informs our Azure services that the calling mobile app already has the data but does not know whether it is still valid. The benefit of supplying the ETag is that we can check whether it has expired, meaning the related data in the cache has expired. If it has not expired, an HTTP 304 is returned, which is a much lighter-weight response than returning the cached data.
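As a rough illustration of that flow, here is a minimal sketch of the broker logic, condensed from the description above rather than taken from the production code. It assumes StackExchange.Redis as the Azure Redis Cache client; the class name, key prefixes, expiry, and ETag scheme are all illustrative, and in the real solution this logic lives inside Azure Mobile Services rather than a standalone class.

```csharp
using System;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using StackExchange.Redis;

public class BrokeredDataService
{
    private static readonly HttpClient BackendClient = new HttpClient();
    private readonly IDatabase _cache;
    private readonly string _backendBaseUrl;                   // hypothetical IMSA service root
    private readonly TimeSpan _ttl = TimeSpan.FromSeconds(60); // illustrative expiry

    public BrokeredDataService(IConnectionMultiplexer redis, string backendBaseUrl)
    {
        _cache = redis.GetDatabase();
        _backendBaseUrl = backendBaseUrl;
    }

    // The URL plus its parameters form the unique cache key; the mobile app may
    // supply the ETag it received on a previous response.
    public async Task<(HttpStatusCode Status, string Body, string ETag)> GetAsync(
        string pathAndQuery, string clientETag = null)
    {
        string dataKey = "imsa:data:" + pathAndQuery;
        string etagKey = "imsa:etag:" + pathAndQuery;

        RedisValue cachedETag = await _cache.StringGetAsync(etagKey);

        // 1. Cache entry is still valid and the client already holds it: a 304
        //    is a much lighter response than re-sending the payload.
        if (cachedETag.HasValue && clientETag == (string)cachedETag)
            return (HttpStatusCode.NotModified, null, clientETag);

        // 2. Cache entry is still valid but the client does not have it yet.
        if (cachedETag.HasValue)
        {
            RedisValue cachedBody = await _cache.StringGetAsync(dataKey);
            if (cachedBody.HasValue)
                return (HttpStatusCode.OK, cachedBody, cachedETag);
        }

        // 3. Missing or expired: call the backend IMSA service, cache the
        //    response with a fresh ETag, and return it to the mobile app.
        string body = await BackendClient.GetStringAsync(_backendBaseUrl + pathAndQuery);
        string newETag = "\"" + Guid.NewGuid().ToString("N") + "\"";
        await _cache.StringSetAsync(dataKey, body, _ttl);
        await _cache.StringSetAsync(etagKey, newETag, _ttl);
        return (HttpStatusCode.OK, body, newETag);
    }
}
```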

There is a downside to this approach. When simultaneous requests are made for the exact same data (based on the URL and the parameters passed in), each request could make the full trip to the backend IMSA services. If IMSA had millions of users during each event, we would prevent this by doing data locking within Redis; but they do not, so the extra engineering to prevent it is not warranted.
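For completeness, here is a hedged sketch of the guard we decided was not worth building: a short-lived per-key Redis lock so that only one of the simultaneous requests refreshes the backend while the others briefly wait and then read the cache. The helper name, key prefix, and timings are illustrative, and it again assumes StackExchange.Redis.

```csharp
using System;
using System.Threading.Tasks;
using StackExchange.Redis;

public static class CacheStampedeGuard
{
    public static async Task<string> RefreshWithLockAsync(
        IDatabase cache, string dataKey, TimeSpan ttl, Func<Task<string>> fetchBackend)
    {
        string lockKey = "lock:" + dataKey;
        string token = Guid.NewGuid().ToString();

        // Only one caller acquires the lock; it makes the single backend trip.
        if (await cache.LockTakeAsync(lockKey, token, TimeSpan.FromSeconds(10)))
        {
            try
            {
                string body = await fetchBackend();
                await cache.StringSetAsync(dataKey, body, ttl);
                return body;
            }
            finally
            {
                await cache.LockReleaseAsync(lockKey, token);
            }
        }

        // Everyone else waits briefly and serves whatever the lock holder cached.
        await Task.Delay(200);
        return await cache.StringGetAsync(dataKey);
    }
}
```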

Through this technique, we have set ourselves up to handle tens of thousands of new users at each event without bringing the IMSA services to their knees.

IMSA Mobile Apps – 2 – Planning For Maximizing Code Re-Use Across iOS, Android, and Universal Apps

While we were busy thinking through the interaction design elements of the new IMSA mobile apps, we knew we were going to have to build six apps (iOS, Android, and Universal Apps, each for phone and tablet). The app architecture we chose to follow is Model-View-ViewModel (or MVVM). Our mission was to maximize the amount of code sharing across all implementations of the app. The more code sharing we could do, the less code we would have to develop for each platform. Less code means less time to develop and less to test, making the aggressive schedule more achievable.

The Model layer contains the business logic and data that will drive the IMSA app. Data would be served through a scalable cloud infrastructure being constructed for the mobile apps. Regardless of mobile OS, the business logic and data will remain the same. How we access and retrieve the data in the cloud would also remain the same. These layers are devoid of any user interface elements and are a logical candidate for re-use across all the mobile operating systems. Perfect – one layer to write once. But we wanted more.

We suspected that the View layer would be so unique across mobile operating systems that the ViewModel layer would not be re-usable. The ViewModel layer is responsible for binding the Model layer (the business logic) to a View (the user interface). Remember, we are talking about code sharing across iOS, Android, and Universal Apps – these have to be so different that writing a consistent, shareable ViewModel layer would not be possible – right? Wrong! After some initial prototyping we were pleasantly surprised: the path we have chosen is going to allow us to use the same code in the ViewModel layer across all operating systems.
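To make the point concrete, here is a minimal sketch of the kind of ViewModel this approach lets us share; the RaceScheduleViewModel and IScheduleService names are hypothetical, not code from the IMSA apps. Because nothing in it touches platform UI, the same class compiles for iOS, Android, and Universal Apps under Xamarin, and each platform's View simply binds to its properties.

```csharp
using System.Collections.ObjectModel;
using System.ComponentModel;
using System.Threading.Tasks;

// Hypothetical Model-layer service, implemented against the Azure broker.
public interface IScheduleService
{
    Task<string[]> GetUpcomingRacesAsync();
}

public class RaceScheduleViewModel : INotifyPropertyChanged
{
    private readonly IScheduleService _schedule;
    private bool _isLoading;

    public RaceScheduleViewModel(IScheduleService schedule) { _schedule = schedule; }

    public ObservableCollection<string> Races { get; } = new ObservableCollection<string>();

    public bool IsLoading
    {
        get { return _isLoading; }
        private set { _isLoading = value; OnPropertyChanged(nameof(IsLoading)); }
    }

    // Each platform's View binds to Races/IsLoading; this logic is written once.
    public async Task LoadAsync()
    {
        IsLoading = true;
        foreach (var race in await _schedule.GetUpcomingRacesAsync())
            Races.Add(race);
        IsLoading = false;
    }

    public event PropertyChangedEventHandler PropertyChanged;

    private void OnPropertyChanged(string name) =>
        PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(name));
}
```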

From our early calculations, thanks to Visual Studio and Xamarin, we are predicting about 75% code re-use (of the non-generated code) across all the implementations! Excellent news for the developers and the project manager. We’ll dive into code examples in an upcoming blog, but next we’ll discuss our approach with Azure. Also, this video has additional information on code re-use with Xamarin.

IMSA Mobile Apps – 1 – Architecture Design Session

The IMSA Mobile Apps project is currently in flight, and we are actively working on building this cross-platform/cross-OS solution. This article is the first in a series of blogs discussing the details of the project, and we’ll be actively trying to catch up to the current day as we are busy building towards the Laguna Seca race.

Back in the first week of December 2014, we flew out to Florida to visit the IMSA team with Microsoft in Daytona Beach. Microsoft was hosting an Architecture Design Session, or ADS for short, to flesh out features of the solution. It quickly became apparent that the solution was layered and complex. Many of the features discussed have become part of a longer product roadmap, as IMSA is committed to providing the best experience possible to their fans. Also, it should be noted that, as in all ideation sessions, some of the ideas discussed end up deep in the feature backlog.

I am certain that some would ask why IMSA involved Microsoft. This is a mobile app – what does Microsoft know about building mobile apps across iOS and Android? Well, it turns out quite a lot. From past projects, we already knew the tooling we get with Visual Studio and Xamarin allows us to build amazing mobile apps across all platforms and OS’s. The other side of the coin is the plumbing we get to integrate into cloud infrastructure. This app needed to scale across the huge IMSA fan base during live events. From past projects we knew how effective we could be building scalable mobile apps with Azure. So to IMSA and to us, involving Microsoft made perfect sense.

In the ADS, some interesting features started popping up:

The app would need to change shape depending on whether or not a race is live. We thought treating the app almost like the NFL Now app would be interesting: there could always be something interesting to watch in our app, regardless of whether an event is live.

IMSA radio is a live audio stream. The app would need to deliver this feed just like other integrated audio content on your device, so turning on IMSA radio, putting your headphones on, and then putting your device in your pocket should be as natural as playing music.

Using the device’s GPS, the app should respond differently if the race fan is at the event than if they are elsewhere. When you are at an event, what you are interested in is different from when you are not.

Telemetry information from the cars. It would be just awesome to watch your favorite car, at the event or at home, and see all the g-forces it is pulling as it flies around the corners.

The IMSA services for content and structured information would not scale for a mobile play. A cloud infrastructure would need to be placed in front of the IMSA services so content could be cached and served more quickly.

After the ADS we went home and decomposed all the features while looking at the schedule. We needed to pick a race event to target for deployment, and we had a lot of homework to determine our approach. In the next blog we will discuss how we planned to maximize code re-use across all platforms and OS’s.