Crossing The Finish Line With IMSA


As Tim called out the other day, we recently went live with a brand new mobile experience for IMSA, and I had the privilege of leading the engineering team on the project here at IK. The scope and timeframe of the project were both ambitious: we delivered a brand-new, content-driven mobile app with live streaming audio and video, realtime in-race scoring results, custom push notifications and more, across all major platforms (iOS, Android, and Windows), with custom interfaces for both phone and tablet form factors – all in a development cycle of about twelve weeks. It goes without saying that the fantastic team of engineers here at IK are all rockstars, but without some cutting-edge development tools and great partnerships, this would have been impossible to get across the finish line on time:

  • Xamarin allowed our team to utilize a shared codebase in a single language (C#, which we happen to love) across all of our target platforms, enabling massive code reuse and rapid development of (effectively) six different apps all at once.
  • Working closely with the team at Xamarin enabled us to leverage Xamarin.Forms to unlock even further code-sharing than would have been otherwise possible, building whole sections of the presentation layer in a single, cross-platform XAML-based UI framework.
  • On the server side, our partners at Microsoft continue to do world-class work on Azure, which made utilizing Mobile App Service (née Azure Mobile Services) a no-brainer. The ability to scale smoothly with live race-day traffic, the persistent uptime in the face of tens of thousands of concurrent users making millions of calls per day, and the ease of implementation across all three platforms all combined to save us countless hours of development time versus a conventional DIY approach to the server layer.
  • Last but not least, being able to leverage Visual Studio’s best-of-breed suite of developer tools was essential to the truly heroic amounts of productivity and output of our engineering team at crunch time. And Visual Studio Online enabled the Project Management team and myself to organize features and tasks, track bugs, and keep tabs on our progress throughout the hectic pace of a fast development cycle.

The final result of this marriage between cutting-edge cross-platform technology and an awesome team of developers is a brand new app experience that’s available on every major platform, phone or tablet, and this is just the beginning – we have lots of great new features in store for IMSA fans worldwide. I’ll be following up with a couple more technical posts about particular challenges we faced and how we overcame them, and be sure to check out the next IMSA event in Detroit the weekend of May 29-30; I know I’ll be streaming the live coverage from my phone!

IMSA Mobile Apps – 3 – Scaling With Azure

When you take your presence to mobile, a scalability conversation quickly follows. This is especially true when the systems you need to access are on-premises: they may never have been designed for the user load mobile apps add, and they may not even be exposed to the internet, which introduces a whole set of security complexities to solve. In IMSA's case, we are relying on services already exposed to the internet, so that is one less set of issues to manage.

From our experience building Azure projects such as CNN, we knew several considerations would apply. The services referenced below are those supplied directly by IMSA:

  • How many users would concurrently be accessing the services through the mobile apps?
  • What is the latency for the service calls?
  • How much effort is it for the service to generate the data?
  • How often are the services taken down for maintenance? For how long?
  • Will the services change over time as backend systems change?

These are relatively simple questions, but they shape the approach you take to scale. To provide the best possible mobile experience, we envisioned a brokering layer hosted in Azure. All of the mobile apps, across iOS, Android, and Universal Apps, would go through this brokering layer for data access, and the layer caches data from the IMSA services for fast access.

There is immense flexibility in how you shape solutions in Azure for scale, particularly around caching. Ultimately, the purpose of data caching is to minimize the number of trips to the backend services. There are cases where the backend services are so expensive to call, in time and resources, that the architecture must do everything possible to keep the user from paying the price of waiting for that call to complete. In that situation, Azure can be set up to actively keep its cache fresh and minimize the number of calls to the backend services. The mobile apps then always have a fast and fluid experience and never feel slow, and the company does not have to pour massive resources into scaling up its backend services.

Fortunately, this was not the case for us and the IMSA backend services. The backend services are responsive, the data returned per service call is small, and it is not expensive for the backend to produce. Even so, there is benefit to leveraging Azure. IMSA race events happen at key moments in time, and traffic spikes heavily around each event. It is not cost-effective to have hardware lying around mostly idle 90%+ of the time waiting for that spike in usage. Additionally, the IMSA services could be taken down briefly for maintenance. Using Azure to broker the calls still has merit because capacity can be scaled up and down around the IMSA events, and minimal additional load is put on the backend services because Azure does most of the work of serving data to the mobile apps.

The approach we took for IMSA relies on a combination of HTTP output caching (via ETags) and Azure Redis Cache, all within Azure Mobile Services. When a mobile app makes a request to an Azure service for the first time, no ETag is present because our services have not yet generated one. However, we have the URL and the parameters passed in, which together form a unique key to the requested data. The Redis cache is checked to see if the data is present. If the data is present and has not expired, the cached data from Redis is returned. If the data is not present in Redis, or has expired, then Azure makes the request to the backend IMSA services, puts the response into the cache, and returns it to the calling mobile app. An ETag is generated with each response, so if the mobile app requests the same data again, that ETag is supplied. This tells our Azure services that the calling mobile app already has the data but is not sure whether it is still valid. The benefit of supplying the ETag is that we can check whether it has expired, meaning the related data in the cache has expired. If it has not, an HTTP 304 is returned, which is a much lighter-weight response than sending the cached data again.
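To make the flow concrete, here is a heavily simplified sketch of that broker pattern, assuming an ASP.NET Web API controller hosted in the Mobile Services backend and the StackExchange.Redis client. The controller name, the cache expiry, and the IMSA feed URL are illustrative placeholders, not the production values.

using System;
using System.Linq;
using System.Net;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;
using System.Web.Http;
using StackExchange.Redis;

public class ScheduleController : ApiController
{
	private static readonly IDatabase Cache =
		ConnectionMultiplexer.Connect("imsa-broker.redis.cache.windows.net,ssl=true,password=...").GetDatabase();
	private static readonly HttpClient Backend = new HttpClient();

	public async Task<HttpResponseMessage> Get(string eventId)
	{
		// The URL plus its parameters form the unique key into the cache.
		string cacheKey = Request.RequestUri.PathAndQuery;

		RedisValue cachedBody = await Cache.StringGetAsync(cacheKey);
		RedisValue cachedTag = await Cache.StringGetAsync(cacheKey + ":etag");

		if (!cachedBody.HasValue)
		{
			// Cache miss or expired: one trip to the IMSA backend, then repopulate
			// the cache and mint a new ETag for this generation of the data.
			cachedBody = await Backend.GetStringAsync("http://imsa-feed.example.com/schedule/" + eventId);
			cachedTag = "\"" + Guid.NewGuid().ToString("N") + "\"";
			await Cache.StringSetAsync(cacheKey, cachedBody, TimeSpan.FromSeconds(60));
			await Cache.StringSetAsync(cacheKey + ":etag", cachedTag, TimeSpan.FromSeconds(60));
		}
		else if (Request.Headers.IfNoneMatch.Any(t => t.Tag == (string)cachedTag))
		{
			// The caller already holds the current generation of this data:
			// a 304 is a much lighter response than re-sending the payload.
			return new HttpResponseMessage(HttpStatusCode.NotModified);
		}

		var response = new HttpResponseMessage(HttpStatusCode.OK)
		{
			Content = new StringContent((string)cachedBody)
		};
		response.Headers.ETag = new EntityTagHeaderValue((string)cachedTag);
		return response;
	}
}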

There is a downside to this approach. When simultaneous requests are made for the exact same data (based on the URL and the parameters passed in) at the exact same moment, each request could make the full trip to the backend IMSA services. If IMSA had millions of users during each event, we would prevent this with locking in Redis, but they do not, so the extra engineering is not warranted.

Through this technique, we have set ourselves up to be prepared for tens of thousands of new users at each event without bringing the IMSA services to their knees.

IMSA Mobile Apps – 2 – Planning For Maximizing Code Re-Use Across iOS, Android, and Universal Apps

While we were busy thinking through the interaction design elements of the new IMSA mobile apps, we knew we were going to have to build six apps (iOS, Android, and Universal Apps, each for phone and tablet). The app architecture we chose for this is Model-View-ViewModel (MVVM). Our mission was to maximize the amount of code sharing across all implementations of the app. The more code sharing we could do, the less code we would have to develop for each platform. Less code means less time to develop and less to test, making the aggressive schedule more achievable.

The Model layer contains the business logic and data that drive the IMSA app. Data is served through a scalable cloud infrastructure being built for the mobile apps. Regardless of mobile OS, the business logic and data remain the same, and how we access and retrieve the data in the cloud also remains the same. Because these layers are devoid of any user interface elements, they are a logical candidate for re-use across all the mobile operating systems. Perfect – one layer to write once. But we wanted more.
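To make that concrete, here is a minimal sketch of the kind of UI-free Model-layer code that can live in the shared project. The IRaceDataService name, the RaceEvent shape, and the endpoint are illustrative placeholders, not the actual IMSA API.

using System.Net.Http;
using System.Threading.Tasks;
using Newtonsoft.Json;

public class RaceEvent
{
	public string Name { get; set; }
	public string Track { get; set; }
}

public interface IRaceDataService
{
	Task<RaceEvent[]> GetScheduleAsync();
}

public class RaceDataService : IRaceDataService
{
	private readonly HttpClient client = new HttpClient();

	public async Task<RaceEvent[]> GetScheduleAsync()
	{
		// Same call on iOS, Android, and Universal Apps: nothing here touches a UI framework.
		string json = await client.GetStringAsync("https://imsa-broker.example.com/api/schedule");
		return JsonConvert.DeserializeObject<RaceEvent[]>(json);
	}
}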

We suspected that the View layer would be so unique across mobile operating systems that the ViewModel layer would not be re-usable. The ViewModel layer is responsible for binding the Model layer (the business logic) to a View (the user interface). Remember, we are talking about code sharing across iOS, Android, and Universal Apps – these have to be so different that writing a consistent, shareable ViewModel layer is impossible, right? Wrong! After some initial prototyping we were pleasantly surprised: the path we have chosen allows us to use the same ViewModel code across all operating systems.
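As a rough illustration (building on the hypothetical IRaceDataService above), a ViewModel like the one below compiles unchanged for iOS, Android, and Universal Apps, because it depends only on INotifyPropertyChanged and the Model layer, never on a platform UI type.

using System.Collections.ObjectModel;
using System.ComponentModel;
using System.Threading.Tasks;

public class ScheduleViewModel : INotifyPropertyChanged
{
	private readonly IRaceDataService dataService;
	private bool isBusy;

	public ScheduleViewModel(IRaceDataService dataService)
	{
		this.dataService = dataService;
		Events = new ObservableCollection<RaceEvent>();
	}

	public ObservableCollection<RaceEvent> Events { get; private set; }

	public bool IsBusy
	{
		get { return isBusy; }
		set { isBusy = value; OnPropertyChanged("IsBusy"); }
	}

	public async Task LoadAsync()
	{
		IsBusy = true;
		foreach (var raceEvent in await dataService.GetScheduleAsync())
			Events.Add(raceEvent);
		IsBusy = false;
	}

	public event PropertyChangedEventHandler PropertyChanged;

	private void OnPropertyChanged(string propertyName)
	{
		var handler = PropertyChanged;
		if (handler != null)
			handler(this, new PropertyChangedEventArgs(propertyName));
	}
}

Each platform's View then binds to Events and IsBusy using its own UI stack (or Xamarin.Forms), which is where the remaining platform-specific code lives.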

From our early calculations, thanks to Visual Studio and Xamarin, we are predicting about 75% code re-use (of the non-generated code) across all the implementations! Excellent news for the developers and the project manager. We’ll dive into code examples in an upcoming post, but next we’ll discuss our approach with Azure. Also, this video has additional information on code re-use with Xamarin.

IMSA Mobile Apps – 1 – Architecture Design Session

The IMSA Mobile Apps project is currently in flight, and we are actively building this cross-platform, cross-OS solution. This article is the first in a series of blogs discussing the details of the project, and we’ll be working to catch up to the current day as we build towards the Laguna Seca race.

Back in the first week of December 2014 we flew out to Daytona Beach, Florida with Microsoft to visit the IMSA team. Microsoft was hosting an Architecture Design Session, or ADS for short, to flesh out features of the solution. It quickly became apparent that the solution was layered and complex. Many of the features discussed have become part of a longer product roadmap, as IMSA is committed to providing the best experience possible to their fans. It should also be noted that, as in all ideation sessions, some of the ideas discussed end up deep down in the feature backlog.

I am certain that some would ask why IMSA involved Microsoft. This is a mobile app – what does Microsoft know about building mobile apps across iOS and Android? Well, it turns out quite a lot. From past projects, we already knew the tooling we get with Visual Studio and Xamarin allows us to build amazing mobile apps across all platforms and OS’s. The other side of the coin is the plumbing we get to integrate with cloud infrastructure. This app needed to scale across the huge IMSA fan base during live events, and from past projects we knew how effectively we could build scalable mobile apps with Azure. So to IMSA and to us, involving Microsoft made perfect sense.

In the ADS, some interesting features started popping up:

  • The app would need to change shape depending on whether or not a race is live. We thought treating the app almost like the NFL Now app would be interesting: there could always be something interesting to watch in the app, whether an event is live or not.
  • IMSA Radio is a live audio stream. The app would need to deliver this feed just like other integrated audio content on your device, so turning on IMSA Radio, putting your headphones on, and then putting your device in your pocket should be as natural as playing music.
  • Using the device’s GPS, the app should respond differently when the fan is at the event than when they are elsewhere. What you are interested in at the track is different from what you are interested in at home.
  • Telemetry information from the cars. It would be awesome to watch your favorite car at the event or at home and see the g-forces it is pulling as it flies around the corners.
  • The IMSA services for content and structured information are not scalable enough for a mobile play. A cloud infrastructure would need to be placed in front of the IMSA services so content could be cached and served more quickly.

 

After the ADS we went home and decomposed all the features while looking at the schedule. We needed to pick a race event to target for deployment, and we had a lot of homework to do to determine our approach. In the next blog we will discuss how we planned to maximize code re-use across all platforms and OS’s.

Could not copy the file “obj\Debug\app.exe” because it was not found

I was in the middle of working on a project when I suddenly started getting the error in the title: Could not copy the file “obj\Debug\app.exe” because it was not found.

I tried a complete clean and rebuild. I even tried scorching my workspace but the error would not go away. I read that it might be a problem with Visual Studio extensions so I disabled all my extensions, but no luck.

I then started watching the obj\Debug folder as I did a build. It turns out that the .exe file was getting written, but then it would immediately be deleted before the build could finish. Then I found the culprit.

It turns out that Avast anti-virus thought my program was suspicious and was quarantining it during the build process.  Adding an exclusion for my entire source tree solved this issue.

Next time, I’ll discuss how to solve the issue when installing this same application.

Unit Test filtering for TFS builds using Test Explorer in VS 2012

One of the major new features in Visual Studio 2012 is the Test Explorer tool window, which consolidates 2010’s Test View and Test Results windows and adds support for using different testing frameworks together in one place. There are definitely some positive aspects to Test Explorer in comparison to its predecessors, but as a completely new piece of functionality it unfortunately left out some key features that were previously available.

One of the places it fell short was in filtering of tests to enable running a specific subset of the tests in your solution, especially when working with a set of tests set up for TFS’s Team Build. When working with small sets of tests it could be a minor annoyance, but working with hundreds or thousands of tests made it basically unusable. Thanks to Visual Studio’s new release model of frequent updates, these shortcomings are already starting to be addressed with November’s Update 1.

The preferred method of specifying tests to run with builds in TFS is by using attributes on test methods, specifically the Priority and TestCategory attributes. VS2010’s Test View displays a configurable grid listing all available tests in the open solution.
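For example, with MSTest the attributes are applied directly to the test methods; the category name and priority value below are just placeholders:

using System.Linq;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class ScoringTests
{
	[TestMethod]
	[Priority(1)]
	[TestCategory("Nightly")]
	public void FastestLapIsSelected()
	{
		var lapTimes = new[] { 92.4, 91.7, 93.1 };

		Assert.AreEqual(91.7, lapTimes.Min());
	}
}

A build definition (or the test runner) can then filter on Priority = 1 or TestCategory = Nightly to run only that subset.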


Visual Studio 2012 – What’s in a Version?

.NET Framework and CLR

We know that Visual Studio 2012 ships with and installs the new .NET Framework 4.5, which comes with new features such as portable class libraries, async/await support, async file I/O, and enhancements to W*F to name a few.  This time, instead of installing the .NET framework side-by-side with previous versions, this one REPLACES the previous one (.NET 4).  What does this mean?

.NET 2.0, 3.0, and 3.5 ran on top of the .NET CLR 2.0.  Then .NET 4 came along with its own 4.0 CLR, which installed “next to” the 2.0 CLR.  If you run a .NET application targeting 2.0 through 3.5, your app uses the 2.0 CLR.  If you run a 4.0 application, it uses the 4.0 CLR.   Now .NET 4.5 comes along, and you would think it would either (a) run against the existing 4.0 CLR, or (b) come with its own CLR.  Well, it does neither – it REPLACES the 4.0 CLR with its own, the 4.5 CLR.  Clear as mud?  Here’s a good picture from Microsoft that describes the landscape.

[Image: .NET versions and CLRs]

What gets difficult is identifying what version of the .NET framework you have installed on your machine, either by hand or programmatically at runtime.  Since the 4.5 CLR replaces the previous one, it lives in the same exact place on disk – most of it at %WINDIR%\Microsoft.NET\Framework\v4.0.30319.  This version-based name of the folder did not change between 4.0 and 4.5.  What DID change are the files contained in the folder. 

 

                      .NET 4.0 Installed    .NET 4.5 Installed
MSCOREI.DLL           4.0.30319.1           4.0.30319.17929

                      Target .NET 4.0       Target .NET 4.5
Environment.Version   4.0.30319.17929       4.0.30319.17929

Notice the Environment.Version property returns the same thing whether you’re targeting the 4.0 or 4.5 framework in your project.  This is because the property returns the CLR version (remember, 4.0.30319.17929 means 4.5), not the version of the framework your code is targeted to.

What’s Installed?

You might be asking, “How do I know what’s installed on this machine?”  In the past, I’ve gotten used to opening the %WINDIR%\Microsoft.NET\Framework directory and checking the folder names, but that’s not enough with 4.5.  There are three ways that I know of:

  1. Check the FILE VERSION of one of the .NET framework assemblies (MSCOREI.DLL, SYSTEM.DLL, etc)
  2. Check the Programs and Features list for “Microsoft .NET Framework 4.5”.  To complicate matters, it shows a version of 4.5.50709.
  3. Check a gnarly registry key:  HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full.  If there’s a DWORD value named “Release”, then you have .NET 4.5.  If it’s greater than or equal to 378389, then you have the final released version of .NET 4.5 (a quick code sketch follows below).
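If you want to do the check programmatically, a small sketch of option 3 (plus the Environment.Version caveat from the table above) looks roughly like this:

using System;
using Microsoft.Win32;

class FrameworkCheck
{
	static void Main()
	{
		object release = Registry.GetValue(
			@"HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full",
			"Release", null);

		if (release != null && (int)release >= 378389)
			Console.WriteLine(".NET 4.5 (final release) is installed.");
		else
			Console.WriteLine(".NET 4.5 is not installed (4.0 or earlier only).");

		// Reports the CLR version (4.0.30319.xxxxx), not the framework version your code targets.
		Console.WriteLine("CLR version: " + Environment.Version);
	}
}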

Code Compatibility

You can run .NET 4.0 code on a machine with .NET 4.5 installed – that’s just standard backwards compatibility.  But can you run .NET 4.5 code on a machine with only .NET 4.0?  Since the CLRs are “pretty similar”, with 4.5 simply having more features, the 4.5-targeted code will actually run, but only if you don’t use any 4.5-specific features that are missing from the 4.0 CLR.  For example, if you use the new 4.5 async/await pattern in your code, it will blow up on a machine with only 4.0 installed.
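As a tiny illustration of the kind of code that trips this, both the async/await machinery and Task.Delay only exist in 4.5:

using System;
using System.Threading.Tasks;

class AsyncDemo
{
	// Compiles when targeting .NET 4.5, but Task.Delay (and the async support it relies on)
	// is not in the plain .NET 4.0 framework, so this fails on a 4.0-only machine.
	static async Task RefreshStandingsAsync()
	{
		await Task.Delay(TimeSpan.FromSeconds(2));
		Console.WriteLine("Standings refreshed.");
	}
}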

Entity Framework

I mention EF here not because it comes with Visual Studio 2012 (it’s available via NuGet), but because it has some interesting version behavior as well.  EF 5 comes with support for Enum types, which means your database tables will have first hand knowledge of the Enum values, instead of forcing you to use strings and lookup tables.  However, that support for Enum is built upon new functionality in the .NET 4.5 framework.  So if your project is targeting .NET 4, you don’t have Enum support.  What’s tricky about this is the way NuGet sets up the references to Entity Framework for you.  It’s smart about it, but it’s not obvious.

When you use NuGet to add EF support to your project, the installation process detects what .NET version your project is targeting.  If it’s 4.0, your project will reference the 4.4 version of EF, the one WITHOUT support for Enums.  If your project is targeting .NET 4.5, your project will reference the 5.0 version of EF, and WILL have support for Enums.  You can see which one you have based on the version number of the EntityFramework referenced assembly.
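As a quick illustration of what that Enum support buys you (the entity and enum here are made up for the example):

using System.Data.Entity;

public enum CarClass
{
	Prototype = 1,
	GTLeMans = 2,
	GTDaytona = 3
}

public class Car
{
	public int Id { get; set; }
	public string Number { get; set; }

	// With EF 5 on .NET 4.5 this enum maps straight to an int column;
	// on .NET 4.0 (EF 4.4) you are back to strings or lookup tables.
	public CarClass Class { get; set; }
}

public class RaceContext : DbContext
{
	public DbSet<Car> Cars { get; set; }
}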

[Screenshots: the referenced EntityFramework assembly shows version 4.4 when targeting .NET 4.0 and version 5.0 when targeting .NET 4.5]

 

Well that’s enough version information for one post.  Hopefully this gives you a hint as to what is going on in your application when you’re running on machines with different versions of the .NET Framework.

My favorite new features in VS11 Beta

After having a week to work with the new Visual Studio Beta, I’m already missing some of the new features when I need to jump back over to 2010. Here are some of my favorites, in no particular order.

Quick Find

This is a frequently highlighted new feature, and I’ve definitely found it useful already as a quick shortcut for getting to almost any function in the IDE. My favorite part of it is that it’s so keyboard focused and allows me to skip reaching for the mouse to do things that I would normally just dig into a menu for. As a longtime SlickRun user, the keyboard shortcut to activate it also feels really natural (SlickRun uses Win+Q, Quick Find is Ctrl+Q) and the fast auto-complete search feels similar too.

Incremental Search feels right again

Although much of the new functionality was available previously through the Power Tools, a few of the things that got pulled in finally got the extra polish that makes them feel right. One of the first Power Tools features I had turned off was the new unified Find box that took over a bunch of different functions and keyboard shortcuts. The big problem I had was the way it handled Ctrl+I incremental search. I use it so frequently that it’s a reflex, and a few hiccups in keyboard focus forced me to stop and figure out how to get back to the editor state I wanted. Now, in the beta, this experience has been smoothed out. The only noticeable differences from the old Ctrl+I functionality are the new visual cues – the box up top showing what I’ve typed and the extra highlighting of all matches in the current window – but the keyboard experience feels the same and responds just like I expect.

Navigate To (Ctrl+,)

One of the things I miss most when working in a bare instance of Visual Studio that doesn’t have my two favorite extensions installed (CodeRush and ReSharper) is quick keyboard navigation to any type in my solution. The new Navigate To window provides a similar (though not identical) experience, with a search while typing listing of types and members, including camel casing caps shorthand, and instant navigation to the selected item.

Out of process solution load and build

This one is pretty obvious, but I know a lot of work went into this and it’s going to be such a dramatic time and frustration saver. Finally those extra cores are getting used on the big workloads.

TFS Local Workspaces

So many cool things about this, and I’ve only scratched the surface so far, but just the way this has been worked into the UI is really impressive. If you need to switch back and forth between Local and Server workspaces, there’s one button to click. Doing a Compare to the workspace version of a file works the same as before, but doesn’t need to talk to the server at all, so it is much faster. No more worrying about accidentally changing a file from outside VS and getting out of sync – now VS is watching and will pick up the change for you. It’s an all-around well-thought-out experience that hits a lot of the big annoyances people have with TFS source control.

New Diff tool

Finally, the Visual Source Safe compare window is gone. A lot here is similar to other diff tools that you could previously point TFS to for your comparisons, but being built in not only skips that extra configuration, but also gives you the full VS editor. Editing a file right in the compare window with full syntax highlighting, Intellisense, and even instant updates of the compare state feels like magic after so much time spent with the old one. The UI isn’t the only part that’s changed either. TFS auto-merging is much improved too and should cut down dramatically on the number of times you even see the merge window.

Copying Files After a Successful Build With Robocopy on VS2010

The Problem
In one of the latest projects I’ve been working on, I needed to be able to interchange data providers. We decided to use Prism and MEF to load the concrete instance of our IDataProvider at runtime, and created a separate project to contain the XML-based implementation of that provider. Since we need a concrete instance to debug, I wanted the latest .dll and .pdb of the XML data provider project to be copied into a Modules folder in the app’s output directory after each successful build. At first I thought I would use xcopy to copy the files, but after seeing that it has been deprecated, I switched to Robocopy (which turned out to be just as easy). Once I figured out which tool I would use to copy the files, I needed to figure out where to run the command from, and I found two options.

Option 1: Project Post-build event
The quickest option I found was to enter the Robocopy command into the Post-build Event Command Line box in the project properties. Since Robocopy ships with Windows Vista and later, you can open a Command Prompt and type robocopy /? to see everything it is capable of. The command I used to copy the .dll and .pdb files was the following:
robocopy "$(TargetDir)\" "$(SolutionDir)\Application\$(OutDir)\Modules" $(TargetName).dll $(TargetName).pdb

With that command entered, I saved and then built the project, but the build was not successful and displayed an error saying robocopy exited with a code of 1. Wondering why I received this error, I went to the application’s output directory to find, to my surprise, that the Modules folder had been created and both the .dll and .pdb files had been copied successfully to that folder. Searching around for an answer, I found this stackoverflow post explaining that, unlike every other command line tool that returns a 0 when successful (which is what Visual Studio expects), robocopy returns a 1 when successful. So after adding a check to exit with a code of 0 if the error level was 1, the project built successfully.
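That check amounts to one extra line in the post-build event after the robocopy call; something along these lines (the exact success codes you tolerate are up to you):

robocopy "$(TargetDir)\" "$(SolutionDir)\Application\$(OutDir)\Modules" $(TargetName).dll $(TargetName).pdb
if %ERRORLEVEL% EQU 1 exit 0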

Option 2: MSBuild Extension AfterBuild
On the stackoverflow post mentioned above, there was an answer that recommended using the Robocopy task in the MSBuild Extension Pack as an AfterBuild target. I found this idea very intriguing (especially since it appeared to handle the return-code-of-1 issue) and attempted to use it to accomplish the same copying task. After a few hours of working with it, I found that I could only get it functioning by installing the MSBuild Extension Pack 3.5 (even though I am on an x64 machine and building a .NET 4.0 project). I then edited my .csproj file and added the following code, which does the exact same thing as the post-build command, at the bottom of the file. After reloading and building the project, the files were copied to the Modules folder.

<Import Project="$(MSBuildExtensionsPath)\ExtensionPack\MSBuild.ExtensionPack.tasks"/>
<Target Name="AfterBuild">
	<ItemGroup>
		<OutputFiles Include="$(TargetName).dll"/>
		<OutputFiles Include="$(TargetName).pdb"/>
	</ItemGroup>
	<MSBuild.ExtensionPack.FileSystem.RoboCopy Source="$(TargetDir)\" Destination="$(SolutionDir)\Application\$(OutDir)\Modules" Files="@(OutputFiles)">
		<Output TaskParameter="ExitCode" PropertyName="Exit" />
		<Output TaskParameter="ReturnCode" PropertyName="Return" />
	</MSBuild.ExtensionPack.FileSystem.RoboCopy>
	<Message Text="ExitCode = $(Exit)"/>
	<Message Text="ReturnCode = $(Return)"/>
</Target>

The Decision
While Option 2 is very cool and clean looking, I decided to stick with Option 1 for the following reasons:

  1. Option 2 would require all the developers on the project to install the MSBuild Extension pack (not difficult at all but time consuming and annoying if your builds are failing and you didn’t know you needed the 3rd party pack)
  2. Writing the Post-build (Option 1) command was much quicker
  3. If any other developer is wondering how the files are being copied into the Modules folder (or needs to change it), with Option 1 they can quickly and easily look at (and change) the Post-build event in the properties window, but with Option 2 they have to know to go edit the .csproj file and then search it to find the AfterBuild target code

Visual Studio 11 Solution Upgrading

If you’re an early Visual Studio adopter like me you’ve probably gotten used to running into a similar problem every few years: upgrading your code. Although VS usually handles the job of converting files to work in the newer version, the big problem has usually been trying to go back to the old version. If you’re the only one working with the code that might not matter, but if you work on a team that has a mix of, for example, VS 2008 and VS 2010 Beta clients, the conversion process causes problems for one or the other.

Finally, in VS 11 this problem is being addressed. The ultimate goal is to be able to open a VS 2010 solution in VS 11 with minimal conversion, applying no breaking changes to the project or solution files that would prevent them from still opening directly in 2010. There will of course be some restrictions, like not upgrading to .NET 4.5, but in general it seems like a pretty reasonable goal.

To see how it’s working so far with the Developer Preview I tried upgrading a few of my own projects, including a 50+ project solution with lots of complications. The good news was that the upgrade process did succeed without preventing VS 2010 from opening the converted solutions that I tried. There were, however, some issues.