Getting Started with Azure Mobile Apps

Earlier this year I worked on an awesome project for IMSA. While the project was full of challenges, such as Xamarin.Forms, there was one part of the project that ran smoother than the rest: our Azure Mobile Service. Since completing that project, Azure has released a new product named Azure Mobile Apps, which is an even more comprehensive solution for creating Web APIs for mobile apps. The new name is a bit confusing, but rest assured Azure Mobile Apps does not refer to the actual app package running on a device.

Why?

Before Azure Mobile Apps and Azure Mobile Services, the work a developer had to perform to provide a simple API to their mobile app was quite extensive. Standing up a website with a RESTful API was only step 1. If you wanted to see metrics about how healthy your API was, you had to find a solution from another provider and integrate it manually. You had multiple places to manage your mobile app. Your database might have lived in Azure, but it wouldn't have been related to any grouping of services. Managing this world effectively became unreasonable and difficult very quickly.

Azure Mobile Apps provides all the necessary services to provide and support an API for your mobile app. It also groups all those services in one easy-to-manage location in the Azure Preview Portal. In my experience the provided services, and the grouping of those services, have provided critical time-saving and sanity-preserving assistance compared to previous solutions. This is especially true because of the awesome integration with Visual Studio.

What does it do?

Azure Mobile Apps provides a suite of Azure services used for mobile app development and support. You can read their details here. All of the services are grouped into a single Resource Group. This organization is awesome! It helps you comprehend and manage all of the services used by your mobile app effectively. The default services included are:

  • Web App (aka: A website which hosts Web API Controllers)
  • Application Insights (For the Web App)

You can also add any number of services to this Resource Group. For example, you'll probably add a database server and database to store the data for your mobile app. Adding these services allows you to leverage Entity Framework in your mobile app's backend, which is a really cool feature.

How much does it cost?

It would be silly for me to try to repeat the values quoted by Microsoft since Azure changes things quite frequently. You can find the official pricing here. Everyone has a different definition of “affordable”, so I’ll only say that in our experience here at InterKnowlogy this service has been worth every penny!

How do I set it up?

Go to the Preview Portal. In the bar on the left click the “+ New” button. Select the “Web + Mobile” category. Select the Mobile App offering. Fill out all the data. I suggest creating a new Resource Group specifically named for this Mobile App. In my demo I’m creating a Secret Santa app so my Resource Group name is SecretSanta and my Web App name is SecretSantaApp.

[screenshot]

Click the Create button and Azure will now create all the services mentioned before in a single Resource Group. When Azure notifies you that it’s done you’ll see something like this when you navigate to your Resource Group.

[screenshot]

You can modify what level of service you want to use for your Web App. From the Resource Group blade click your Web App which is the item with the blue sphere. You’ll see a pair of blades like this:

[screenshot]

On the right you’ll see Application Settings, where you can modify any AppSetting or ConnectionString that is specific to this deployment in Azure. These settings persist in Azure across publishes. This means that you can have AppSettings and ConnectionStrings for local development that do not get used in Azure.
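As a quick illustration, here's how server code typically reads those values (a sketch; "SomeAppSetting" is a made-up key, while MS_TableConnectionString is the connection string we'll meet later in this post):

using System.Configuration;

// Reads whichever value is in effect: the Web.config entry during local
// development, or the Azure portal override once the app is deployed.
string tableConn = ConfigurationManager
   .ConnectionStrings["MS_TableConnectionString"].ConnectionString;
string someSetting = ConfigurationManager.AppSettings["SomeAppSetting"]; // hypothetical key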

To change your Web App’s service plan, look under the Essentials section and click the link under the header “App Service plan/pricing tier”, which will open the following blade:

[screenshot]

Click the “Pricing tier” block to open the “Choose your pricing tier” blade for different plan tiers.

[screenshot]

In the image above you only see a few of the many options. For this demo I selected F1 Free which is a very limited plan, but it doesn’t cost anything which is nice for a demo or proof of concept. You can change your plan at any time. After selecting your desired plan click Select and your plan will be applied.

Now there is a bug in the link to Quickstart at the time of this writing. So at this point if you’re following along, close all the open blades using the ‘x’ button in the top right of the dark blue header.

Reopen your Resource Group and then open your Web App again. This time pay attention to the far right Quickstart blade which should look like this:

[screenshot]

Azure has provided us with an awesome set of getting started tools for a bunch of different platforms. For the sake of this demo we’ll stick with Windows C#. Click on the Windows C# option. You should now see:

[screenshot]

You’ll now notice that step 1 is to get Visual Studio 2015 installed. Microsoft offers the Community Edition straight up here in case you don’t have it yet. Step 2 allows you to download a starting point for your service in the cloud. Download this code and explore it. There is a lot going on, and we will not cover the source in this demo. The source is basically a Web API project where you can add your Web API Controllers and build it out like you would any other Web API project. Lastly, step 3 allows you to download an app that is ready to use with the Web API project you downloaded in step 2. This is an awesome starting point. The project in step 3 for Windows C# is a Windows 8.1 Universal App. If you want to do UWP and/or switch to another language you can do that just fine. At the top of this blade is a toggle button. Toggle it to Connect an Existing App and then follow the instructions to wire up the custom app you created to the Mobile App service in Azure.
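To give you a feel for the server code, here's a minimal sketch of what a custom table controller in that downloaded project might look like. This assumes the Microsoft.Azure.Mobile.Server SDK the quickstart project is built on; GiftItem and SecretSantaContext are invented names for this Secret Santa demo, not code from the download:

using System.Linq;
using System.Threading.Tasks;
using System.Web.Http.Controllers;
using Microsoft.Azure.Mobile.Server;

// A hypothetical entity; EntityData supplies Id, Version, CreatedAt, etc.
public class GiftItem : EntityData
{
   public string Description { get; set; }
   public string Recipient { get; set; }
}

// Exposed at /tables/giftitem, just like the TodoItem sample.
public class GiftItemController : TableController<GiftItem>
{
   protected override void Initialize( HttpControllerContext controllerContext )
   {
      base.Initialize( controllerContext );
      var context = new SecretSantaContext(); // the project's Entity Framework DbContext
      DomainManager = new EntityDomainManager<GiftItem>( context, Request );
   }

   public IQueryable<GiftItem> GetAllGiftItems()
   {
      return Query();
   }

   public Task<GiftItem> PostGiftItem( GiftItem item )
   {
      return InsertAsync( item );
   }
}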

Finally I want to walk you through getting the database set up. At the top of this blade you’ll notice a message stating you need a database in order to complete the quickstart. Click that banner. It will open a Data Connections blade. We’ll create a new data connection. Click the Add button. Follow the wizard to fill out all required data. Be sure to select “Create a new database”, not “Use an existing database”, if you want a new place to store your data. If you already have data that you will now leverage in this Mobile App, then point at your existing database. Also, make sure you’re aware which pricing tier you’re selecting for your database. The default is Standard, which may cost too much for most demos.

[screenshot]

After your database is created navigate back to your Resource Group. It should look something like this now:

[screenshot]

You’ll notice the two new items: the database server and the database itself. Now your Azure Mobile App is ready for use.

Let’s run it!

You’ve downloaded both the Server and the Mobile App. Before you can use the server app you need to deploy it. The sample code you downloaded is ready for upload with all the settings matching your Mobile App settings. Azure does not auto deploy the server code for you because you will always change it for real projects. Go ahead and deploy the project. Follow the instructions in Microsoft’s Tutorial under the heading “Publish the server project to Azure.”

Limitation Notice

In order for your app to access the database using the default settings, you need to modify your connection string named MS_TableConnectionString to use admin credentials, not user credentials. This is because Entity Framework code first requires permissions to create a schema in your database. Once the server code is deployed, go back to the Azure Portal and open the Web App again. Open the Settings blade and select Application Settings. Modify your connection string MS_TableConnectionString to use admin credentials.

[screenshot]
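For reference, an admin-style connection string has roughly this shape (the server, database, and login names here are placeholders, not values from the demo):

Server=tcp:secretsantaserver.database.windows.net,1433;Initial Catalog=SecretSantaDb;User ID=sqladminlogin;Password={your admin password};Encrypt=True;Connection Timeout=30;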

Save your changes. Then try out your API. You can try it out in the browser by hitting the URL for your Mobile App and then adding on /tables/todoitem. For example: http://secretsantaapp.azurewebsites.net/tables/todoitem. You should see JSON output similar to this:

[screenshot]
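In case the screenshot doesn't come through, the response is an array of TodoItem records, roughly this shape (values invented):

[
  {
    "id": "7a2c8f4e-0000-0000-0000-000000000000",
    "text": "First item",
    "complete": false
  }
]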

Once this is working you can run the demo app that you downloaded from Azure. The first load of the app should look like this:

[screenshot]

WOOT WOOT! Demo DONE! You’re now an expert!

The take away…

While it may have taken 15-30 minutes to walk through this demo, once you’re familiar with the creation process Azure Mobile Apps take very little time and effort to get up and running. I’m very grateful for this solution. Here at InterKnowlogy we have used Azure Mobile Apps on many different projects to support mobile apps on Windows, Android, and iOS. Maintenance has been very straightforward and the flexibility is great. It feels like such a small thing, but it has a profound impact on creating successful mobile app solutions. If you have any questions feel free to reach out on Twitter @dannydwarren or here in the comments.

How I met Alexa

On my second week at InterKnowlogy they gave me a great task :) working with Amazon’s Echo, a voice command device from Amazon.com that can answer your questions, play music, and control smart devices. This device responds to the name “Alexa” (you can configure it to respond to “Amazon” too). So my main job was to try to find out how Alexa works.

Alexa’s knowledge comes from her skills, which you can add at Amazon’s developer site. To access the site, you have to register, go to the Apps & Services section, and get started with the Alexa Skills Kit.

 

[screenshot]

Each skill has a Name, an Invocation Name (how users will interact with the service), a Version, and an Endpoint. The endpoint is the URL of the service that will provide all the info of your skill to Alexa. You can choose between two options for your endpoint: a Lambda ARN (Amazon Resource Name), which has to be developed in Java or Node.js, or your own service. The only thing Amazon requires is that it be an HTTPS server.

So I am going to explain how to do an Alexa skill in C# using your own server.

The technologies I used are:

  • ASP.NET Web API
  • Azure Websites

So, to create your Web API, follow these steps:

Create new project -> ASP.NET Web Application

[screenshot]

Choose the Web API option (from the ASP.NET 4.5.2 templates) and connect it to an Azure Web App.

[screenshot]

Then configure all your Azure Website settings with your credentials and you’re ready to code :)

 

[screenshot]

So what we want to do is create a controller with a POST method that will receive all the requests from Alexa.


[Route( "alexa/hello" )]
[HttpPost]
public async Task<HttpResponseMessage> SampleSession()
{
   var speechlet = new GetMatchesSpeechlet();
   return await speechlet.GetResponse( Request );
}

To understand all these requests I used AlexaSkillsKit, a NuGet package made by FreeBusy which you can find in this link. It’s pretty great, and it helps you understand how Alexa works. Once you install that package, create a class that derives from Speechlet (FreeBusy’s class) and override the following methods :)

  • OnSessionStarted

private static Logger Log = NLog.LogManager.GetCurrentClassLogger();

public override void OnSessionStarted(SessionStartedRequest request, Session session)
{
    Log.Info("OnSessionStarted requestId={0}, sessionId={1}", request.RequestId, session.SessionId);
}
  • OnLaunch

This method is called at the start of the application.


public override SpeechletResponse OnLaunch(LaunchRequest request, Session session)
{
    Log.Info("OnLaunch requestId={0}, sessionId={1}", request.RequestId, session.SessionId);
    return GetWelcomeResponse();
}

private SpeechletResponse GetWelcomeResponse()
{
    string speechOutput =
       "Welcome to the Interknowlogy's Yellowjackets app, How may I help you?";
    return BuildSpeechletResponse("Welcome", speechOutput, false);
}
  • OnIntent

This method identifies the intent Alexa needs: the API receives an Intent with a name, and that name is compared against the known intents to decide which action to take.


private const string NAME_SLOT = "name";

public override async Task<SpeechletResponse> OnIntent(IntentRequest request, Session session)
{
   Log.Info("OnIntent requestId={0}, sessionId={1}", request.RequestId, session.SessionId);

   Intent intent = request.Intent;
   string intentName = (intent != null) ? intent.Name : null;

   if ("LastScore".Equals(intentName))
   {
      // method that does some magic in my backend
      return await GetLastMatchResponse();
   }

   if ("GetPlayerLastScore".Equals(intentName))
   {
      Dictionary<string, Slot> slots = intent.Slots;
      Slot nameSlot = slots[NAME_SLOT];
      // method that does some magic in my backend
      return await GetLastMatchPlayerResponse(nameSlot.Value);
   }

   // Fall back to a reprompt for anything unrecognized
   // (alternatively: throw new SpeechletException("Invalid Intent");)
   return BuildSpeechletResponse("Hey", "What can I help you with?", false);
}

Intent names are preset at Amazon’s developer site with JSON; an example follows:

[screenshot]

On this page you preset an Intent Schema, which is in JSON format:


{
  "intents": [
    {
      "intent": "LastScore",
      "slots": []
    },
    {
      "intent": "GetPlayerLastScore",
      "slots": [
        {
          "name": "name",
          "type": "LITERAL"
        }
      ]
    }
  ]
}

This JSON describes two available intents, one named “LastScore” and the other “GetPlayerLastScore”; the latter receives a “name” slot that is text.

Now the question is… how do I define the sentences and parameters that the user of my Echo will say to Alexa? On the same page of the developer site there is a field that lets you provide sample utterances, which are the following:

 

[screenshot]
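If the screenshot is hard to read: sample utterances are plain lines that start with the intent name, and with the old LITERAL slot type an example value is embedded as {value|slotname}. A hedged reconstruction of what ours might have looked like:

LastScore what was the last ping pong score
GetPlayerLastScore what was the last ping pong score for {Kevin|name}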

Taking the last samples, the interaction with Alexa will be the following:

Alexa, ask Yellowjackets..

Welcome to the Interknowlogy’s Yellowjackets app, How may I help you?

What was the last ping pong score for Kevin?

…..

 

Build Responses

To build responses (in custom methods) you will need to build a Speechlet response that contains a speech response and can also contain an optional card: a graphic representation of Alexa’s response displayed in the Echo app, which is available online and on the following mobile OSes:

  • Fire OS 2.0 or higher
  • Android 4.0 or higher
  • iOS 7.0 or higher

The only cards available to devs (right now) are Simple cards, which contain only plain text. Cards with pictures are not open to non-Amazon devs.

[screenshot]

 

To create a Speechlet response we will be using the cool FreeBusy NuGet package’s SpeechletResponse class like this:


 private SpeechletResponse BuildSpeechletResponse(string title, string output, bool shouldEndSession)
 {
    // Create the Simple card content.
    SimpleCard card = new SimpleCard();
    card.Title = String.Format("YellowJackets - {0}", title);
    card.Subtitle = "YellowJackets - Sub Title";
    card.Content = String.Format("YellowJackets - {0}", output);

    PlainTextOutputSpeech speech = new PlainTextOutputSpeech();
    speech.Text = output;

    // Create the speechlet response.
    SpeechletResponse response = new SpeechletResponse();
    response.ShouldEndSession = shouldEndSession;
    response.OutputSpeech = speech;
    response.Card = card;
    return response;
 }

The next step is to complete the skill configuration at Amazon’s developer site, including the SSL certificate section; since our Web API is hosted on Azure, it is covered by Azure’s wildcard certificate, so we point the configuration at that domain.

You also need to enable test mode to try out your service and Alexa’s interaction on your Amazon Echo.

So, there it is, hope this blog helped you in your adventure with Alexa (Amazon Echo) :)

Crossing The Finish Line With IMSA

As Tim called out the other day, we recently went live with a brand new mobile experience for IMSA, and I had the privilege of leading the engineering team on the project here at IK. The scope and timeframe of the project were both ambitious: we delivered a brand-new, content-driven mobile app with live streaming audio and video, realtime in-race scoring results, custom push notifications and more, across all major platforms (iOS, Android, and Windows), with custom interfaces for both phone and tablet form factors – all in a development cycle of about twelve weeks. It goes without saying that the fantastic team of engineers here at IK are all rockstars, but without some cutting-edge development tools and great partnerships, this would have been impossible to get across the finish line on time:

  • Xamarin allowed our team to utilize a shared codebase in a single language (C#, which we happen to love) across all of our target platforms, enabling massive code reuse and rapid development of (effectively) six different apps all at once.
  • Working closely with the team at Xamarin enabled us to leverage Xamarin.Forms to unlock even further code-sharing than would have been otherwise possible, building whole sections of the presentation layer in a single, cross-platform XAML-based UI framework.
  • On the server side, the continued world-class work on Azure by our partners at Microsoft made utilizing Mobile App Service (née Azure Mobile Services) a no-brainer. The ability to scale smoothly with live race-day traffic, the persistent uptime in the face of tens of thousands of concurrent users making millions of calls per day, and the ease of implementation across all three platforms all combined to save us countless hours of development time versus a conventional DIY approach to the server layer.
  • Last but not least, being able to leverage Visual Studio’s best-of-breed suite of developer tools was essential to the truly heroic amounts of productivity and output of our engineering team at crunch time. And Visual Studio Online enabled the Project Management team and myself to organize features and tasks, track bugs, and keep tabs on our progress throughout the hectic pace of a fast development cycle.

The final result of this marriage between cutting-edge cross-platform technology and an awesome team of developers is a brand new app experience that’s available on every major platform, phone or tablet, and this is just the beginning – we have lots of great new features in store for IMSA fans worldwide. I’ll be following up with a couple more technical posts about particular challenges we faced and how we overcame them, and be sure to check out the next IMSA event in Detroit the weekend of May 29-30; I know I’ll be streaming the live coverage from my phone!

IMSA Mobile Apps – 3 – Scaling With Azure

When taking your presence to mobile, there is always a scalability conversation that quickly occurs. This is especially true when the systems you need to access are on-premise. Your on-premise systems may never have been designed for the user load you would add with mobile apps. Additionally, your on-premise systems may not even be exposed to the internet, introducing a whole set of security complexities that need to be solved. In the case of IMSA, we are relying on services already exposed to the internet, so that was one less set of issues to manage.

Through our build experiences with Azure projects such as CNN, we knew several considerations would apply. The services referenced below are those supplied directly from IMSA:

  • How many users would concurrently be accessing the services through the mobile apps?
  • What is the latency for the service calls?
  • How much effort is it for the service to generate the data?
  • How often are the services taken down for maintenance? For how long?
  • Will the services change over time as backend systems change?

These are relatively simple questions, but they serve to shape the approach you take to scale. To provide the best possible mobile experience, we envisioned a brokering capability to be served by Azure. All mobile apps across iOS, Android, and Universal Apps would access this brokering layer for data access. This brokering layer is caching data from IMSA services for fast access.

There is immense flexibility in how you shape solutions in Azure for scale, particularly around caching. Ultimately the purpose of data caching is to minimize the number of trips to the backend services. There can be instances where the backend services are so expensive in time and resources to call that the architecture must do everything possible to keep the user from paying the price of waiting for that call to complete. In this case, Azure can be set up to actively keep its cache fresh and minimize the number of calls to the backend services. Mobile apps would then always have a fast and fluid experience and never feel slow, and a company would not have to worry about putting massive resources into scaling up its backend services.

Fortunately, this was not the case for us and the IMSA backend services. The backend services are responsive and the data is small per service call. Also, it is not expensive for the backend services to produce the data. Even in this case, there is benefit to leveraging Azure. IMSA race events are at key moments in time, and traffic spikes heavily around each event. It is not beneficial to have hardware lying around mostly idle 90%+ of the time waiting for the spike in usage. Additionally, the IMSA services could be taken down briefly for maintenance. Using Azure to broker calls still has merit because capacity can be scaled up and down around the IMSA events. There will be minimal additional load put on the backend services because Azure is doing most of the work of serving data to the mobile apps.

The approach we took for IMSA relied on a combination of HTTP output caching (via ETag) and Azure Redis Cache, all within Azure Mobile Services. Basically, when a mobile app makes a request to an Azure service for the first time, no ETag is present because our services have not yet generated one. However, we have the URL and the parameters passed in, which together form a unique key to the requested data. Redis cache is checked to see if the data is present. If the data is present and not expired, the cached data from Redis is returned. If the data is not present or is expired in Redis, then Azure makes the request into the backend IMSA services, puts the response into the cache, and returns it to the calling mobile app. An ETag is generated with each response, so if the mobile app requests the same data again, that ETag is supplied. This informs our Azure services that the calling mobile app already has data but is not sure whether that data is still valid. The benefit of supplying the ETag is that we can check whether or not it has expired, meaning the related data in cache has expired. If it has not expired, an HTTP 304 is returned, which is a much lighter-weight response than returning the cached data.
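Here is a simplified sketch of that flow as a single helper, assuming StackExchange.Redis; the cache host, the 30-second expiry, and the fetchFromImsa delegate are invented stand-ins, and the real service has more moving parts:

using System;
using System.Linq;
using System.Net;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;
using StackExchange.Redis;

public class CachedImsaProxy
{
   private static readonly ConnectionMultiplexer Redis =
      ConnectionMultiplexer.Connect( "contoso.redis.cache.windows.net,ssl=true,password=..." );

   public async Task<HttpResponseMessage> GetAsync( HttpRequestMessage request,
      Func<Task<string>> fetchFromImsa )
   {
      IDatabase cache = Redis.GetDatabase();
      string key = request.RequestUri.PathAndQuery; // URL + parameters form the unique key

      string body = await cache.StringGetAsync( key );
      string etag = await cache.StringGetAsync( key + ":etag" );

      // The caller's ETag still matches an unexpired cache entry: 304, no payload.
      if ( etag != null && request.Headers.IfNoneMatch.Any( t => t.Tag == etag ) )
         return new HttpResponseMessage( HttpStatusCode.NotModified );

      if ( body == null || etag == null ) // cache miss or expired: full trip to IMSA
      {
         body = await fetchFromImsa();
         etag = "\"" + Guid.NewGuid().ToString( "N" ) + "\"";
         TimeSpan ttl = TimeSpan.FromSeconds( 30 ); // expiry tuned per feed
         await cache.StringSetAsync( key, body, ttl );
         await cache.StringSetAsync( key + ":etag", etag, ttl );
      }

      var response = new HttpResponseMessage( HttpStatusCode.OK )
      {
         Content = new StringContent( body, Encoding.UTF8, "application/json" )
      };
      response.Headers.ETag = new EntityTagHeaderValue( etag );
      return response;
   }
}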

There is a downside to this approach. When simultaneous requests are made for the exact same data (based on the URL and the parameters passed in) at the exact same moment, each request could do the full trip to the backend IMSA services. If IMSA had millions of users during each event, we would prevent this by doing data locking within Redis, but they do not, so the extra engineering to prevent it is not warranted.

Through this technique, we have set ourselves up to be prepared for tens of thousands of new users at each event without bringing the IMSA services to their knees.

IMSA Mobile Apps – 2 – Planning For Maximizing Code Re-Use Across iOS, Android, and Universal Apps

While we were busy thinking through the interaction design elements of the new IMSA mobile apps, we knew we were going to have to build six apps (iOS, Android, and Universal Apps, each for both phone and tablet). The app architecture we chose to follow for this is Model-View-ViewModel (MVVM). Our mission was to maximize the amount of code shared across all implementations of the app. The more code sharing we could do, the less code we would have to develop for each platform. Less code means less time to develop and less to test, making the aggressive schedule more achievable.

The Model layer contains the business logic and data that drive the IMSA app. Data is served through a scalable cloud infrastructure constructed for the mobile apps. Regardless of mobile OS, the business logic and data remain the same. How we access and retrieve the data in the cloud also remains the same. These layers are devoid of any user interface elements and are a logical candidate for re-use across all the mobile operating systems. Perfect – one layer, written once. But we want more.

We suspected that the View layer would be so unique across mobile operating systems that the ViewModel layer would not be re-usable. The ViewModel layer is responsible for binding the Model layer (the business logic) to a View (the user interface). Remember, we are talking about code sharing across iOS, Android, and Universal Apps – these have to be so different that writing a consistent and shareable ViewModel layer would not be possible, right? Wrong! After some initial prototyping we were pleasantly surprised. The path we have chosen is going to allow us to use the same code in the ViewModel layer across all operating systems.
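To make that concrete, here is a trimmed-down sketch of the kind of ViewModel that can live in the shared code; the class, service, and property names are invented for illustration, not lifted from the IMSA source:

using System.Collections.ObjectModel;
using System.ComponentModel;
using System.Threading.Tasks;

public class RaceEvent { public string Name { get; set; } }

public interface IScheduleService
{
   Task<RaceEvent[]> GetUpcomingEventsAsync();
}

// Lives in the shared project; each platform's View binds to it unchanged.
public class ScheduleViewModel : INotifyPropertyChanged
{
   private readonly IScheduleService _service; // Model-layer service, also shared
   private bool _isLoading;

   public ScheduleViewModel( IScheduleService service )
   {
      _service = service;
      Events = new ObservableCollection<RaceEvent>();
   }

   public ObservableCollection<RaceEvent> Events { get; private set; }

   public bool IsLoading
   {
      get { return _isLoading; }
      set
      {
         _isLoading = value;
         var handler = PropertyChanged;
         if ( handler != null )
            handler( this, new PropertyChangedEventArgs( "IsLoading" ) );
      }
   }

   public async Task LoadAsync()
   {
      IsLoading = true;
      foreach ( var raceEvent in await _service.GetUpcomingEventsAsync() )
         Events.Add( raceEvent );
      IsLoading = false;
   }

   public event PropertyChangedEventHandler PropertyChanged;
}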

From our early calculations, thanks to Visual Studio and Xamarin, we are predicting about 75% code re-use (of the non-generated code) across all the implementations! Excellent news for the developers and the project manager. We’ll dive into code examples in an upcoming blog, but next we’ll discuss our approach with Azure. Also, this video has additional information on code re-use with Xamarin.

IMSA Mobile Apps – 1 – Architecture Design Session

The IMSA Mobile Apps project is currently in flight and we are actively working on building this cross platform/cross OS solution. This article is the first in a series of blogs discussing the details of the project, and we’ll be actively trying to catch up to the current day as we are busy building towards the Laguna Seca race.

Back in the first week of December 2014 we flew out to Florida to visit the IMSA team with Microsoft in Daytona Beach. Microsoft was hosting an Architecture Design Session, or ADS for short, to flesh out features of the solution. It quickly became apparent that the solution was layered and complex. Many features discussed have become part of a longer product roadmap, as IMSA is committed to providing the best experience possible to their fans. Also, it should be noted that, as in all ideation sessions, some ideas discussed were put deep down in the feature backlog.

I am certain that some would ask why IMSA involved Microsoft. This is a mobile app – what does Microsoft know about building mobile apps across iOS and Android? Well, it turns out quite a lot. From past projects, we already knew the tooling we get with Visual Studio and Xamarin allows us to build amazing mobile apps across all platforms and OS’s. The other side of the coin is the plumbing we get to integrate into cloud infrastructure. This app needed to scale across the huge IMSA fan base during live events. From past projects we knew how effective we could be building scalable mobile apps with Azure. So to IMSA and to us, involving Microsoft made perfect sense.

In the ADS, some of the interesting features started popping up:

The app would need to change shape depending on whether or not a race is live. We thought treating the app almost like the NFL Now app would be interesting: there could always be something interesting to watch on our app, regardless of whether an event is live.

IMSA radio is a live audio stream. The app would need to deliver this feed just like other integrated audio content on your device. So turning on IMSA radio, putting your headphones on, and then putting your device in your pocket should be as natural as playing music.

Using the device’s GPS, if the race fan is at the event the app should respond differently than if the person were elsewhere. When you are at an event, what you are interested in is different than when you are not.

Telemetry information from the cars. It would be just awesome to watch your favorite car at the event or at home and see all the g-forces they are pulling when they are flying around the corners.

IMSA’s services for content and structured information would not scale to mobile demand. A cloud infrastructure would need to be placed in front of the IMSA services so content could be cached and served more quickly.

 

After the ADS we went home and decomposed all the features while looking at the schedule. We needed to pick a race event to target for deployment. We had a lot of homework to determine our approach. In the next blog we will discuss how we planned to maximize code re-use across all platforms and OS’s.

What is CORS?

There are lots of instances where an app needs to make a GET/POST request to another domain (a domain different from where the resource originated). Once the web app starts the request, the response will throw an “Access-Control-Allow-Origin” error. Then you ask yourself: what now?

One solution is CORS (Cross-Origin Resource Sharing), which allows resources (like JavaScript) to make cross-origin requests.
Here is an example of how to add a CORS rule to allow requests to Azure storage tables using the Azure SDK.

1. Build the connection string

string connectionString = "DefaultEndpointsProtocol=https;" +
    "AccountName={account name/storage name};" +
    "AccountKey={PrimaryKey|SecondaryKey}";

2. Create the CloudTableClient

CloudStorageAccount storageAccount = CloudStorageAccount.Parse( connectionString );
CloudTableClient client = storageAccount.CreateCloudTableClient();

3. Add CORS Rule
("*" below acts as a wildcard)

CorsRule corsRule = new CorsRule()
{
  AllowedHeaders = new List<string> { "*" },
  // Allowing every verb here; since we'll only be calling Query Tables,
  // CorsHttpMethods.Get alone would suffice.
  AllowedMethods = CorsHttpMethods.Connect | CorsHttpMethods.Delete | CorsHttpMethods.Get | CorsHttpMethods.Head | CorsHttpMethods.Merge
	| CorsHttpMethods.Options | CorsHttpMethods.Post | CorsHttpMethods.Put | CorsHttpMethods.Trace,
  AllowedOrigins = new List<string> { "*" }, // in production, restrict this to the URL of our application
  ExposedHeaders = new List<string> { "*" },
  MaxAgeInSeconds = 1 * 60 * 60, // let the browser cache the preflight response for an hour
};

4. Add rules to client

ServiceProperties serviceProperties = client.GetServiceProperties();
CorsProperties corsSettings = serviceProperties.Cors;
corsSettings.CorsRules.Add( corsRule );
//Save the rule
client.SetServiceProperties( serviceProperties );
  • After step 4, there should already be a CORS rule connected to the account name.
    To double-check which CORS rules exist for that account, we can use:

    ServiceProperties serviceProperties = client.GetServiceProperties();
    CorsProperties corsSettings = serviceProperties.Cors;
    

NOTE: If we need to add a CORS rule for blobs, we just swap out CreateCloudTableClient():
CloudBlobClient client = storageAccount.CreateCloudBlobClient();

Azure Table Storage Exceptions with Multiple Table Entity Schemas

 

I’ve been messing with Azure Table Storage recently and needed to create a somewhat nontrivial data model to try some things. 

This data model includes patients, patient addresses (email, IM, postal, phone, etc.) and patient events.  I also wanted to store all of this data in the same table so I had table entities with differing schemas in the same table.

I then wrote some code to create an array of patients, and for each patient I added the patient record and a random number of different patient addresses to the table.

The more I work with Windows Azure the more I come to the conclusion that debugging Windows Azure code is like a doctor treating a patient: make random changes and see if that fixes the problem. Trying to figure out what the problem is from the exception information is like looking into a crystal ball: it shows nothing beyond what you can imagine.

Executing the code results in what appears to be one of the most common exceptions there is when working with Windows Azure:

{Microsoft.WindowsAzure.StorageClient.StorageExtendedErrorInformation}
    AdditionalDetails: null
    ErrorCode: "InvalidInput"
    ErrorMessage: "0:One of the request inputs is not valid."

 

So my next issue was that I had added multiple records, so which one was causing the problem? To figure that out you need to enumerate the DataServiceRequestException.Response property, which gives you an IEnumerable<ChangeOperationResponse> collection. From my experience so far, there only ever appears to be one entity in this collection no matter how many problem records you create. What you look for is a header named “Content-ID” on the OperationResponse. Its value is the 1-based index of the record among those added before calling TableContext.SaveChangesWithRetries(SaveChangesOptions.Batch).
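Here's a sketch of digging that header out (assumes using System.Linq and the System.Data.Services.Client types that SaveChangesWithRetries builds on):

try
{
   tableContext.SaveChangesWithRetries( SaveChangesOptions.Batch );
}
catch ( DataServiceRequestException ex )
{
   // DataServiceResponse enumerates OperationResponse entries; batch failures
   // surface as ChangeOperationResponse.
   foreach ( ChangeOperationResponse op in ex.Response.OfType<ChangeOperationResponse>() )
   {
      string contentId;
      if ( op.Headers.TryGetValue( "Content-ID", out contentId ) )
      {
         // 1-based index of the offending entity within the batch
         Console.WriteLine( "Entity #{0} failed with status {1}", contentId, op.StatusCode );
      }
   }
}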

You can see this a lot better if you use Fiddler to view the input/output of the batch request.  See http://learningbyfailing.com/2009/12/using-fiddler-with-azure-devstorage/ to see how to get Fiddler to display the output.

In my case it kept pointing to the second record that I added (the first was the patient record and the second was an address record). If I saved the patient record by itself it worked, and if I saved the address records in a separate batch that worked too. So in spite of the exception pretty much not telling me what the problem was, I came to the conclusion that I can’t mix table entity schemas in a single batch. Apparently this is a restriction of development storage but it will work when using cloud storage.

It’s too bad the exception didn’t tell me this.

Azure 101 – Diagnostics via Configuration

In a previous post I talked about how diagnostics are turned ON by default for Azure roles, and that you should turn them OFF if you don’t want to incur a ton of Azure storage transaction charges.  I finally spent some time diving into the various configuration settings and now understand how to leave diagnostics ON and adjust their configuration throughout the lifetime of the running Azure instances.  This way, you don’t persist any log messages during “normal” operation, but can then ratchet up the settings to debug an issue and turn them back off afterwards.

An alternative to using the diagnostic APIs in your role’s OnStart method (and thereby hardcoding the settings) is to use a configuration file that the Azure diagnostics runtime will interrogate on a given interval while your instance is running.  If you make a change to the config, the settings take effect the next time the runtime checks the config file.  There is a lot of info about how to author the diagnostics.wadcfg file and where to put it so that it gets deployed correctly. Here is a lot more info, including the order in which config settings are found. What I could not find, however, was information about where the config file gets deployed and how to change it at runtime.

First off, contrary to what I told you in the previous post, you need to start with diagnostics turned ON.  This deploys a default diagnostics config file to a container in your Azure storage account called wad-control-container/(deploymentId)/instanceRoleFile (example shown below is for a local compute instance – when deployed to Azure, there will be GUID based folders).  By default, the various diagnostic sections in the config file will each have a default transfer period of 0, which (I think) means “don’t persist these diagnostics (logs, crash dumps, event logs, etc) in my storage account”.  If you follow the links above to create a diagnostics.wadcfg file, then any sections in that file override the defaults when your role is deployed.  Additionally, if you have any code in your OnStart method that changes diagnostics settings, they will also be reflected in the runtime config file.  Basically the resulting config file deployed to storage is the merged result of configs and code-based settings you have in your role.

Start with a source diagnostics.wadcfg file as shown here; it contains only a single Logs element:

[screenshot]
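If the image doesn't come through, the file looks roughly like this; a reconstruction of the wadcfg shape from memory, not my exact file:

<?xml version="1.0" encoding="utf-8"?>
<DiagnosticMonitorConfiguration
    xmlns="http://schemas.microsoft.com/ServiceHosting/2010/10/DiagnosticsConfiguration"
    configurationChangePollInterval="PT1M">
  <Logs bufferQuotaInMB="0"
        scheduledTransferPeriod="PT1M"
        scheduledTransferLogLevelFilter="Verbose" />
</DiagnosticMonitorConfiguration>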

The resulting configuration file deployed to my storage account looks like this (shown here using Azure Storage Explorer). Notice my Logs section got merged with the other default sections (with transfer periods of 0):

[screenshot]

In my sample app, I have a link that causes my controller to write a couple Trace.TraceXXX messages (one warning and one informational).  With the setup above, I click on the link, causing a few trace messages, and after a minute passes by, I check the WADLogsTable in table storage and see my trace messages (the filter is set to Verbose, so I see both Warning and Informational messages).

[screenshot]

Now I can upload a new version of the configuration file (keeping the name the same); this time it has the Logs section filter set to Warning.

[screenshot]

Cause a few more of the same trace messages, wait a minute, and check the WADLogsTable.  This time we only see warnings (the informational messages have been filtered out based on our uploaded config file settings).

[screenshot]

To summarize, this is probably the best way to configure diagnostics since it’s outside of your application code.  You can upload a new configuration file any time and the runtime will adjust to your new settings.  (I should mention that the runtime will poll for configuration changes at an interval based on the value you set in the configuration file).

 
 
 

Remember that Azure Tables have limited property datatype support

Recently I threw some code together to add objects into an Azure table.  I used the class:

	[DataServiceKey("PartitionKey", "RowKey")]
	public class OrderMessage : TableServiceEntity
	{
		public DateTime OrderDate { get; set; }
		public string CustomerName { get; set; }
		public string CreditCard { get; set; }
		public int Quantity { get; set; }
		public decimal CostEach { get; set; }
	}

Upon adding the data to the table using:

	TableServiceContext tableContext = connection.TableClient.GetDataServiceContext();
	tableContext.AddObject(connection.OrderTableName, message);
	DataServiceResponse response = tableContext.SaveChangesWithRetries();

I received the error:

<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<error xmlns="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata">
  <code>InvalidInput</code>
  <message xml:lang="en-US">One of the request inputs is not valid.</message>
</error>

 

After wasting some time looking at help and Googling, I was skimming some documentation on tables and it happened to list the supported property types for Azure tables.  I knew that they had limited support, but not until I looked at that list did it occur to me that I was using the unsupported datatype ‘decimal’.  Modifying the class so that CostEach was of type ‘double’ resolved my problem.  (For reference, tables support only string, bool, DateTime, double, GUID, int, long, and byte[].)
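For the record, the fixed class is simply:

	[DataServiceKey("PartitionKey", "RowKey")]
	public class OrderMessage : TableServiceEntity
	{
		public DateTime OrderDate { get; set; }
		public string CustomerName { get; set; }
		public string CreditCard { get; set; }
		public int Quantity { get; set; }
		public double CostEach { get; set; }	// was decimal, which table storage does not support
	}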

It sure would be nice if the error were a little more explicit.  I’m sure that somewhere in the Azure code it knows what happened.  I also find it interesting that rather than returning information in the DataServiceResponse it throws an exception.  I don’t see this ability to throw exceptions in the documentation; in fact, the documentation says that the return value is:

A DataServiceResponse that contains status, headers, and errors that result from the call to SaveChanges.

Oh well, I guess somebody kinda forgot to update their XML comments on the method with:

/// <exception cref="System.Data.Services.Client.DataServiceClientException">A stealth exception that we won't tell anybody about</exception>

More than once I’ve seen a reminder on blogs to make sure you only use the supported data types on your table entities.   Here’s another reminder for you and *bonk* me!