About the Author

Dan Hanan is a lead software engineer at InterKnowlogy, where he works on a wide variety of software projects related to the Microsoft tech stack. At the center of his universe are .NET, C#, XAML-based technologies such as WPF, Silverlight, Win Phone 7, and Surface, as well as non-UI related bits involving WCF, LINQ, and SQL. In his spare tech-time, he is venturing outside the MS world into the depths of the Android OS. Come back once in a while to check out Dan's random collection of technology thoughts… better yet, subscribe to the RSS feed!

The Real-Time Web With SignalR

Today for RECESS I looked into SignalR.  The SignalR site (which currently is just the github repository) describes it as an “async signaling library for ASP.NET to help build real-time, multi-user interactive web applications.”  What does this mean to you and me?  This library allows you, in just a few lines of code, to communicate in real-time amongst browsers that are hitting your site.

Scott Hanselman does a good job of summarizing what SignalR does and how we got here, so I won’t repeat it here.

I followed the super quick sample – a browser based chat application as described in Scott’s post.  I ran out of time before I could get the lower level connection working, but the higher level “Hub” based connection works great.

It’s super cool to see it working – I have 3 different browsers (IE, Chrome, Firefox) in the chat together.  When I enter a chat message in one, the other two receive the message instantaneously.

    $("#broadcast").click(function () {
      // send() is the method on the ASP.NET Hub class running on the server
      chat.send($('#msg').val());
    });

The use of dynamic objects in the ASP.NET implementation of the Hub allows the server to call any method on the connected clients – as long as that function exists in the client side JavaScript.

    public class Chat : Hub
    {
        public void Send( string message )
        {
            // addMessage is a client side javascript function
            Clients.addMessage(message);  
        }
    }
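For completeness, here’s roughly how the client side wires everything together (a sketch based on the sample – the “messages” list element and the generated proxy script from signalr/hubs are assumptions on my part):

    $(function () {
      // proxy generated by the signalr/hubs script for the server-side Chat hub
      var chat = $.connection.chat;

      // addMessage() is what the server invokes via Clients.addMessage(...)
      chat.addMessage = function (message) {
        $('#messages').append('<li>' + message + '</li>');
      };

      // open the persistent connection to the server
      $.connection.hub.start();
    });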

Here are a couple of shots of IE Developer Tools (F12) when running the chat client.

Waiting for a response on the first call to signalr/connect:

[Screenshot: F12 network trace – the first signalr/connect request, waiting for a response]

Got the response from the first connect (a message was broadcast to all the clients), and the client now turns right around and reconnects to wait for the next “reply” from the server:

[Screenshot: F12 network trace – response received, and a new connect request is issued immediately]


Azure 101 – Diagnostics via Configuration

In a previous post I talked about how diagnostics by default are turned ON for Azure roles, and that you should turn them OFF if you don’t want to incur a ton of Azure storage transaction charges.  I finally spent some time diving into the various configuration settings and now understand how to leave diagnostics ON, and adjust their configuration throughout the lifetime of the running Azure instances.  This way, you don’t persist any log messages during “normal” operation, but you can ratchet the settings up to debug an issue, then back down when you’re done.

An alternative to using the diagnostic APIs in your role’s OnStart method and thereby hardcoding the settings is to use a configuration file that the Azure diagnostics runtime will interrogate on a given interval while your instance is running.  If you make a change to the config, the settings take effect the next time the runtime checks the config file.  There is a lot of info about how to author the diagnostics.wadcfg file and where to put it so that it gets deployed correctly. Here is a lot more info, including the order in which config settings are found. What I could not find however, was information about where the config file gets deployed, and how to change it at runtime.
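For contrast, the code-based approach that the config file replaces looks something like this (a minimal sketch against the SDK 1.x diagnostics API, run from your RoleEntryPoint – the Warning filter and one minute transfer period are just example values):

    // requires a reference to Microsoft.WindowsAzure.Diagnostics
    public override bool OnStart()
    {
        // hardcoded settings -- exactly what the wadcfg approach avoids
        var config = DiagnosticMonitor.GetDefaultInitialConfiguration();

        // persist only warnings and above, once per minute
        config.Logs.ScheduledTransferLogLevelFilter = LogLevel.Warning;
        config.Logs.ScheduledTransferPeriod = TimeSpan.FromMinutes( 1 );

        DiagnosticMonitor.Start( "Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", config );

        return base.OnStart();
    }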

First off, contrary to what I told you in the previous post, you need to start with diagnostics turned ON.  This deploys a default diagnostics config file to a container in your Azure storage account called wad-control-container/(deploymentId)/instanceRoleFile (example shown below is for a local compute instance – when deployed to Azure, there will be GUID based folders).  By default, the various diagnostic sections in the config file will each have a default transfer period of 0, which (I think) means “don’t persist these diagnostics (logs, crash dumps, event logs, etc) in my storage account”.  If you follow the links above to create a diagnostics.wadcfg file, then any sections in that file override the defaults when your role is deployed.  Additionally, if you have any code in your OnStart method that changes diagnostics settings, they will also be reflected in the runtime config file.  Basically the resulting config file deployed to storage is the merged result of configs and code-based settings you have in your role.

With a source diagnostics.wadcfg file as shown here (it contains only a single Logs element),

[Screenshot: source diagnostics.wadcfg with a single Logs element]

the resulting configuration file deployed to my storage account looks like this (shown here using Azure Storage Explorer). Notice my Logs section got merged with the other default sections (with transfer periods of 0):

[Screenshot: the merged config file in wad-control-container, viewed in Azure Storage Explorer]
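For reference, the source wadcfg above boils down to something like this in text form (a sketch against the published wadcfg schema – the poll interval, quota, and filter values are illustrative):

  <DiagnosticMonitorConfiguration
      xmlns="http://schemas.microsoft.com/ServiceHosting/2010/10/DiagnosticsConfiguration"
      configurationChangePollInterval="PT1M">
    <Logs bufferQuotaInMB="0"
          scheduledTransferPeriod="PT1M"
          scheduledTransferLogLevelFilter="Verbose" />
  </DiagnosticMonitorConfiguration>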

In my sample app, I have a link that causes my controller to write a couple Trace.TraceXXX messages (one warning and one informational).  With the setup above, I click on the link, causing a few trace messages, and after a minute passes by, I check the WADLogsTable in table storage and see my trace messages (the filter is set to Verbose, so I see both Warning and Informational messages).

[Screenshot: WADLogsTable showing both Warning and Informational messages]
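The controller action behind that link is nothing fancier than a couple of Trace calls – a hypothetical sketch, since the real controller isn’t shown here:

    // System.Diagnostics.Trace, called from a plain MVC controller action
    public ActionResult MakeNoise()
    {
        // one message at each level, to exercise the log filter
        Trace.TraceWarning( "warning: something looks fishy" );
        Trace.TraceInformation( "informational: something happened" );
        return RedirectToAction( "Index" );
    }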

Now I can upload a new version of the configuration file (keeping the name the same); this time the Logs section filter is set to Warning.

[Screenshot: the updated config file with the Logs filter set to Warning]

Cause a few more of the same trace messages to get created, wait a minute, and check the WADLogsTable.  This time we only see warnings (the “informational” messages have been filtered out based on our uploaded config file settings).

[Screenshot: WADLogsTable showing only Warning messages]

To summarize, this is probably the best way to configure diagnostics since it’s outside of your application code.  You can upload a new configuration file any time and the runtime will adjust to your new settings.  (I should mention that the runtime will poll for configuration changes at an interval based on the value you set in the configuration file).


ASP.NET Universal Providers

Earlier this month, I posted about how the ASP.NET membership providers are creating the required database schema for me automagically when I first hit the site.  Here is a quick update to that statement now that I more thoroughly understand what’s going on.

Scott Hanselman does a great job introducing us to the ASP.NET Universal Providers for Session, Membership, Roles and User Profile so I won’t repeat it here.  What his article DOESN’T tell you is that the Universal Providers are the default for a new ASP.NET MVC3 project (and that they’re not yet supported on Azure).  I hadn’t touched ASP.NET or MVC for multiple years, so I just went along quietly and created a new project, pointed my connection string at a SQL Server and things were all working.  It wasn’t until I published the site to Azure that things fell apart.

The Universal Providers (“DefaultMembershipProvider”) are referenced throughout the web.config for all the different pieces of membership, and here you see it set as the default provider (the one that membership code in the site will look for and use).

  <membership defaultProvider="DefaultMembershipProvider">
    <providers>
      <clear />
      <add name="AspNetSqlMembershipProvider"
            type="System.Web.Security.SqlMembershipProvider"
            connectionStringName="ApplicationServices"
            enablePasswordRetrieval="false"
            enablePasswordReset="true"
            requiresQuestionAndAnswer="false"
            requiresUniqueEmail="false"
            maxInvalidPasswordAttempts="5"
            minRequiredPasswordLength="6"
            minRequiredNonalphanumericCharacters="0"
            passwordAttemptWindow="10"
            applicationName="/" />
      <add name="DefaultMembershipProvider"
            type="System.Web.Providers.DefaultMembershipProvider, 
                   System.Web.Providers, Version=1.0.0.0, Culture=neutral,
                   PublicKeyToken=31bf3856ad364e35"
            connectionStringName="DefaultConnection"
            enablePasswordRetrieval="false"
            enablePasswordReset="true"
            requiresQuestionAndAnswer="false"
            requiresUniqueEmail="false"
            maxInvalidPasswordAttempts="5"
            minRequiredPasswordLength="6"
            minRequiredNonalphanumericCharacters="0"
            passwordAttemptWindow="10"
            applicationName="/" />
    </providers>
  </membership>

This works fine as long as we’re developing on our local machines (where the Universal Providers are installed), and even when we’re hitting the SQL Azure database from our local machine (the provider is local, the database is remote – so the code can create the SQL Azure compatible schema on the fly regardless of the DB location).  When the site is deployed to Azure, where the Universal Providers are not installed, you get an error: Unable to find the requested .Net Framework Data Provider. It may not be installed.

It took forever to figure out that this was the problem, and 1 minute to fix it – just switch web.config to use the SQL Providers that are already configured, just not as the default.

  <membership defaultProvider="AspNetSqlMembershipProvider">

The bottom line: the Universal Providers are not yet available on Azure servers, so you have to go with the legacy SQL Providers.  These providers, as always, require you to run the ASPNET_REGSQL tool to create the required database schema before you hit the site.

  • Universal Providers: create their required DB schema on first use, do not have the aspnet_xxx prefix on the tables, and do not use any views or stored procedures.
  • SQL Providers: require you to run ASPNET_REGSQL before first use, the tables are namespaced with “aspnet_” in front, and there are views and stored procedures that go along with the tables (all created by the ASPNET_REGSQL tool – see the sample invocation below).
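Here’s a sample invocation (hypothetical values – -S is your server, -E uses integrated security, -A mr adds the membership and role objects, -d names the database):

    aspnet_regsql.exe -S .\SQLEXPRESS -E -A mr -d MembershipTest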


Azure 101 – Billing (when 1 minute equals 5 hours and 2 might equal 10)

I thought I’d start a blog series all about Azure – things I’ve learned while getting a few web sites up and running in the (Microsoft) cloud.  First up – billing.  Microsoft offers a “free” 90-day trial to get your feet wet, but BE CAREFUL – it may not stay free, especially if you quickly go over your quotas and start getting charged.

A lot of this information is available deep in the dark depths of the Azure SDK help, and you might just tell me to RTFM, but even as a Microsoft support engineer admitted to me:  “there is WAY too much documentation out there, it’s impossible to find anything”.  My thoughts exactly, and this is the reason for this quick post to summarize the information.

There are multiple “billing meters” used to charge you for your Azure usage:  Compute Time, Database Size, Storage Amount, Storage Transactions, Data Transfer (in and out), and a couple others.  I will focus on the two we have found to be the most misunderstood, and the ones whose free quota you will most likely blow past if you don’t know what to watch for:  Compute Time and Storage Transactions.

Rule #1 – Compute Hours are NOT just the time your site is WORKING on a request

My first assumption was that if my site is hosted up there, but it’s not being hit very often, I won’t incur many (any?) compute hours.  Turns out, compute hours are a measurement of how many hours your site (more correctly your instances) exist on the Azure servers.  Back when I was starting with Azure, I wrote a super simple ASP.NET MVC “Hello World” app and published it to Azure – super easy.  I first published to staging, incremented the instance count to 2 just to see how easy that was, and then published a new version to production.  Out there, I set the instance count to 3, and learned how to “flip the VIP”, switching staging with production in (not exactly) the blink of an eye.

Fast forward 5 days, and we get our first billing email, saying that we’ve reached the trial period’s limit of 700 COMPUTE HOURS!  Seriously?  3 instances in production, 2 in staging, but nobody is hitting these sites, nobody would know they’re out there (for that matter the staging one has a GUID in the URL so it’s not discoverable by accident).  Long story short, after further reading, I find that a compute hour is any portion of an hour that any of your instances (staging or production) are deployed to the cloud!

A couple more little gotchas:

  • You get charged for the first hour the minute your app gets deployed, and another hour is charged at the top of each clock hour.  So if you deploy a single instance at 10:50AM, you get charged 1 hour for the first 10 minutes, and then at 11:00, the second hour is charged.  If you’re unlucky enough to deploy 5 instances in the 59th minute of an hour, you will get billed for 10 hours in 2 minutes of human time.
  • Each time you deploy your project, a new clock is started.  If you are iteratively trying something out and deploy 3 versions in an hour, you get hit for 3 hours (assuming single instance on a small VM).
  • A “VM size factor” is used to multiply the hours for the larger VMs.  A small instance (counts as 1 core) is a value of 1, medium 2, large 4, extra large 8.

To summarize the compute hours – here is the formula:

# of instances (each web and worker role instance counts individually, for both staging and production) * VM size factor * number of partial clock hours

For example, my Hello World deployment – 3 small production instances plus 2 small staging instances left deployed across one clock-hour boundary – racks up 5 * 1 * 2 = 10 compute hours while doing absolutely nothing.

Rule #2 – Turn off diagnostics before you publish to Azure

This one is really buried.  There is a setting in the properties for each Azure role that allows you to turn diagnostics on or off – it’s ON by default.  Diagnostics are captured by Azure on the local VM, and then persisted to YOUR storage account VERY FREQUENTLY if you have this turned on.  The problem is, each time the logs are saved to your storage account, you get hit with storage transactions.  In those first 5 days of my Hello World app, we were averaging over 7000 transactions per day.  When you only get 50,000 transactions in the trial period, it’s easy to see how you can go over.  The MS support engineer told me the default is that the logs are persisted EVERY MILLISECOND.  I have trouble believing that, but that’s what he said.  It’s “really fast” whatever it is.

Turn OFF diagnostics unless you need them, and if you need them, throttle down the period after which they’re persisted to storage.

Right-click each role in your Azure project and choose Properties.  On the Configuration tab, clear the “Enable Diagnostics” checkbox.

[Screenshots: role properties – Configuration tab, “Enable Diagnostics” checkbox]

If you DO need logging enabled, there are ways to control exactly what data gets logged and the period at which it’s written to your storage account.  You can either use code in your OnStart method, or place a configuration file in your storage account to control the settings.  I plan on writing a post about the configuration based method soon.

Bottom Line

The bottom line with all this Azure billing: watch your bill (you can view it online through the portal) like a hawk, even if you have a simple app and are just in the “free” trial period.  Always know what unit of measure you’re dealing with:  human minutes, or Azure hours.  :)

Kinect in Windows 8 on .NET Rocks

My coworker Danny Warren and I recorded a .NET Rocks session a couple weeks ago that just went live tonight.  We discuss how we got a Windows 8 / WinRT application to communicate with the Microsoft Kinect.  I blogged about how we pulled that off here, but check out the podcast to hear it first hand.

.NET Rocks show #714 – Dan Hanan and Danny Warren Mix Kinect and Metro

Snoop 2.7.0

This post is long overdue. I have written about Snoop for WPF a couple times, and it’s been a while since I gave an update.

Back in September, we published v2.7.0 (release announcement).  One of the coolest new features is the ability to drag and drop a cross-hair onto the app you want to snoop – you no longer have to wait for the list of applications to refresh.  We also added a DataContext tab and fixed various bugs.

[Screenshot: Snoop 2.7.0 – dragging the cross-hair onto the target app]

I’ve been spending a bit of my RECESS time lately getting back into contributing to the code base. We have some really cool features in the works — I’ll post again when the next version hits the streets.

ASP.NET Membership Schema Created Automatically

UPDATE: A few weeks after posting this, I learned more about what’s going on with the membership providers. See the update here.

It’s been a few years since I’ve done any ASP.NET work. Recently, I fired up Visual Studio 2010 and created an ASP.NET MVC project that uses membership.  I remembered that I needed the table schema to support the membership functionality, which you can create by running aspnet_regsql.exe.  This tool shows a wizard that allows you to add or remove the database objects for membership, profiles, role management, and personalization.

[Screenshot: the aspnet_regsql.exe wizard]

Here are the contents of the MembershipTest database after running the tool.  Notice the tables are prefixed with “aspnet_” and there are views and stored procedures to support the functionality.

[Screenshot: MembershipTest database objects created by the tool – “aspnet_” tables, views, and stored procedures]

Now we just edit web.config to set where the membership database is located (MembershipTest).

  <connectionStrings>
    <add name="ApplicationServices" connectionString="data source=localhost;Initial Catalog=MembershipTest;
        Integrated Security=SSPI;MultipleActiveResultSets=True"
      providerName="System.Data.SqlClient" />
    <add name="DefaultConnection" connectionString="data source=localhost;Initial Catalog=MembershipTest;
        Integrated Security=SSPI;MultipleActiveResultSets=True"
      providerName="System.Data.SqlClient" />
  </connectionStrings>

We haven’t written any code – we just have the MVC project that was created from the VS project template.  Run it, and register a new user, thus exercising the membership logic to create a new user in the database.  Check out the database schema. There are a handful of new tables listed, those WITHOUT the “aspnet_” prefix.

[Screenshot: the database after registering a user – new tables WITHOUT the “aspnet_” prefix]
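For context, the registration that triggered all this boils down to a single Membership call (sketched from the stock MVC 3 AccountController – the parameter values here are illustrative):

    // System.Web.Security.Membership -- the provider creates the
    // database schema on the way into this call
    MembershipCreateStatus status;
    Membership.CreateUser( "dan", "p@ssw0rd!", "dan@example.com",
                           null, null, true, null, out status );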

When we look in the aspnet_Users table (which was created by the aspnet_regsql tool), our user is not there.  Look in the Users table, and it IS there.  What’s going on here?  From what I can tell, the objects created by the aspnet_regsql tool ARE NOT USED by the latest SqlMembershipProvider and DefaultMembershipProvider.  So who is creating the required objects (those without the “aspnet_” prefix)?

So far, I haven’t found any documentation on this, but Reflector is our friend.  Looking through the provider assemblies, I find the code that is creating the database and schema at run time!

In the MVC project AccountController, we call Membership.CreateUser in the Register method.  Let’s find that method using Reflector.  Look up DefaultMembershipProvider.CreateUser, which calls the private Membership_CreateUser.  At the very top of that method, it calls ModelHelper.CreateMembershipEntities( ).

[Screenshot: Reflector – Membership_CreateUser calls ModelHelper.CreateMembershipEntities()]

Follow that call chain down, and eventually you get to see the first part of the schema creation – the actual database.  After that, it goes through a whole bunch of code to generate the objects themselves.

[Screenshot: Reflector – the schema creation code, starting with the database itself]

So just to prove it to myself, I deleted the MembershipTest database, and ran my project again (same connection string in config, pointing to the non-existent MembershipTest database).  Run the project, create a user.  Sure enough, the database is created, the required objects are there (and we never ran the aspnet_regsql tool).  It seems that the newest providers don’t need any of the old Views or Stored Procedures either. None are created.

[Screenshot: the MembershipTest database and schema, recreated without ever running aspnet_regsql]


The only thing I can come up with is that the ASPNET_REGSQL tool is obsolete.  Maybe with the advent of the Entity Framework Code-First technology, Microsoft took the attitude that they’ll create the DB objects in code if they’re not already there.

Note: it turns out that you don’t even have to run the web site project and register a new user.  You can use the ASP.NET Configuration tool (Project menu in VS) and create a user there.  It uses the same provider configuration, so that will also create the required database schema.

Even better – I ran this same test against SQL Azure, and it works fine as well.  Creates the database and objects, and membership works just fine (UPDATE: …but running the site IN Azure blows up, since the provider is not installed there yet. See follow-up post.)


Using Kinect in a Windows 8 / Metro App

We have been working with the Kinect for a while now, writing various apps that let you manipulate the UI of a Windows app while standing a few feet away from the computer – the “10 foot interface” as they call it.  Very cool stuff.  These apps make use of the Microsoft Kinect for Windows SDK to capture the data coming from the Kinect and translate it into types we can use in our apps:  depth data, RGB image data, and skeleton points.  Almost all of these apps are written in C# / WPF and run on Windows 7.

Last month a few of us went to the Microsoft //BUILD/ conference, and came back to start writing apps for the new Windows 8 Metro world.  Then naturally, we wanted to combine the two and have an app that uses Kinect in Windows 8 Metro.  At InterKnowlogy we have a “Kiosk Framework” that fetches content (images, audio, video) from a backend (SQL Server, SharePoint) and has a client for various form factors (Win7, Surface, Win Phone 7) that displays the content in an easy-to-navigate UI.  Let’s use the Kinect to hover a hand around the UI and push navigation buttons!  Here’s where the story begins.

One of the First Metro Apps to Use Kinect

Applications that are written to run in Windows 8 Metro are built against the new Windows Runtime (WinRT) API, the replacement for the Win32 API that we’ve been using for decades.  The problem when it comes to existing code is that assemblies written in .NET are not runtime compatible with WinRT (which is native code).  There is a lot of equivalent functionality in WinRT, but you have to port existing source code over, make changes where necessary, and compile specifically against WinRT.  Since the Kinect SDK is a set of .NET assemblies, you can’t just reference it in your WinRT / Metro app and start partying with the Kinect API.  So we had to come up with some other way…

You CAN write a .NET 4.5 application in Windows 8, using Visual Studio 11, and it will run on the “desktop” side of the fence (an alternate environment from the Metro UI, used for running legacy apps).  So we decided to take advantage of this and write a “Service” UI that runs in the classic desktop environment, connects to the Kinect and receives all the data from it, and then furnishes that data out to a client running on the Metro side.  The next issue was – how to get the data over to our Kiosk app running in Metro?  Enter sockets.  There is a native socket implementation in the WinRT framework, and we can use that to communicate over a socket channel to the .NET 4.5 desktop app, which replies to the client (Metro) socket with the Kinect data.
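Conceptually, the Metro side of that channel is only a few lines (a sketch using the WinRT socket types – the port number and the “POS?” request format are made up for illustration):

    // Windows.Networking.Sockets / Windows.Storage.Streams
    var socket = new StreamSocket();
    await socket.ConnectAsync( new HostName( "localhost" ), "4530" );

    // ask the desktop service for the latest right-hand position
    var writer = new DataWriter( socket.OutputStream );
    writer.WriteString( "POS?" );
    await writer.StoreAsync();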

Some Bumps in the Road

Writing the socket implementation was not conceptually difficult.  We just want the client to poll at a given frame rate, asking for data, and the service to return simple Kinect skeleton right-hand position data.  We open the socket, push a “request” message across to the service, and the service writes binary data (a few doubles) back to the caller.  When pushing bytes across a raw socket, obviously the way you write and read the data on each side must match.  The first problem we ran into was that the BinaryWriter in the .NET 4.5 framework was writing data differently than the DataReader in WinRT was reading it.

As with any pre-release software from MS, there is hardly any documentation on any of these APIs.  Through a ton of trial and error, I found that I had to set the Unicode and Byte Order settings on each side to something that would match. Note the highlighted lines in the following code snippets.

    // Send data from the service side

    using ( MemoryStream ms = new MemoryStream() )
    {
        using ( BinaryWriter sw = new BinaryWriter( ms, new UnicodeEncoding() ) )
        {
            lock ( _avatarPositionLock )
            {
                sw.Write( (short)_lastRightHandPosition.TrackingState );  // written as Int16 -- the client reads it back with ReadInt16()
                sw.Write( _lastRightHandPosition.X );
                sw.Write( _lastRightHandPosition.Y );
            }

        }

        Send( ms.GetBuffer() );
    }
    // Receive data in the client 

    DataReader rdr = e.GetDataReader();

    // bytes based response.  3 ints in a row
    rdr.UnicodeEncoding = UnicodeEncoding.Utf16LE;
    rdr.ByteOrder = ByteOrder.LittleEndian;
    byte[] bytes = new byte[rdr.UnconsumedBufferLength];

    var data = new JointPositionData();
    var state = rdr.ReadInt16();
    Enum.TryParse<JointTrackingState>( state.ToString(), out data.TrackingState );
    data.X = rdr.ReadDouble();
    data.Y = rdr.ReadDouble();

    UpdatePositionData( data );

Once I got the socket channel communicating simple data successfully, we were off and running.  We built a control called HoverButton that just checks whether the Kinect position data is within its bounds, and if so, starts an animation to show the user they’re over the button.  If they hover long enough, we fire the Command on the button.
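The hit-testing inside HoverButton is conceptually simple – something like the following rough sketch (not our actual implementation; the dwell time and the assumption that the hand point is already translated into the button’s coordinate space are mine):

    public class HoverButton : Button
    {
        private static readonly TimeSpan DwellTime = TimeSpan.FromSeconds( 1.5 );
        private DateTime? _hoverStartedUtc;

        // called every frame with the current hand position
        public void UpdateHandPosition( Point handPoint )
        {
            var bounds = new Rect( 0, 0, ActualWidth, ActualHeight );
            if ( bounds.Contains( handPoint ) )
            {
                if ( _hoverStartedUtc == null )
                {
                    _hoverStartedUtc = DateTime.UtcNow;   // start the dwell clock (and the hover animation)
                }
                else if ( DateTime.UtcNow - _hoverStartedUtc >= DwellTime &&
                          Command != null && Command.CanExecute( CommandParameter ) )
                {
                    Command.Execute( CommandParameter );  // "push" the button
                    _hoverStartedUtc = null;
                }
            }
            else
            {
                _hoverStartedUtc = null;                  // hand left the button; reset
            }
        }
    }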

The next problem was connectivity from the client to “localhost”, which is where the service is running (just over in the desktop environment).  Localhost is a valid address, but I kept getting refused connections.  Finally I re-read the setup instructions for the “Dot Hunter” Win8 SDK sample, which describe a special permission that’s required for a Win8 app to connect to localhost.

[Screenshot: the package family name in the app manifest]

Open a command prompt as administrator and enter the following command (substitute your package name for the last param):

    CheckNetIsolation LoopbackExempt -a -n=interknowlogy.kiosk.win8_r825ekt7h4z5c

There is no indication that it worked – I assume silence is golden here.  (I still can’t find a way to list all the packages that have been given this right, in case you ever wanted to revoke it.)

[Screenshot: running CheckNetIsolation from an administrator command prompt]

Finally, a couple other minor gotchas:  the service UI has to be running as administrator (to open a socket on the machine), and the Windows Firewall must be turned OFF.  Now we have connectivity!

What’s Next?

Beyond those two problems, the rest was pretty straightforward.  We’re now fiddling with various performance settings to achieve the best experience possible.  The skeleton data is available from the Kinect on the desktop side at only about 10 frames per second.  We think this lower rate is mostly due to the slower hardware in the Samsung Developer Preview device we got from BUILD.  Given that speed, the Metro client is currently asking for Kinect data from the service at 15 frames per second.  We are also working on better smoothing algorithms to prevent a choppy experience when moving the hand cursor around the Metro UI.

Crazy WinRT XAML Bug in Windows 8 Developer Preview

I’ve been playing with the developer preview release of Windows 8 that we got from //BUILD/.  Every day, we find more and more “issues” that are either just straight up bugs, things that are not implemented yet, or maybe even issues that won’t be resolved because WinRT is “just different”.  This one was especially “fun” to track down – and when we did, we couldn’t believe the issue.

Start with a simple WinRT application in Visual Studio 11.  Add a class library project that will hold a couple UserControls that we’ll use in the main application.  Pretty standard stuff.

Create 2 UserControls in the library; the first will use the second.  UserControl1 can just have some container (a Grid) that uses UserControl2 (which just has an ellipse).  In the main application, add a reference to the ClassLibrary and use UserControl1 in the MainPage.  Your solution should look something like this:

[Screenshot: solution structure – WinRT app plus ClassLibrary1 with two UserControls]

  <!-- MainPage.xaml -->
  ...
  xmlns:controls="using:ClassLibrary1.Controls"
  ...

  <Grid x:Name="LayoutRoot"
        Background="#FF0C0C0C">

    <controls:UserControl1 />

  </Grid>

  <!-- UserControl1.xaml -->
  ...
  xmlns:controls="using:ClassLibrary1.Controls"
  ...

  <Grid x:Name="LayoutRoot"
        Background="#FF0C0C0C">

    <controls:UserControl2 />

  </Grid>

  <!-- UserControl2.xaml -->

  <Ellipse Height="40" Width="40" Fill="Red" />

Now here’s the key step.  Add a new class to the ClassLibrary, say Class1, and have that class IMPLEMENT INotifyPropertyChanged (be sure to choose the right using statement – the one from Windows.UI.Xaml.Data).

    public class Class1 : INotifyPropertyChanged
    {
        #region INotifyPropertyChanged Members

        public event PropertyChangedEventHandler PropertyChanged;

        #endregion
    }

That’s it.  Run it.

You will get a runtime exception:

[Screenshot: the runtime exception]

Crazy, huh?  The Microsoft guys have confirmed this to be a known bug.  I just can’t imagine what that bug would be.  How is a simple UNREFERENCED class that implements INPC getting in the way of a couple UserControls?

Ahh … the joys of coding against pre-release software…

Windows 8 Development–First Impressions

Along with 5000 other developers, I attended //BUILD/ in September and came home with the Samsung pre-release hardware loaded with Visual Studio 11 and the Win8 bits.  Since then, I’ve been writing a few apps, getting my feet wet writing code for Metro / WinRT apps in C#.  Here are some early impressions on the Windows 8 operating system in general, and then some thoughts on what it takes to develop Metro apps.

Don’t Make Me Hop That Fence Again

The grand new look of Windows 8 is the “Metro” style interface that you see right away on the Start Screen.  The traditional start menu is gone, you don’t have a hierarchy of start menu icons that you wade through, and you don’t have a bunch of icons scattered on your desktop.  Instead you have the clean looking, larger sized “tiles”, some of which are “live”, showing you changes to application data in real-time.  You can find lots of info on the Win8 Metro interface here, so I won’t talk about that.  Apps written for Metro, built against the Windows Runtime (WinRT), run over here on this “Metro” side of the fence.

What is so interesting to me though is that there exists an alternate world in Windows 8 – the classic desktop.  This is where legacy apps run, where the familiar desktop can be littered with icons, and the taskbar shows you what’s running, etc.  Everything from old C and C++ code to .NET 4 apps runs over here on this “Desktop” side of the fence.

So here then is the problem.  Throughout the day, I run a few apps to check Facebook and Twitter (Metro), then I start up Visual Studio 11 (desktop), then I start Task Manager (Metro), then maybe a command prompt (desktop).  Each time I’m in one environment and run an app from the other, the OS switches context to the other side of the fence.  This becomes SUPER ANNOYING over the course of a day.  I’m in the desktop, and all I want to do is fire up a command prompt.  Click the start menu (which takes me to the Metro start screen), then choose command prompt, which switches me back to where I just came from and fires up the command prompt.  I’ve become accustomed to pinning all my necessary apps to the desktop side taskbar so I don’t have to go to the Metro screen to run them.

Enough of the Rant – What About Development?

The Windows 8 Runtime (WinRT) is a completely re-written layer, akin to the old Win32 API, that provides lots of new services to the application developer.  When sitting down to write a new app, you have to decide right away: are you writing a .NET 4.5 app, or a Metro app?  (This dictates which side of the fence you’ll be hanging out in.)  Some of the coolest features of the Win8 runtime are:

Contracts

Contracts are a set of capabilities that you declare your application to have, and the OS will communicate with your app at runtime based on those contracts.  Technically they’re like interfaces, where you promise to implement a set of functionality so the OS can call you when it needs to.  The most popular contracts are those that are supported by the “Charms” – Search, Share, and Settings.  These are super cool.  Implement a couple methods and your app participates as a search target, so that when the user enters some text in the search charm, your app is listed and can furnish the results.

protected override void OnLaunched( LaunchActivatedEventArgs args )
{
    var pane = Windows.ApplicationModel.Search.SearchPane.GetForCurrentView();
    pane.QuerySubmitted += QuerySubmittedHandler;

    pane.SuggestionsRequested += SuggestionsRequestedHandler;
    pane.ResultSuggestionChosen += ResultSuggestionChosenHandler;

    pane.PlaceholderText = "Search for something";

	// ...
}

private void QuerySubmittedHandler( SearchPane sender, SearchPaneQuerySubmittedEventArgs args )
{
    var searchResultsPage = new SearchTest.SearchResultsPage1();
    searchResultsPage.Activate( args.QueryText );
}

private void SuggestionsRequestedHandler( SearchPane sender, SearchPaneSuggestionsRequestedEventArgs args )
{
    var searchTerm = args.QueryText;

    var sugg = args.Request.SearchSuggestionCollection;
    for ( int i = 0; i < 5; i++ )
    {
        // just faking some results
        // here you would query your DB or service
        sugg.AppendResultSuggestion( 
            searchTerm + i, 
            "optional description " + i, 
            i.ToString(), 
            Windows.Storage.Streams.StreamReference
              .CreateFromUri( new Uri( "someurl.jpg" ) ),
            "alternate text " + i );
    }

    //defer.Complete();
}

protected override void OnSearchActivated( SearchActivatedEventArgs args )
{
    var searchResultsPage = new SearchTest.SearchResultsPage1();
    searchResultsPage.Activate( args.QueryText );
}

Sharing is just about as easy. Declare the contract in your manifest and your app is a share target.  When the user chooses to share to your app, the OS sends you the data the user is sharing and you party on it.

protected override void OnSharingTargetActivated( ShareTargetActivatedEventArgs args )
{
	var shareTargetPage = new ShareTestTarget.SharingPage1();
	shareTargetPage.Activate( args );
}

Settings leave a little bit to be desired, in my opinion.  You declare the contract, and add a “command” to tell the OS what to put in the settings charm panel.  But when the user chooses that command (button), your app has to show its own settings content (usually in a slide out panel from the right).  That makes changing a setting a 4 step process (swipe from the right, touch the button, change the setting, dismiss the panel).  I really hope in future versions they figure out a way to embed our application settings content right there in the settings panel, not in our own app.

    Windows.UI.ViewManagement.ApplicationLayout.GetForCurrentView().LayoutChanged += LayoutChangedHandler;
    DisplayProperties.OrientationChanged += OrientationChangedHandler;
    Application.Current.Exiting += ExitingHandler;

    // register a command to show in the Settings charm; its callback
    // shows our own settings UI in a slide out panel
    SettingsPane pane = SettingsPane.GetForCurrentView();
    SettingsCommand cmd = new SettingsCommand("1", "Pit Boss Settings", (a) =>
        {
            ShowSettingsPanel();
        });
    pane.ApplicationCommands.Add(cmd);

Async Everywhere

The WinRT team has built the APIs with asynchrony in mind from day one. At //BUILD/, we heard over and over that your UI should NEVER freeze because it’s busy doing something in the background.  To that end, any API that could potentially take longer than 50 milliseconds is implemented as an asynchronous method.  Using the async and await keywords (we’ve seen these in a CTP for .NET 4) that are built into the WinRT-based languages, we can write our code in a flowing, linear fashion that makes sense semantically.  No longer do we have to use BackgroundWorkers and worry about marshalling results back to the calling thread.

    HttpClient http = new HttpClient();

    // awaiting GetAsync yields the response itself -- no callback,
    // no marshalling back to the calling thread
    var response = await http.GetAsync( url );

    string responseText = response.Content.ReadAsString();

“Free” Animations

XAML support is implemented in WinRT as native code, and is available to use from C#, VB, and C++.  One of the coolest features they added to what we’re already used to in WPF & Silverlight is the “built-in” animation library.  Simply add any number of transitions on a container element, and any of the associated operations in ANY OF ITS CHILDREN will use that transition.  The list of built-in transitions include: Entrance (content appearing for the first time), Content (change content from old to new), Reposition (fluidly move elements when they get repositioned), Add/Delete (fade out/in and move existing items), and Reorder.

  <Grid>
    <Grid.ChildTransitions>
      <TransitionCollection>
        <EntranceThemeTransition
            HorizontalOffset="500" />
      </TransitionCollection>
    </Grid.ChildTransitions>
  </Grid>

Well, I’ve jumped all around in this post, and it’s long enough already.  Look for future posts with more specifics about features and code that we’re working into our apps.

Some Development Gotchas (aka Bugs)

Here’s a quick list of some issues we’ve run into these first few weeks.

  • You can only set DataContext in code-behind – future releases will allow setting in XAML
  • There are a handful of bugs in FlipView.SelectedItem.  If you’re binding to the SelectedItem property, you have to manually set the SelectedItem to null and then back to the correct value (get it from your ViewModel) in code-behind – see the sketch after this list.  This will obviously be fixed in a future release.
  • You must use ObservableVector<object> based properties in your VMs if you are binding to ItemsControls.ItemsSource and want the UI to update when there are changes to the collection.  ObservableVector<T> does not exist in the class library – you can find source code for it in many of the SDK samples.
  • Use the correct INotifyPropertyChanged implementation (from Windows.UI.Xaml.Data, not System.ComponentModel)
  • There is no VS item template for custom controls in C#, so VS won’t create the Themes\Generic.xaml structure for you.  We’re still tracking this one down.  We’ve used a Style resource to describe the template for a Button-derived custom control and it works, but you don’t get access to the Template property in OnApplyTemplate, so we’ve resorted to using VisualStateManager instead of referring to template parts and changing UI that way.
  • I’m sure there are more to come … in fact, check out my co-worker Danny’s post (should be up in the next couple days) with more details on what we’ve been encountering while writing our first few Win8 apps.
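Here’s the shape of that FlipView workaround from the second bullet (hypothetical names – flipView is the control, CurrentItem is the ViewModel property the binding should reflect):

    // force the FlipView to re-read its selection
    private void RefreshSelectedItem()
    {
        var item = ViewModel.CurrentItem;
        flipView.SelectedItem = null;
        flipView.SelectedItem = item;
    }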

Check out the Win8 Metro developer forums for ongoing discussions of many other issues.