About the Author

Dan Hanan is a lead software engineer at InterKnowlogy, where he works on a wide variety of software projects related to the Microsoft tech stack. At the center of his universe are .NET, C#, and XAML-based technologies such as WPF, Silverlight, Win Phone 7, and Surface, as well as non-UI bits involving WCF, LINQ, and SQL. In his spare tech-time, he is venturing outside the MS world into the depths of the Android OS. Come back once in a while to check out Dan's random collection of technology thoughts... better yet, subscribe to the RSS feed!

Using PhoneGap to Access Native iPad APIs

I’m working on a project that required an iPad “application”, which we ended up building as an HTML site accessed in Safari.  Later, that same app picked up a requirement to access the camera roll on the iPad.  Thanks to Chris Rudy here at IK for introducing us to PhoneGap – a way to wrap your HTML web page(s) in a framework that is then compiled into a native application for the OS you’re targeting (iPad, Android, WinPhone7, etc.).  Chris blogged about it here and here.

Unexpected Error

The PhoneGap framework exposes a handful of APIs to access the native hardware on the device (camera, accelerometer, compass, contacts, etc.).  This all sounded great.  I sat down to write the code, following the examples in the API docs.  I could access the camera roll no problem – the user is shown the photos, they choose one, and you wake up in an event handler in your code.  Next, I tried to post that image to a simple REST API running on my Windows machine.  No matter what I did, I would get an “Unexpected Error” from the post.  I tried the PhoneGap FileTransfer API and then some lower-level AJAX post methods.  All resulted in errors.

I let the code sit for a week until the next RECESS, when I dug a little deeper.  I finally found that I was running into a KNOWN BUG in the Camera API, and that a fix was released THAT DAY.  So now, with PhoneGap version 1.6.1, the Camera.getPicture( ) method properly returns the BYTES of the image chosen from the camera roll instead of the URL to the local file.  These base64-encoded bytes are exactly what I want to post to my web server.  The code as posted all around the web now works fine (notice I gave up on the FileTransfer object and just post the bytes using AJAX):

function browseCameraRoll() 
{
	navigator.camera.getPicture( onPhotoLoadSuccess, onFail,
		{
			quality: 50,
			destinationType: Camera.DestinationType.DATA_URL,   // return base64 data rather than a file URI
			sourceType: Camera.PictureSourceType.PHOTOLIBRARY,  // browse the camera roll instead of taking a new picture
			encodingType: Camera.EncodingType.PNG
		} );
}

function onPhotoLoadSuccess( imageData )
{
	// imageData is the base64-encoded image returned by getPicture
	var url = 'http://myserver.com/UploadImage2';
	var params = { base64Image: imageData };

	$.ajax({
		type: "POST",
		url: url,
		data: params,
		success: function (returndata) 
		{
			//alert( 'back from POST: ' + returndata.Status ); 
			grayscaleImage.src = "data:image/png;base64," + returndata.GrayscaleVersion;
		}
	});
}

File Access APIs

Today I continued by learning the file access APIs.  I simply want to write a configuration file to isolated storage the first time the application runs, and then read it on each subsequent startup.  This is super simple, and from what I can tell, does not even require PhoneGap.  The HTML5 File System APIs can be used to read and write files, create directories, etc.  Here is a good write-up on the available APIs.  I thought that since the FileWriter and FileReader objects are listed in Cordova’s PhoneGap API documentation, I was getting an instance of the file through the HTML5 APIs and then using PhoneGap APIs to read and write the file.  That doesn’t seem to be the case.  FileWriter and FileReader are HTML5 APIs.  I’m still a bit confused about why Cordova claims them as its own (probably just for the convenience of having all the docs in one place).

In any case, file access was a piece of cake – I just followed the example here.
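For anyone looking for a concrete starting point, the pattern ends up looking roughly like this (a sketch based on the PhoneGap 1.x File API, which mirrors the HTML5 one; the config.json name and callback names are just for illustration):

    // write a config file to persistent storage
    function saveConfig( configText )
    {
        window.requestFileSystem( LocalFileSystem.PERSISTENT, 0, function ( fileSystem )
        {
            fileSystem.root.getFile( "config.json", { create: true, exclusive: false }, function ( fileEntry )
            {
                fileEntry.createWriter( function ( writer )
                {
                    writer.onwriteend = function () { /* config saved */ };
                    writer.write( configText );
                }, onFileError );
            }, onFileError );
        }, onFileError );
    }

    // read it back on the next startup
    function loadConfig( onLoaded )
    {
        window.requestFileSystem( LocalFileSystem.PERSISTENT, 0, function ( fileSystem )
        {
            fileSystem.root.getFile( "config.json", { create: false }, function ( fileEntry )
            {
                fileEntry.file( function ( file )
                {
                    var reader = new FileReader();
                    reader.onloadend = function ( evt ) { onLoaded( evt.target.result ); };
                    reader.readAsText( file );
                }, onFileError );
            }, onFileError );
        }, onFileError );
    }

    function onFileError( error ) { console.log( "File error: " + error.code ); }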

Installing Win8 Consumer Preview on the Samsung 700T

Since September, I’ve been using the Developer Preview version of Windows 8 that came with the Samsung 700T tablets we got at //BUILD/, but today Microsoft released the next update to Win8 – the Consumer Preview.  Time to update the device!

I couldn’t find any documentation showing how to upgrade / install the new OS, and heard that upgrades are not supported in these pre-release bits, so I was ready to wipe the device and install clean.  Through trial and error, here’s how I installed the Consumer Preview version.

The plan was to boot from a USB thumb drive that has the contents of the installation media on it.   You’ll see below that it wasn’t the normal “boot from ISO” experience that I’m used to on a desktop PC.

Create Bootable USB Thumb Drive

Most importantly, you can only use a thumb drive that is 4 GB in size.  The hardware does not recognize drives larger than that.

Use the Windows 7 USB/DVD download tool to create a bootable USB drive from the .ISO file.  I think this is similar to expanding the contents of the .ISO onto the USB drive, but it also makes the drive bootable.  As you’ll see below, I didn’t end up actually BOOTING from the USB drive, so making it bootable is probably an extraneous step.

Insert the USB thumb drive into the device.

Enter “BIOS” at Startup

This was new to me.  When the device is powering up, hold down the START button until you see the “Preparing Options” message at the bottom.

IMG_20120229_141943

This brings you to what seems to be the equivalent of BIOS options, but with a Win8 layer on top of it running the show (touch works in this UI).

(Excuse the nasty camera pics here – couldn’t take actual screenshots when down in the low level BIOS options.)

Choose TROUBLESHOOT.

Choose ADVANCED OPTIONS.

Choose COMMAND PROMPT.

At this point you will be prompted for account credentials.  This was another thing new to me – requiring account credentials on the device at this “low level” before you can go hacking on the system.

IMG_20120229_142049

Command Prompt – Run SETUP

Now you will be at a familiar command prompt, on the X: drive (not sure what drive that is).

CD to the D: drive, which on my device is the USB thumb drive.  Notice the directory listing shows the Win8 Consumer Preview installation software from the ISO.

IMG_20120229_142149

Now from the root of the D drive, run SETUP.  Up comes the first step of the installation…

IMG_20120229_142206

From here, after a couple intro steps, I chose a “Custom” installation since I wanted to wipe my device rather than upgrade, and then had to choose a partition…

Crazy List of Partitions

It turns out there are 5 partitions on my device.  The main one, where the Win8 Developer Preview was installed, is Partition 5.

IMG_20120229_142506

I deleted that one (drive options, advanced) and installed to that partition.

Finally … we’re on to the “normal” installation of Windows 8…

IMG_20120229_143027

Windows 8 Consumer Preview

Ahh, the start screen…

Tablet Win8 CP - Start Screen

Microsoft Surface – Installing an App on V2

Previously, I wrote about porting my Surface Craps application from Surface v1 to v2.  Now that the app has been upgraded, here’s a short post on how to update the installer to work with v2.

Not much is different between v1 and v2 in the makeup of an application that is registered with the Surface shell.  Your program binaries go in a directory, and then (new to v2) a SHORTCUT points the Surface shell to that directory.

Program Files

Unlike v1, which required you to install your program files in a specific “Program Data” subdirectory, you can now install your program files to a directory of your choice – usually under %ProgramFiles(x86)%.

CropperCapture113

Program Data Shortcut

Next, make your installer create a SHORTCUT to the directory where you install your program files, and place the shortcut in the Surface v2 directory:  C:\ProgramData\Microsoft\Surface\v2.0\Programs.

It’s important that the shortcut points to the DIRECTORY where your program files are, not directly to the application .xml file.
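In other words, the layout ends up something like this (the SurfaceCraps names are just placeholders for your own app and install directory):

    C:\ProgramData\Microsoft\Surface\v2.0\Programs\SurfaceCraps.lnk
        Target:  C:\Program Files (x86)\InterKnowlogy\SurfaceCraps\      <-- the install DIRECTORY, not the .xml file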

Here’s how the Surface programs directory and shortcut properties should look.

CropperCapture115

That’s it – you should now see your application in the chooser when you run the shell.

Microsoft Surface – Porting from v1 to v2

A couple years ago, I wrote Surface Craps during RECESS to explore the Microsoft Surface table and APIs.  Now that Surface 2 is out, it’s time to upgrade the software to run on the new hardware.  This is the first in a series of blog posts about the process of upgrading from Surface v1 to v2.

Obviously this first requires that you have the Surface 2 SDK installed.  To start, I branched the project tree in TFS to make a v2 copy.

Open the project in VS 2010.  Update the project references.  The new referenced assemblies are in C:\Program Files (x86)\Microsoft SDKs\Surface\v2.0\Reference Assemblies.

  • Microsoft.Surface.dll
  • Microsoft.Surface.Presentation.dll
  • Microsoft.Surface.Presentation.Generic.dll

I was using the Visual State Manager from the WPF Toolkit; it’s now included in WPF 4, so I removed that reference as well.

Now we start the brute force process of fixing code that doesn’t compile.  Here’s a list of what I found.  Many of these changes are really about moving to WPF 4 in general, since that’s where most of the touch functionality comes from in the Surface 2 environment.

Control Name Changes

Surface v1               Surface v2
SurfaceUserControl       UserControl
SurfaceContentControl    ContentControl

Event Handling Changes

Surface v1                         Surface v2
ContactDown event                  use TouchDown
(Preview)ContactUp/Down            use (Preview)TouchUp/Down
ContactChanged                     TouchMove
ContactEventHandler                EventHandler<TouchEventArgs>
ApplicationActivated, etc.         OnWindowInteractive, Noninteractive, Unavailable (see a default new Surface 2 project for an example of these event handlers)
ApplicationLauncher.Orientation    ApplicationServices.InitialOrientation
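In code, the event hookups change along these lines (a trivial sketch; the element and handler names are made up):

    // Surface v1
    // dieVisual.ContactDown += new ContactEventHandler( OnDieContactDown );

    // Surface v2 - plain WPF 4 touch events
    dieVisual.TouchDown += new EventHandler<TouchEventArgs>( OnDieTouchDown );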

Property / Method Changes

Surface v1                     Surface v2
e.Contact.IsTagRecognized      e.TouchDevice.GetIsTagRecognized( )  (add a using statement for Microsoft.Surface.Presentation.Input to get the extension method)
e.Contact.Tag.Byte.Value       e.TouchDevice.GetTagData().Series & .Value
e.Contact.GetPosition( )       e.TouchDevice.GetPosition( )
e.Contact.GetOrientation( )    e.TouchDevice.GetOrientation( )
ScatterViewItem.IsActive       ScatterViewItem.IsContainerActive
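Putting a few of those together, a v2-style handler ends up looking roughly like this (a sketch; the handler name is made up, and you need the using statement mentioned above for the tag extension methods):

    // using Microsoft.Surface.Presentation.Input;   // GetIsTagRecognized / GetTagData extension methods

    private void OnDieTouchDown( object sender, TouchEventArgs e )
    {
        if ( e.TouchDevice.GetIsTagRecognized() )
        {
            var tagValue = e.TouchDevice.GetTagData().Value;      // was e.Contact.Tag.Byte.Value in v1
            Point position = e.TouchDevice.GetPosition( this );   // was e.Contact.GetPosition( ) in v1
            // ...position the die based on the tag value
        }
    }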

Manipulation Processing

In Surface v1, you would use the Affine2DManipulationProcessor to handle gestures such as flick, rotation, scale, etc.  In v2, you just use the manipulation processing that’s provided by the WPF 4 UIElement.

Surface v1                                 Surface v2
(n/a)                                      On any element where you want to track manipulations, set IsManipulationEnabled = true
BeginTrack( )                              nothing to do in v2
Affine2DManipulationCompleted, … events    UIElement.ManipulationCompleted, … events
e.Velocity.Length                          e.FinalVelocities.LinearVelocity.Length
e.TotalTranslation.X                       e.TotalManipulation.Translation.X
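Pulling the manipulation rows together, the v2 / WPF 4 version of the flick handling looks something like this (a sketch; 'chipStack' is a made-up element):

    chipStack.IsManipulationEnabled = true;    // replaces the Affine2DManipulationProcessor / BeginTrack( ) setup

    chipStack.ManipulationCompleted += ( sender, e ) =>
    {
        double speed  = e.FinalVelocities.LinearVelocity.Length;   // was e.Velocity.Length
        double deltaX = e.TotalManipulation.Translation.X;         // was e.TotalTranslation.X
        // ...decide whether this was a flick and animate accordingly
    };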

Resolution

That’s all I had to change as far as API differences go.  Next came the resolution differences.  The Surface v2 display runs at 1920 x 1080, so if you have any UI that does not stretch, or any hardcoded coordinate-based math in your software, it will have to be updated.

Tags

Surface v2 does not support Identity tags, but it does support Byte tags.  See the tag-related methods above for the API differences.

For Surface Craps specifically, I ran into a problem with the transparent dice we got from Microsoft for Surface v1. The dice have byte tags on them that are practically transparent, but have enough IR reflectivity to be picked up by the Surface v1 infrared cameras.  In Surface v2, the tags are recognized by interpreting the contact information using PixelSense technology, and something in that processing is not recognizing the mostly transparent tags on the dice.  I will continue to investigate this issue and write another post if I have any update.  For now – it’s a bummer that the physical dice do not work.

Installation

In a future post I will talk about the differences in what it takes to install your application in Surface v2…

The Real-Time Web With SignalR

Today for RECESS I looked into SignalR.  The SignalR site (which currently is just the github repository) describes it as an “async signaling library for ASP.NET to help build real-time, multi-user interactive web applications.”  What does this mean to you and me?  This library allows you, in just a few lines of code, to communicate in real time among the browsers that are hitting your site.

Scott Hanselman does a good job of summarizing what SignalR does and how we got here, so I won’t repeat it here.

I followed the super quick sample – a browser-based chat application as described in Scott’s post.  I ran out of time before I could get the lower level connection working, but the higher level “Hub” based connection works great.

It’s super cool to see it working – I have 3 different browsers (IE, Chrome, Firefox) in the chat together.  When I enter a chat message in one, the other two receive the message instantaneously.

    $("#broadcast").click(function () {
      // send() is the method on the ASP.NET Hub class running on the server
      chat.send($('#msg').val());
    });

The use of dynamic objects in the ASP.NET implementation of the Hub lets the server call any function on the clients, as long as that function exists in the client side JavaScript.

    public class Chat : Hub
    {
        public void Send( string message )
        {
            // addMessage is a client side javascript function
            Clients.addMessage(message);  
        }
    }
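For completeness, the rest of the client-side wiring looks roughly like this (a sketch in the hubs syntax of the current bits; the #messages element is hypothetical, and the page also references the auto-generated /signalr/hubs script):

    $(function () {
        // proxy generated from the server-side Chat hub
        var chat = $.connection.chat;

        // addMessage is the function the server calls via Clients.addMessage(message)
        chat.addMessage = function (message) {
            $('#messages').append('<li>' + message + '</li>');
        };

        // the #broadcast click handler shown above goes here too

        // open the connection
        $.connection.hub.start();
    });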

Here are a couple shots of the IE Developer Tools (F12) when running the chat client.

Waiting for a response on the first call to signalr/connect:

CropperCapture81

Got the response from the first connect (a message was broadcast to all the clients), now turn right around and connect to wait for the next “reply” from the server:

CropperCapture82

Azure 101 – Diagnostics via Configuration

In a previous post I talked about how diagnostics by default are turned ON for Azure roles, and that you should turn them OFF if you don’t want to incur a ton of Azure storage transaction charges.  I finally spent some time diving into the various configuration settings and now understand how to leave diagnostics ON, and adjust their configuration throughout the lifetime of the running Azure instances.  This way, you don’t persist any log messages during “normal” operation, but can then ratchet up the settings to debug an issue, then turn them back off.

An alternative to using the diagnostic APIs in your role’s OnStart method (and thereby hardcoding the settings) is to use a configuration file that the Azure diagnostics runtime will interrogate at a given interval while your instance is running.  If you make a change to the config, the settings take effect the next time the runtime checks the config file.  There is a lot of info about how to author the diagnostics.wadcfg file and where to put it so that it gets deployed correctly.  Here is a lot more info, including the order in which config settings are found.  What I could not find, however, was information about where the config file gets deployed, and how to change it at runtime.

First off, contrary to what I told you in the previous post, you need to start with diagnostics turned ON.  This deploys a default diagnostics config file to a container in your Azure storage account called wad-control-container/(deploymentId)/instanceRoleFile (example shown below is for a local compute instance – when deployed to Azure, there will be GUID based folders).  By default, the various diagnostic sections in the config file will each have a default transfer period of 0, which (I think) means “don’t persist these diagnostics (logs, crash dumps, event logs, etc) in my storage account”.  If you follow the links above to create a diagnostics.wadcfg file, then any sections in that file override the defaults when your role is deployed.  Additionally, if you have any code in your OnStart method that changes diagnostics settings, they will also be reflected in the runtime config file.  Basically the resulting config file deployed to storage is the merged result of configs and code-based settings you have in your role.

With a source diagnostics.wadcfg file as shown here (contains only a single Logs element)

CropperCapture75

the resulting configuration file deployed to my storage account looks like this (shown here using Azure Storage Explorer). Notice my Logs section got merged with the other default sections (with transfer periods of 0):

CropperCapture76
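For reference, a source file with just a Logs element looks roughly like this (a sketch against the wadcfg schema; the quota, poll interval, and log level values are only examples):

    <DiagnosticMonitorConfiguration xmlns="http://schemas.microsoft.com/ServiceHosting/2010/10/DiagnosticsConfiguration"
                                    configurationChangePollInterval="PT1M"
                                    overallQuotaInMB="4096">
      <Logs bufferQuotaInMB="1024"
            scheduledTransferLogLevelFilter="Verbose"
            scheduledTransferPeriod="PT1M" />
    </DiagnosticMonitorConfiguration>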

In my sample app, I have a link that causes my controller to write a couple Trace.TraceXXX messages (one warning and one informational).  With the setup above, I click the link, causing a few trace messages, and after a minute passes, I check the WADLogsTable in table storage and see my trace messages (the filter is set to Verbose, so I see both Warning and Informational messages).

CropperCapture77

Now I can upload a new version of the configuration file (keeping the name the same); this time it has the Logs section filter set to Warning.

CropperCapture79

I cause a few more of the same trace messages to get created, wait a minute, and check the WADLogsTable.  This time we only see warnings (the “informational” messages have been filtered out based on the uploaded config file settings).

CropperCapture78

To summarize, this is probably the best way to configure diagnostics since it’s outside of your application code.  You can upload a new configuration file any time and the runtime will adjust to your new settings.  (I should mention that the runtime will poll for configuration changes at an interval based on the value you set in the configuration file).

ASP.NET Universal Providers

Earlier this month, I posted about how the ASP.NET membership providers create the required database schema for me automagically when I first hit the site.  Here is a quick update to that statement now that I more thoroughly understand what’s going on.

Scott Hanselman does a great job introducing us to the ASP.NET Universal Providers for Session, Membership, Roles and User Profile, so I won’t repeat it here.  What I DIDN’T get from his article is that the Universal Providers are the default for a new ASP.NET MVC3 project (and that they’re not yet supported on Azure).  I hadn’t touched ASP.NET or MVC for multiple years, so I just went along quietly, created a new project, pointed my connection string at a SQL Server, and things were all working.  It wasn’t until I published the site to Azure that things fell apart.

The Universal Providers (“DefaultMembershipProvider”) are referenced throughout the web.config for all the different pieces of membership, and here you see it set as the default provider (the one that membership code in the site will look for and use).

  <membership defaultProvider="DefaultMembershipProvider">
    <providers>
      <clear />
      <add name="AspNetSqlMembershipProvider"
            type="System.Web.Security.SqlMembershipProvider"
            connectionStringName="ApplicationServices"
            enablePasswordRetrieval="false"
            enablePasswordReset="true"
            requiresQuestionAndAnswer="false"
            requiresUniqueEmail="false"
            maxInvalidPasswordAttempts="5"
            minRequiredPasswordLength="6"
            minRequiredNonalphanumericCharacters="0"
            passwordAttemptWindow="10"
            applicationName="/" />
      <add name="DefaultMembershipProvider"
            type="System.Web.Providers.DefaultMembershipProvider, 
                   System.Web.Providers, Version=1.0.0.0, Culture=neutral,
                   PublicKeyToken=31bf3856ad364e35"
            connectionStringName="DefaultConnection"
            enablePasswordRetrieval="false"
            enablePasswordReset="true"
            requiresQuestionAndAnswer="false"
            requiresUniqueEmail="false"
            maxInvalidPasswordAttempts="5"
            minRequiredPasswordLength="6"
            minRequiredNonalphanumericCharacters="0"
            passwordAttemptWindow="10"
            applicationName="/" />
    </providers>
  </membership>

This worked fine as long as we were developing on our local machines (where the Universal Providers are installed), and even when we hit the SQL Azure database from our local machines (the provider is local, the database is remote – so the code can create the SQL Azure compatible schema on the fly regardless of the DB location).  When the site is deployed to Azure, where the Universal Providers are not installed, you get an error: Unable to find the requested .Net Framework Data Provider. It may not be installed.

It took forever to figure out that this was the problem, and one minute to fix it – just switch web.config to use the SQL Providers, which are already configured, just not as the default.

  <membership defaultProvider="AspNetSqlMembershipProvider">

The bottom line: the Universal Providers are not yet available on Azure servers, so you have to go with the legacy SQL Providers.  These providers, as always, require you to run the ASPNET_REGSQL tool to create the required database schema before you hit the site.

  • Universal Providers: create their required DB schema on first use, do not have the aspnet_xxx prefix on the tables, and do not use any views or stored procedures.
  • SQL Providers: require you to run ASPNET_REGSQL before first use; the tables are namespaced with “aspnet_” in front, and there are views and stored procedures that go along with the tables (all created by the ASPNET_REGSQL tool).
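For reference, creating that schema looks something like this (the server and database names here are placeholders; -E uses Windows authentication, and -A all installs all of the features):

    aspnet_regsql.exe -S myServer -E -d myMembershipDb -A all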

Azure 101 – Billing (when 1 minute equals 5 hours and 2 might equal 10)

I thought I’d start a blog series all about Azure – things I’ve learned while getting a few web sites up and running in the (Microsoft) cloud.  First up – billing.  Microsoft offers a “free” 90-day trial to get your feet wet, but BE CAREFUL – it may not stay free, especially if you quickly go over your quotas and start getting charged.

A lot of this information is available deep in the dark depths of the Azure SDK help, and you might just tell me to RTFM, but even a Microsoft support engineer admitted to me:  “there is WAY too much documentation out there, it’s impossible to find anything”.  My thoughts exactly, hence this quick post to summarize the information.

There are multiple “billing meters” used to charge you for your Azure usage:  Compute Time, Database Size, Storage Amount, Storage Transactions, Data Transfer (in and out), and a couple others.  I will focus on the two we have found to be the most misunderstood, and the ones for which you will most probably go over the free quota if you don’t know what to watch for:  Compute Time and Storage Transactions.

Rule #1 – Compute Hours are NOT just the time your site is WORKING on a request

My first assumption was that if my site is hosted up there but not being hit very often, I won’t incur many (any?) compute hours.  Turns out, compute hours measure how many hours your site (more correctly, your instances) exists on the Azure servers.  Back when I was starting with Azure, I wrote a super simple ASP.NET MVC “Hello World” app and published it to Azure – super easy.  I first published to staging, incremented the instance count to 2 just to see how easy that was, and then published a new version to production.  Out there, I set the instance count to 3, and learned how to “flip the VIP”, switching staging with production in (not exactly) the blink of an eye.

Fast forward 5 days, and we get our first billing email, saying that we’ve reached the trial period’s limit of 700 COMPUTE HOURS!  Seriously?  3 instances in production, 2 in staging, but nobody is hitting these sites; nobody would even know they’re out there (for that matter, the staging one has a GUID in the URL, so it’s not discoverable by accident).  Long story short, after further reading, I found that a compute hour is charged for any portion of an hour that any of your instances (staging or production) is deployed to the cloud!

A couple more little gotchas:

  • You get charged for the first hour(s) the minute your app gets deployed.  When the clock rolls over to the next hour, the next compute hour starts.  So if you deploy a single instance at 10:50 AM, you get charged 1 hour for the first 10 minutes, and then at 11:00 the second hour is charged.  If you’re unlucky enough to deploy 5 instances in the 59th minute of an hour, you will get billed for 10 hours in 2 minutes of human time.
  • Each time you deploy your project, a new clock is started.  If you are iteratively trying something out and deploy 3 versions in an hour, you get hit for 3 hours (assuming a single instance on a small VM).
  • A “VM size factor” is used to multiply the hours for the larger VMs.  A small instance (which counts as 1 core) has a factor of 1, medium 2, large 4, extra large 8.

To summarize the compute hours – here is the formula:

# of instances (each web and worker role instance counts individually, for both staging and production) * VM size factor * number of partial clock hours

Rule #2 – Turn off diagnostics before you publish to Azure

This one is really buried.  There is a setting in the properties of each Azure role that allows you to turn diagnostics on or off – it’s ON by default.  Diagnostics are captured by Azure on the local VM and then persisted to YOUR storage account VERY FREQUENTLY if you have this turned on.  The problem is, each time the logs are saved to your storage account, you get hit with storage transactions.  In those first 5 days of my Hello World app, we were averaging over 7000 transactions per day.  When you only get 50,000 transactions in the trial period, it’s easy to see how you can go over.  The MS support engineer told me the default is that the logs are persisted EVERY MILLISECOND.  I have trouble believing that, but that’s what he said.  It’s “really fast”, whatever it is.

Turn OFF diagnostics unless you need them, and if you do need them, dial back how often they’re persisted to storage.

Right-click each role in your Azure project and choose Properties.  On the Configuration tab, clear the “Enable Diagnostics” checkbox.

CropperCapture80

CropperCapture50

If you DO need logging enabled, there are ways to control exactly what data gets logged and the period at which it’s written to your storage account.  You can either use code in your OnStart method, or place a configuration file in your storage account to control the settings.  I plan on writing a post about the configuration based method soon.
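For the code-based route, the general shape is something like this (a sketch using the SDK 1.x diagnostics APIs in your role’s OnStart; the filter and transfer period values are just examples):

    // using Microsoft.WindowsAzure.Diagnostics;

    public override bool OnStart()
    {
        DiagnosticMonitorConfiguration config = DiagnosticMonitor.GetDefaultInitialConfiguration();

        // only persist warnings and above, and only transfer them every 5 minutes
        config.Logs.ScheduledTransferLogLevelFilter = LogLevel.Warning;
        config.Logs.ScheduledTransferPeriod = TimeSpan.FromMinutes( 5 );

        DiagnosticMonitor.Start( "Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", config );

        return base.OnStart();
    }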

Bottom Line

The bottom line with all this Azure billing: watch your bill (you can view it online through the portal) like a hawk, even if you have a simple app and are just in the “free” trial period.  Always know what unit of measure you’re dealing with:  human minutes, or Azure hours.  :)

Kinect in Windows 8 on .NET Rocks

My coworker Danny Warren and I recorded a .NET Rocks session a couple weeks ago that just went live tonight.  We discuss how we got a Windows 8 / WinRT application to communicate with the Microsoft Kinect.  I blogged about how we pulled that off here, but check out the podcast to hear it first hand.

.NET Rocks show #714 – Dan Hanan and Danny Warren Mix Kinect and Metro

Snoop 2.7.0

This post is long overdue. I have written about Snoop for WPF a couple times, and it’s been a while since I gave an update.

Back in September, we published v2.7.0 (release announcement).  One of the coolest new features is the ability to drag and drop a cross-hair onto the app you want to snoop, so you no longer have to wait for the list of applications to refresh.  We also added a DataContext tab and fixed various bugs.

CropperCapture35

I’ve been spending a bit of my RECESS time lately getting back into contributing to the code base. We have some really cool features in the works — I’ll post again when the next version hits the streets.