IoT: Ping Pong Scoring via Netduino

Combining InterKnowlogy’s thirst for using the latest and greatest technology with our world-famous ping pong skills produces the following result: during RECESS, I am building a small device that lets us quickly keep a digital score of our ping pong matches.

Internet of Things (IoT) is a hot topic these days, so I decided to implement the ping pong scoring system on a Netduino board.  I had dabbled with an older board a year or more ago and was frustrated: one of the first things I wanted to do was make a call to a web API service, but there was no network connectivity.  Enter the newest board, the “Netduino 3 WiFi”.  It has a built-in button and LED, and it’s extensible by way of the 3 GOBUS ports, where you can easily hook up external modules.

Hardware setup

Netduino 3 board with Gobus modules

My shopping list

Required Software

Netduino is a derivative of the Arduino board, with the ability to run .NET Micro Framework (.NET MF) code.  This means you get to write your “application” for the board using familiar tools like Visual Studio and .NET.  Here are the steps I went through (loosely following this forum post, but updated to current day):

Network Configuration

This board has WiFi (sweet!), which means you need to get it on your wireless network before you go much further.

Use the .NET Micro Framework Deployment Tool (MFDEPLOY) to configure WiFi on the board.

  • Target, Connect
  • Target, Configuration, Network
  • Set network SSID, encryption settings, network credentials, etc.
  • Reboot the device to take on the new settings!
  • GREEN LIGHT means you’re successfully connected to the network (yellow means it’s searching for a network)

Write Some Code!

After installing the VS plug-in, you now have a new project template.

File, New Project, Micro Framework – Netduino Application (Universal)

Go to the project properties and confirm two things:

  • Application, Target Framework = .NET MF 4.3
  • .NET Micro Framework, Deployment.  Transport = USB, Device = (your device)

On-board light

static OutputPort led = new OutputPort( Pins.ONBOARD_LED, false );
led.Write( true );

On-board button

NOTE: There is a bug with the on-board button in the current Netduino 3 board firmware. While your application is running, pressing the on-board button will cause a reset of the device, not a button press in your application. The work-around until the next version of the firmware is to reference the pin number explicitly, instead of using Pins.ONBOARD_BTN. See my forum post for more information.

static InputPort button = new InputPort( (Cpu.Pin)0x15, false, Port.ResistorMode.Disabled );

GO Button

Now attach a GOBUS button module and the code is a little different.  The Netduino SDK provides classes specific to each module that you use instead of general input / output port classes.

The natural way in .NET to react to button presses is to wire up an event handler.  The GoButton class has just such an event, ButtonPressed.  BUT, there’s a bug in the firmware and SDK: if you react to a ButtonPressed event and, in that handler method (or anywhere in that call stack), you make a call on the network, the call will hang indefinitely.  I discuss this and the work-around with others in a Netduino forum post.

It’s kind of ugly, but instead of wiring up to the events, for now (until the Netduino folks get it fixed), you just sample the IsPressed state of the button in a loop.

Add a reference to Netduino.GoButton.

var goButton = new NetduinoGo.Button();
if ( goButton.IsPressed )
{
	// do something
}
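Since we’re polling instead of handling events, a press should only count on the transition from not-pressed to pressed; otherwise holding the button would register over and over. Here’s a minimal sketch of that edge detection in plain C# (the class and the isPressed delegate are my own stand-ins, not part of the Netduino SDK):

```csharp
using System;

// Polls a button state and reports only the up->down transition,
// so holding the button counts as a single press.
public class EdgeDetector
{
	private readonly Func<bool> _isPressed;
	private bool _wasPressed;

	public EdgeDetector( Func<bool> isPressed )
	{
		_isPressed = isPressed;
	}

	// Call once per loop iteration; returns true only on a new press.
	public bool CheckForPress()
	{
		bool pressed = _isPressed();
		bool isNewPress = pressed && !_wasPressed;
		_wasPressed = pressed;
		return isNewPress;
	}
}
```

On the board you’d construct it with `() => goButton.IsPressed` and call CheckForPress() each time through the loop, with a short Thread.Sleep between samples.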

Go Buzzer

Add a reference to Netduino.PiezoBuzzer.

var buzzer = new NetduinoGo.PiezoBuzzer();
buzzer.SetFrequency(noteFrequency);

Talk to the Web!

You bought this board because it has WiFi, so you must want to call a web API or something similar.  In my case, I wrote a simple OWIN-based Web API service, hosted in the WPF app that is my ping pong scoreboard display.  This gives me the ability to receive HTTP calls from the Netduino board and client code, straight into the WPF application.

So a call from the Netduino application code to something like http://1.2.3.4:9999/api/Scoring/Increment/1 will give player 1 a point!
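On the server side, that endpoint just needs to bump the right player’s score and hand back the new totals. Here’s a rough sketch of the scorekeeping logic the WPF-hosted service could delegate to (class and member names are illustrative, not from the actual project):

```csharp
using System;

// Holds the running score; the Web API controller delegates to this,
// and the WPF scoreboard binds to the two score properties.
public class ScoreKeeper
{
	public int Player1Score { get; private set; }
	public int Player2Score { get; private set; }

	// playerNumber is 1 or 2, matching the /api/Scoring/Increment/{player} route.
	public void Increment( int playerNumber )
	{
		if ( playerNumber == 1 )
			Player1Score++;
		else if ( playerNumber == 2 )
			Player2Score++;
		else
			throw new ArgumentOutOfRangeException( "playerNumber" );
	}

	public void Reset()
	{
		Player1Score = 0;
		Player2Score = 0;
	}
}
```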

I do this using the HttpWebRequest and related classes from the .NET MF.

// error handling code removed for brevity...
var req = (HttpWebRequest)WebRequest.Create( url );
req.Timeout = 2000;
using ( var resp = (HttpWebResponse)req.GetResponse() )
{
	using ( Stream strm = resp.GetResponseStream() )
	{
		using ( var rdr = new StreamReader( strm ) )
		{
			string content = rdr.ReadToEnd();
			return content;
		}
	}
}

In my case, the results from my API calls come back as JSON, so I’m using the .NET Micro Framework JSON Serializer and Deserializer (Json.NetMF nuget package).

var result = Json.NETMF.JsonSerializer.DeserializeString( responseText ) as Hashtable;
if ( result != null )
{
    Debug.Print( "Score updated: " + result["Player1Score"] + "-" + result["Player2Score"] );
}

Putting that all together, I have a couple physical buttons I can press, one for each player, and a WPF based scoreboard on the wall that removes any confusion about the score!

Hope you too are having fun with IoT!

 

Crossing The Finish Line With IMSA



As Tim called out the other day, we recently went live with a brand new mobile experience for IMSA, and I had the privilege of leading the engineering team on the project here at IK. The scope and timeframe of the project were both ambitious: we delivered a brand-new, content-driven mobile app with live streaming audio and video, realtime in-race scoring results, custom push notifications and more, across all major platforms (iOS, Android, and Windows), with custom interfaces for both phone and tablet form factors – all in a development cycle of about twelve weeks. It goes without saying that the fantastic team of engineers here at IK are all rockstars, but without some cutting-edge development tools and great partnerships, this would have been impossible to get across the finish line on time:

  • Xamarin allowed our team to utilize a shared codebase in a single language (C#, which we happen to love) across all of our target platforms, enabling massive code reuse and rapid development of (effectively) six different apps all at once.
  • Working closely with the team at Xamarin enabled us to leverage Xamarin.Forms to unlock even further code-sharing than would have been otherwise possible, building whole sections of the presentation layer in a single, cross-platform XAML-based UI framework.
  • On the server side, our partners at Microsoft continued their world-class work on Azure, making Mobile App Service (née Azure Mobile Services) a no-brainer. The ability to scale smoothly with live race-day traffic, the persistent uptime in the face of tens of thousands of concurrent users making millions of calls per day, and the ease of implementation across all three platforms all combined to save us countless hours of development time versus a conventional DIY approach to the server layer.
  • Last but not least, being able to leverage Visual Studio’s best-of-breed suite of developer tools was essential to the truly heroic amounts of productivity and output of our engineering team at crunch time. And Visual Studio Online enabled the Project Management team and myself to organize features and tasks, track bugs, and keep tabs on our progress throughout the hectic pace of a fast development cycle.

The final result of this marriage between cutting-edge cross-platform technology and an awesome team of developers is a brand new app experience that’s available on every major platform, phone or tablet, and this is just the beginning – we have lots of great new features in store for IMSA fans worldwide. I’ll be following up with a couple more technical posts about particular challenges we faced and how we overcame them, and be sure to check out the next IMSA event in Detroit the weekend of May 29-30; I know I’ll be streaming the live coverage from my phone!

What is CORS?

There are lots of instances where an app needs to make a GET/POST request to another domain (one different from the domain where the app originated). When the web app makes such a request, the browser blocks the response with an “Access-Control-Allow-Origin” error. Then you ask yourself: what now?

One solution is CORS (Cross-Origin Resource Sharing), which allows resources (like JavaScript) to make cross-origin requests.
Here is an example of how to add a CORS rule that allows requests to Azure storage tables using the Azure SDK.

1. Build the connection string

string connectionString = "DefaultEndpointsProtocol=https;" +
	"AccountName={account name/storage name};" +
	"AccountKey={PrimaryKey|SecondaryKey}";

2. Create the CloudTableClient

CloudStorageAccount storageAccount = CloudStorageAccount.Parse( connectionString );
CloudTableClient client = storageAccount.CreateCloudTableClient();

3. Add CORS Rule ("*" acts as a wildcard)

CorsRule corsRule = new CorsRule()
{
  AllowedHeaders = new List<string> { "*" },
  //Since we'll only be calling Query Tables, GET alone would suffice,
  //but for the wildcard example we allow every verb:
  AllowedMethods = CorsHttpMethods.Connect | CorsHttpMethods.Delete | CorsHttpMethods.Get | CorsHttpMethods.Head | CorsHttpMethods.Merge
	| CorsHttpMethods.Options | CorsHttpMethods.Post | CorsHttpMethods.Put | CorsHttpMethods.Trace,
  AllowedOrigins = new List<string> { "*" }, //In production this would be the URL of our application.
  ExposedHeaders = new List<string> { "*" },
  MaxAgeInSeconds = 1 * 60 * 60, //Let the browser cache it for an hour
};

4. Add rules to client

ServiceProperties serviceProperties = client.GetServiceProperties();
CorsProperties corsSettings = serviceProperties.Cors;
corsSettings.CorsRules.Add( corsRule );
//Save the rule
client.SetServiceProperties( serviceProperties );
After step 4, the CORS rule is attached to the storage account. To double-check which CORS rules exist for that account, read the service properties back:

ServiceProperties serviceProperties = client.GetServiceProperties();
CorsProperties corsSettings = serviceProperties.Cors;

NOTE: If we need to set a CORS rule for blobs, we just replace CreateCloudTableClient() with:
CloudBlobClient client = storageAccount.CreateCloudBlobClient();

Supporting iOS 6.1 and iOS 7 (Xamarin.iOS)

As a developer it’s always way more fun to play with the new “toys” a new framework or OS provides. The issue we usually run into is at what point we can start using the new functionality in the applications we develop. With Apple and iOS, the adoption rate of new versions is so high and so quick that you really only need to worry about supporting the 2 latest versions. With that said, if you’re starting something brand new you’ll probably just want to build against the newest version, but if you’ve got an existing application that you want to function on both versions, you’ll need to do some work.

1st: If you don’t have Xcode 5, copy the iPhoneOS6.1.sdk and the iPhoneSimulator6.1.sdk folders from your existing Xcode 4.6.3 installation somewhere else so you can still dev for iOS 6. The simulator SDK can also be downloaded after the upgrade, but this is faster since you already have it on your machine. The SDKs are located under /Applications/Xcode.app/Contents/Developer/Platforms. In Finder choose Go => Go to Folder to get to the desired directories. Copy both of these folders somewhere else for use later.

  • iPhoneOS6.1.sdk is located here /Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/
  • iPhoneSimulator6.1.sdk is located here /Applications/Xcode.app/Contents/Developer/Platforms/iPhoneSimulator.platform/Developer/SDKs/

NOTE: If you’ve already upgraded to Xcode 5 you’ll need to download the Xcode 4.6.3 installer from the Apple Developer site located here. Once downloaded, open the .dmg file and right-click on the Xcode icon, choose “Show Package Contents” and then follow the same directory structure to get to the SDK folders.

2nd: In order to support iOS 7 make sure you are upgraded to OS X 10.8.5 and Xcode 5.

3rd: With Xcode 5 installed copy the iPhoneOS6.1.sdk and iPhoneSimulator6.1.sdk folders back to the directories you copied them out of. You should notice 7.0 versions of each SDK now in those directories as well.

4th: In Xamarin Studio, under the iOS Build view in your project options, you’ll now notice that you have the ability to specify 6.1 as the SDK version as well as 7.0. Under the iOS Application view is the more important Deployment Target property. Set this to 6.1 in order for the application to still be deployable to iOS 6.1 devices; otherwise only iOS 7 devices will be able to get it from the app store. Making this change allows it to be deployed to both versions, but you still have one more thing to do.

5th & Last: Apple made some drastic changes with how certain properties work, how the NavigationController’s title bar and the iOS status bar look and work, and a bunch of other things. You’ll need to add version specific code to handle/fix these changes. I created a VersionHelper static class to do the comparison logic for me. There are definitely other ways to accomplish this but here it is:

public static class VersionHelper
{
	private static Version _systemVersion;
	public static bool CurrentVersionIsGreaterThanOrEqualTo( Version versionToCompareAgainst )
	{
		if ( _systemVersion == null )
		{
			_systemVersion = new Version( UIDevice.CurrentDevice.SystemVersion );
		}

		return _systemVersion >= versionToCompareAgainst;
	}
}
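The helper works because System.Version implements the comparison operators, so version strings compare numerically rather than alphabetically. A quick sketch of that behavior in plain C# (on a device the string would come from UIDevice.CurrentDevice.SystemVersion; the helper name here is hypothetical):

```csharp
using System;

public static class VersionDemo
{
	// On device, systemVersion would be UIDevice.CurrentDevice.SystemVersion.
	public static bool IsAtLeast( string systemVersion, Version required )
	{
		// Version compares component by component: 7.0.2 >= 7.0, 6.1 < 7.0
		return new Version( systemVersion ) >= required;
	}
}
```

So a guard for iOS-7-only styling might read: `if ( VersionDemo.IsAtLeast( systemVersion, new Version( 7, 0 ) ) ) { /* iOS 7 specific code */ }`.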

With that you should be good to go.

C# to F#: I’m a Convert

In my previous blog post C# to F#: My Initial Experience and Reflections I wrote about learning F# and converting a C# formula model into an F# formula model. As of writing my previous post the jury was still out on performance. I am very happy to say that I have some very quantifiable results and I’m ecstatic to announce that F# took C# to school!

Formula Model

The formula model we created can be found here. The structure is essentially: a Model contains many Leagues; a League contains many Divisions; a Division contains many Teams. A Team plays at every Stadium, thus creating many StadiumTeamData objects. Each Stadium contains details.

In the Excel file you’ll find 2 Team sheets, a LeagueSummary sheet, a Stadiums sheet, and a Stadium Schedules sheet. The Stadium Schedules sheet contains the schedule for each Stadium found in the Stadiums sheet, which is only a list of Stadiums and their details. Each Team sheet contains StadiumTeamData (a row of data), which is the lowest form of calculation in this model. The LeagueSummary sheet sums the 2 Team sheets and calculates 10 years of data which can be used to create a chart. Our sample apps do not chart, as our test was not about prettiness but rather about performance. The Excel model is a very simple model; it was used only to prove the calculations were being performed correctly.

In the source code included at the end of this article you will notice the existence of 2 data providers: MatchesModelNoMathDataProvider and PerformanceTestNoMathDataProvider. The matches model provider matches the Excel scenario with 1 League, 1 Division, 2 Teams, 2 Stadiums, and a single mandatory Theoretical Stadium. The Los Angeles Stadium is ignored in code. The performance model, however, has 2 Leagues. Each League has 9 Divisions. Each Division has 10 Teams. Each Team references 68 Stadiums. There is also a single mandatory Theoretical Stadium. This gives a grand total of 12,240 StadiumTeamData instances. These instances represent the bulk of the formula work and, in the case of PDFx, the bulk of property dependency registrations.

Implementations

C# and PDFx

The first implementation we created was in C# and uses the PDFx (Property Dependency Framework). This implementation represents the pattern we have used for the last year for client implementations. Due to familiarity, this implementation took about 16-24 hours to implement, which is pretty fast. This is why we really like the PDFx: it helps to simplify implementation in C#.

Because PDFx is a pull-based approach, no custom events are required; the PropertyChanged event triggers everything under the hood for the PDFx. There is a catch though. This means that each property in a chain of dependent properties will raise the PropertyChanged event. In our example of 12,240 StadiumTeamData instances, this means that PropertyChanged is called roughly 500,000 times just on the first calculation of top-level data. With all of the properties in existence, properties are accessed 2,487,431 times, and of those accesses, 1,176,126 are doing work to set up the required property dependency registrations.

So at the end of the day, the C# with PDFx implementation takes about 55 seconds to load the object model and another 24 seconds to run the first calculation, for a grand total of 79 seconds to load the application. Another real bummer is that PDFx is currently not thread safe, so it must run on the UI thread, which means that for about 1:20 the application looks like it’s not doing anything. Very bad, very very bad! On top of that, each time we change a value via slider on a single StadiumTeamData, it takes about 6 seconds to finish calculating, again blocking the UI thread.

A very important detail to note is that when a single StadiumTeamData has an input value change, only objects that depend on that StadiumTeamData (and objects that depend on those objects, etc.) are recalculated. This means that out of 12,240 StadiumTeamData instances, only 1 is being recalculated, and only 1 team, 1 division, 1 league, and the top-level values of the formula model are being recalculated.
We have been trying to improve PDFx performance for some time now, and we have a few more tricks up our sleeves, but most of the tricks are around load time, not calculation time.

F#

After listening to a ton of .NET Rocks recently, I’ve learned a lot about F#. I was so intrigued that I set out to create an F# implementation of the same formula model we created in C# and PDFx. The implementation took about 32 hours, but that’s also with a ton of research. By the end, I think I could have written the entire thing in less than 16 hours, which would be less time than the C# and PDFx implementation.

I learned that functional programming lends itself to parallelization more than object-oriented programming. Because functional programming encourages not modifying values (everything is immutable by default), the F# implementation can run on a background thread as well. The cool part is that many sub-calculations can run at the same time, and the output can then be aggregated to run a final answer calculation. Our current formula model is perfect for this approach. Because we no longer depend on the PDFx to know when a property changes, the PropertyChanged event is raised only once to trigger all calculations, and then only once for each property that is updated by the output of the calculations, so the UI is still able to respond.

The object model takes a bit more than 1 second to load, and the first calculation is done in another 2.5 seconds. The total load time is about 3.5 seconds. Compared to 79 seconds, that’s 95% faster in F# just for load. Each subsequent calculation, when a value changes via slider on a StadiumTeamData, takes about 1.2 seconds. Compared to 6 seconds, F# is about 80% faster for each calculation.

Unlike the C# and PDFx implementation, I have not optimized the F# formula model to only calculate the object tree that changed; instead, all 12,240 StadiumTeamData instances are recalculated each time any value changes in the object model. So we could still become more performant by only calculating the single StadiumTeamData that changed and the related team, division, league, and then the top-level values of the formula model.
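To illustrate the fan-out/aggregate idea with a C# sketch of my own (not code from either implementation): once the per-item calculation is side-effect free, the items can be computed in parallel and the results summed at the end, which is exactly the property the immutable F# model buys us.

```csharp
using System;
using System.Linq;

public static class ParallelCalc
{
	// Each per-item calculation is independent (no shared mutable state),
	// so the work can fan out across cores and be aggregated at the end.
	public static double TotalScore( double[] inputs )
	{
		return inputs
			.AsParallel()
			.Select( x => x * x + 1.0 )  // stand-in for the real per-item formula
			.Sum();
	}
}
```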

Results

A complete breakdown of my comparisons can be found in this excel file. I wanted to call out a few very important results in this post to wrap things up.

Readability

I used to think that C# and PDFx was very readable, and while it is for very simple models, it can get unwieldy. F#, however, is the clear winner here. I reduced lines of code by the hundreds. I can see one entire formula in one file, compact enough to fit on my screen at one time, versus C# and PDFx, which takes up multiple files due to multiple classes and requires a lot of scrolling due to the number of lines a single property takes up. This seriously increases maintainability.

Performance

When it comes to performance, C# and PDFx were blown out of the water: application load time was improved by 95% and calculation time by 80%. This is serious business!

Time to Implement

This is a slightly skewed comparison due to experience. I was impressed that C# and PDFx took 16-24 hours while F#, a brand-new language for me, took only 32 hours. I am convinced that I can write F# faster than C# using PDFx on future projects.

Next Steps

I will be diligently searching for opportunities to use F# in production client code. It is a no-brainer to me. I agree with the statement from many .NET Rocks podcast guests talking about F# and functional programming: “Every software engineer should learn F#!” It just makes sense!

Resources

Source Code: Formula Implementation Proving Ground

C# and PDFx Executable

F# Executable

Formula Excel Workbook

F# vs. C# Comparison Excel File

Creating an iOS Settings Bundle (Xamarin.iOS)

With the investigation I’ve been doing in building an iOS application using Xamarin, I’ve now gotten to the point where I wanted to put some settings into the iOS Setting app for my application. I found this nice thread that gave the 3 easy steps to set up a settings bundle.

It literally is as simple as written, but there were a couple gotchas that I ran into that I wanted to forward along.

  1. The Settings.bundle folder needs to be in the project root, NOT under the Resources folder (where some examples showed it). I’m not sure if this is a new change or not, but I spent some time banging my head against the wall over this one.
  2. If you do not register default values for your settings, reading them will return the default for the data type (i.e. null for string, false for bool, etc.). The DefaultValue you specify in the Root.plist file for a setting is the default value the control will show, NOT the value of the setting. Below is an example of registering default values for settings.
NSUserDefaults userDefaults = NSUserDefaults.StandardUserDefaults;
NSMutableDictionary appDefaults = new NSMutableDictionary();
appDefaults.SetValueForKey( NSObject.FromObject( true ), new NSString( "BooleanSettingKey" ) );
userDefaults.RegisterDefaults( appDefaults );
userDefaults.Synchronize();

Accessing the value for a setting is as simple as the following:

bool settingValue = NSUserDefaults.StandardUserDefaults.BoolForKey( "BooleanSettingKey" );

One last thing that I found in an example solution here was how to listen in your app for when settings have been changed and wanted to pass it along:

NSObject observer = NSNotificationCenter.DefaultCenter.AddObserver( (NSString)"NSUserDefaultsDidChangeNotification", DefaultsChanged );

private void DefaultsChanged( NSNotification obj )
{
	// Handle the settings changed
}

The biggest bummer I found in the iOS Settings Bundle is that everything is statically defined. So if you happen to have a collection of items that you want updated from a server, you’ll have to do this on your own inside your application.

Working with UITableView (Xamarin.iOS)

After setting up my solution, my next step was to figure out how to display a list in iOS with the ability to select an item. In WPF, my first thought would be to utilize the ListBox control, bind its ItemsSource to the underlying data, and define an ItemTemplate for how I want each item to look. It’s not so simple in iOS. There’s a nice list control called UITableView that provides support for a number of neat things (indexed lists, splitting the items into sections/grouping them, selection, etc.). However, to accomplish the most important part, hooking it up to data, you have to define a data source object that you assign to the .Source property of the UITableView. There are a number of ways to accomplish this, but I went with creating a source that inherits from UITableViewSource. Xamarin has a nice guide that I used for guidance (Working with Tables and Cells).

Coming from WPF and MVVM, I wanted to make this source reusable rather than following all the examples that were hardcoded to a specific object type. So I decided to create an ObjectTableSource, and since I’m still in the early stages of development, I decided to use the built-in styles for the cell appearance rather than making a custom cell. With this in mind, I needed to make sure the objects provided to my table source had specific properties for me to utilize, so I created an interface called ISupportTableSource and required the source’s items to implement it.

public interface ISupportTableSource
{
	string Text { get; }
	string DetailText { get; }
	string ImageUri { get; }
}
public class ObjectTableSource : UITableViewSource

When inheriting from UITableViewSource you must override RowsInSection and GetCell. RowsInSection is exactly what it sounds like: you return how many items are in that section. My current version only supports 1 section, so it returns the total number of items. GetCell returns the prepared UITableViewCell. The UITableView supports virtualization of its cell controls, so in order to get the cell you need to call the DequeueReusableCell method on the table view. In versions earlier than iOS 6 this will return null if the cell hasn’t been created yet. In iOS 6 and later you can choose to register a cell type with the UITableView, and then a cell is always returned. However, going that path means you can’t specify which of the 4 built-in styles to use (since it is specified in the constructor only), so I refrained from registering the cell type and handle null instead. When preparing the cell I also load any images on a background thread so the UI stays responsive, but I’ll cover that in another post.

public override int RowsInSection( UITableView tableview, int section )
{
	return _items.Count;
}

public override UITableViewCell GetCell( UITableView tableView, NSIndexPath indexPath )
{
	// if there are no cells to reuse, create a new one
	UITableViewCell cell = tableView.DequeueReusableCell( CellId )
			       ?? new UITableViewCell( _desiredCellStyle, CellId );

	ISupportTableSource item = _items[indexPath.Row];
	if ( !String.IsNullOrEmpty( item.Text ) )
	{
		cell.TextLabel.Text = item.Text;
	}
	if ( !String.IsNullOrEmpty( item.DetailText ) )
	{
		cell.DetailTextLabel.Text = item.DetailText;
	}
	if ( !String.IsNullOrEmpty( item.ImageUri ) )
	{
		cell.ImageView.ShowLoadingAnimation();
		LoadImageAsync( cell, item.ImageUri );
	}
	return cell;
}

The remaining thing to make this usable was to provide a way to notify when an item has been selected. There isn’t an event on UITableViewSource like I expected; instead I needed to override the RowSelected method and fire my own event, providing the object found at the selected row.

public override void RowSelected( UITableView tableView, NSIndexPath indexPath )
{
	ISupportTableSource selectedItem = _items[indexPath.Row];

	// normal iOS behaviour is to remove the blue highlight
	tableView.DeselectRow( indexPath, true );

	OnItemSelected( selectedItem );
}
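The OnItemSelected call above raises an ordinary .NET event declared on the source. A minimal sketch of that plumbing (the event and args names are mine; only ISupportTableSource comes from the code above):

```csharp
using System;

public interface ISupportTableSource
{
	string Text { get; }
	string DetailText { get; }
	string ImageUri { get; }
}

public class ItemSelectedEventArgs : EventArgs
{
	public ItemSelectedEventArgs( ISupportTableSource selectedItem )
	{
		SelectedItem = selectedItem;
	}

	public ISupportTableSource SelectedItem { get; private set; }
}

// A trivial item implementation, just to make the sketch self-contained.
public class SampleItem : ISupportTableSource
{
	public SampleItem( string text ) { Text = text; }
	public string Text { get; private set; }
	public string DetailText { get { return null; } }
	public string ImageUri { get { return null; } }
}

public class SelectionSource
{
	public event EventHandler<ItemSelectedEventArgs> ItemSelected;

	// Public here so the sketch is easy to exercise; in the real table
	// source this would be protected and called from RowSelected.
	public void OnItemSelected( ISupportTableSource item )
	{
		var handler = ItemSelected;
		if ( handler != null )
		{
			handler( this, new ItemSelectedEventArgs( item ) );
		}
	}
}
```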

Beginning Xamarin and Xamarin.iOS

I’ve recently started delving into using Xamarin and Xamarin.iOS to get an understanding of its capabilities and see what I could do with it. After my first little app, I have to say it is really impressive what the people at Xamarin have done!

I’m coming at this from years of experience in C# and WPF/Silverlight/XAML and I wanted to describe my initial findings. For this post I plan to cover the general setup.

Installation

My goal was to have an iPad application that does something similar to a Master-Detail set of views where I could display a collection of images. Because of this, I wanted to get a grasp on both Xamarin Studio and the Visual Studio integration, so I installed Xamarin for Windows (on my primary dev machine) and Xamarin for OS X (on a Mac Mini we have at our office) from here. NOTE: On Windows 8, run the Xamarin installer as Administrator, otherwise it will error out near the end. The installation took a while because it downloaded everything I needed to develop for iOS (and for Android, since I want to look into that in the future too), but once everything was installed I was able to start developing instantly.

The installation guides for Xamarin.iOS, located here, provided excellent instruction for hooking up Visual Studio 2012 to remote debug on the Mac Mini (Section: 6.2. Connecting to the Mac Build Host), and the Xamarin services that were installed made it so that I found and connected to the Mac immediately. On another note, Synergy is an amazing tool to use to share your keyboard and mouse between the 2 devices. You need to make sure both machines are on the same network though in order for it to work (quickly at least).

Solution Setup

With Xamarin installed, I decided to get my feet wet by following Xamarin’s Hello, iPhone guide. (They’ve done a very nice job with their guides covering a nice range of topics.) For someone who has never developed in Xcode (and is pretty much a Mac beginner) this provided a nice tutorial of the tool.

With a general idea of how to get started, I created my solution in VS 2012 and on the Mac I used “Finder” to connect to my machine and open up the solution in Xamarin Studio. I’ve found that if I want to use Xcode’s Interface Builder to design my UI I need to add the iPad View Controller via Xamarin Studio, since adding it in Visual Studio didn’t create the .xib file.

My next step was to set up a core library, since I desire to try Xamarin’s Android functionality in the future, to enable code reuse. Xamarin is currently developing support for referencing Portable Class Libraries (PCLs), but until they’ve got that functioning we have to go more manual routes. I created the PCL to hold my core files (model objects primarily), but went the route of linking to all those files in the iOS project. Once I work on the Android counterpart I’ll be able to update with whether I think it’s the right way to go.

EDIT: With the release of Xamarin.iOS 7.0.1, they’ve apparently fixed the PCL build issue I was running into. So I was able to remove the links to all the files and reference the PCL library instead!

Debugging

It is extremely nice to be able to dev and debug in VS 2012 while connected to the iOS simulator on the Mac. When running into exceptions being thrown, I found that, if it wasn’t immediately apparent as to what the exception was caused by, debugging from Xamarin Studio on the Mac provides better exception information.

I’ll go into more detail on discoveries I ran into in other posts, but overall the development process using Xamarin and Xamarin.iOS has been very interesting and enjoyable. I would definitely recommend it.

PDFx – Property Dependency Framework – Part I, Introduction

The PDFx is a lightweight open source .NET library that allows developers to describe dependencies between Properties in declarative C# code. Once relationships are registered, the framework monitors property changes and ensures that the INotifyPropertyChanged.PropertyChanged event is fired for all directly and indirectly dependent properties in a very efficient way. The library is available for WPF, Silverlight, WinRT (Windows Store) and Windows Phone.

I’ve developed the PDFx as an InterKnowlogy RECESS project and published the source code and examples on CodePlex.

In a series of blog posts, I am going to cover the library’s most important features.

Introduction

The Property Dependency Framework (PDFx) is a lightweight library that allows you to capture the inherent relationship among the properties of your classes in a declarative manner. It is common in applications to have properties whose values depend upon the values of other properties. These properties need to be reevaluated when a change occurs in the properties on which they depend. For example, if “A” depends on “B” and “C”, we need to reevaluate the value of A whenever either B or C changes. Furthermore, B and C may not even be direct properties of the same class as that in which A exists. Instead, they may be properties of another class, or even properties of the items within a collection.

In large applications, complex chains of such dependencies can exist. Although C# gives us the INotifyPropertyChanged construct to send out a change notification (typically to UI elements), no framework to our knowledge has allowed the network of dependencies to be specified in a straightforward manner, with the proper change notifications issued automatically. The PDFx framework allows the dependencies of properties to be specified declaratively, building from them an internal network of dependencies for you. Changes to any property in the network are then propagated automatically and efficiently.

The PDFx framework establishes a simple pattern for capturing the relationships of data in applications, and removes tremendous amounts of “plumbing”. It also helps clarify and enhance the role of the View Model within MVVM, reducing the scattering of business logic throughout value converters. Once you’ve made use of PDFx, you will likely wonder how the gap it fills has gone unaddressed for so long.
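To make the “plumbing” concrete, here is a hedged sketch of the hand-rolled code PDFx is designed to eliminate (a simplified two-input example, not the full diagram below; the class and member names are hypothetical and not part of the PDFx API). Without the framework, every setter must remember to raise PropertyChanged for each property that depends on it:

```csharp
using System.ComponentModel;

// Hypothetical hand-rolled version: each setter has to know about,
// and notify for, every calculated property that depends on it.
public class AlgebraViewModel : INotifyPropertyChanged
{
	public event PropertyChangedEventHandler PropertyChanged;

	private int _b1, _b2;

	public int B1
	{
		get { return _b1; }
		set
		{
			_b1 = value;
			OnPropertyChanged("B1");
			OnPropertyChanged("A1");   // easy to forget when A1's formula changes
		}
	}

	public int B2
	{
		get { return _b2; }
		set
		{
			_b2 = value;
			OnPropertyChanged("B2");
			OnPropertyChanged("A1");
		}
	}

	// Calculated property: no backing field, value derived on demand.
	public int A1
	{
		get { return B1 + B2; }
	}

	private void OnPropertyChanged(string name)
	{
		var handler = PropertyChanged;
		if (handler != null)
			handler(this, new PropertyChangedEventArgs(name));
	}
}
```

With only two inputs this is tolerable; with the dependency chains shown in the example below, the manual notifications multiply quickly and drift out of sync with the formulas.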

Example

Since pictures tell better stories than words, I would like to demonstrate the benefits of this workhorse with a small example:

[Figure: PDFx algebra hierarchy — input properties D1–D5 feed calculated properties C1–C3, B1–B2, and A1]

The depicted algebra hierarchy represents a C# class. Green circles stand for input properties, while purple circles indicate calculated properties. The arrows show the underlying math operations as well as the property dependencies.
As the developer of such a scenario, you’re responsible for ensuring that all directly and indirectly dependent properties get reevaluated when an input property changes. Furthermore, for efficiency, you also want to ensure that unrelated properties do not get reevaluated.

If, for example, Property D1 changes, it is necessary to reevaluate C1, B1 and A1.
However, a change of D3 requires a reevaluation of only C2, B1, B2 and A1.

Using the PDFx, you no longer have to hardcode those relationships manually; instead, you can rely on the library to take care of that job for you.
All you have to do is register the relationships in human-readable code within the implementation of a property:

//....
public int A1
{
	get
	{
		Property(() => A1)
			.Depends(p => p.On(() => B1)
			               .AndOn(() => B2));
		
		return B1 + B2;
	}
}

public int B1
{
	get
	{
		Property(() => B1)
			.Depends(p => p.On(() => C1)
			               .AndOn(() => C2));
		
		return 2*C1 - C2;
	}
}

public int B2
{
	get
	{
		Property(() => B2)
			.Depends(p => p.On(() => C2)
			               .AndOn(() => C3));
		
		return -C2 + C3;
	}
}

public int C1
{
	get
	{
		Property(() => C1)
			.Depends(p => p.On(() => D1)
			               .AndOn(() => D2));

		return D1 + D2;
	}
}

public int C2
{
	get
	{
		Property(() => C2)
			.Depends(p => p.On(() => D3));

		return 3*D3;
	}
}

public int C3
{
	get
	{
		Property(() => C3)
			.Depends(p => p.On(() => D4)
			               .AndOn(() => D5));
		
		return D4 + D5;
	}
}
//....

Advanced Features

  • Dependencies on properties of external objects
  • Dependencies on ObservableCollections
  • Property Value Caching
  • (Deferred) Callbacks for Property Changes

Main Benefits

  • The dependency registration resides within the Property Getter implementation. This way you’re likely to notice immediately that an update of the registration is necessary when you change a property’s implementation.
  • The PDFx fires the PropertyChanged event only for Properties that are directly or indirectly dependent on the changed source property, thereby guaranteeing a high level of efficiency.
  • Properties whose data relies completely on the value of other properties do not need to encapsulate backing fields. They can be implemented solely in the Property Getter, thereby ensuring full integrity.

Async or Asink?

More and more these days there is a push to run application code asynchronously to avoid blocking the UI thread and making the application unresponsive.  As the need for this type of programming has become more prevalent, thankfully the APIs to do so have also become easier to use.  That ease has led to more and more async code sprinkled through applications like the little candies on top of donuts.  Unfortunately, easier APIs also make it easier to abuse the functionality by not properly implementing exception handling.

All of these async sprinkles can sink an application fast, as code may be exploding left and right with the application user none the wiser.  From the user’s perspective everything seems like it’s OK (i.e. the app doesn’t abort or show error dialogs), but nothing seems to be working.

I’d like to take a moment to look at various ways to async a task and highlight the exception handling issues related to each.  To do this I am going to use the following program framework.  My goal each time is to first see whether an unhandled exception occurs and if so then to handle it appropriately.

	class Program
	{
		static void DoSomething(object state)
		{
			try
			{
				throw new InvalidOperationException();
			}
			catch
			{
				Console.WriteLine("Exception thrown in method DoSomething.");
				throw;
			}
		}

		static void Main(string[] args)
		{
			AppDomain.CurrentDomain.UnhandledException += CurrentDomain_UnhandledException;

			//
			// TODO: Insert async code that calls the method DoSomething
			//

			Console.WriteLine("Done.");
			Console.ReadLine();
		}

		static void CurrentDomain_UnhandledException(object sender, UnhandledExceptionEventArgs e)
		{
			Console.WriteLine("Entered UnhandledException handler.");
		}
	}

As I replace the “TODO:” with each type of async code I’ll “Start Without Debugging” in Visual Studio. 

If the program runs WITHOUT encountering an unhandled exception, the window will look like the following.  This can happen in two different scenarios: either the program encountered an unhandled exception that was gobbled up, and nobody will ever know, or we have implemented proper exception handling code to deal with the exception when it occurs.

[Screenshot: console output with no unhandled exception]

If the program encounters an unhandled exception then the window will look something like the following in which the task will then be to implement proper exception handling code.

[Screenshot: console output followed by an unhandled exception dialog]

Background Thread Exception Behavior

Before we start into the meat of this article, it’s important to note that both the .NET Framework version and a specific configuration setting can change the behavior of an application when an exception occurs on a background thread.  For more information on this topic see http://msdn.microsoft.com/en-us/library/ms228965.aspx.

Briefly, .NET 1.0 and 1.1 allowed background threads to throw unhandled exceptions that WOULD NOT terminate the application.  The .NET Framework would terminate the thread itself, but the application would be unaffected, and more than likely the unhandled exception would go unnoticed.

For all later .NET Framework versions this behavior can be reinstated (BAD IDEA!) using the application configuration setting:

	<configuration>
		<runtime>
			<!-- the following setting prevents the host from closing when an unhandled exception is thrown -->
		<legacyUnhandledExceptionPolicy enabled="1" />
		</runtime>
	</configuration>

None of the code that follows will have this configuration setting enabled, and we will be using .NET 4.5.  That said, it is very easy to get the same behavior on a newer .NET Framework version without the configuration setting.  In fact, it sometimes seems like this old style is still in effect.

System.Threading.Thread

Our first attempt will be to use the System.Threading.Thread object.  We’ll replace the // TODO: line with the code:

	// 1. Thread.Start
	Thread thread = new Thread(new ParameterizedThreadStart(DoSomething)) { IsBackground = false };
	thread.Start();

When this code is executed we see that an unhandled exception occurs.  Note that I set the IsBackground property to false. 

If this is set to true then no unhandled exception occurs.  What’s up with this?  I thought .NET Framework 2.0 and above didn’t swallow exceptions anymore?  Even though “technically” the .NET Framework no longer swallows exceptions, “effectively” it does UNLESS you go back at some point and “sync up” with the thread.  When using System.Threading.Thread this is done via a call to the thread’s Join method.  So if we modify the code to:

	// 1. Thread.Start
	Thread thread = new Thread(new ParameterizedThreadStart(DoSomething)) { IsBackground = false };
	thread.Start();
	thread.Join();

we see the proper behavior (an unhandled exception occurs) whether the thread is a background thread or not.  When using this method to async something, the only way to properly handle the exception is in the try/catch block in the DoSomething method itself.  Putting a try/catch block around thread.Join() does nothing, even though that pattern works with other async APIs, as we’ll see later.
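To see that concretely, here is a minimal self-contained sketch (a stripped-down variant of the program framework above, assuming .NET 4.5 defaults with no legacy policy): the catch around Join() never fires, because Join() only waits for the thread; the exception is raised, and left unhandled, on the other thread and terminates the process.

```csharp
using System;
using System.Threading;

class JoinDemo
{
	static void DoSomething(object state)
	{
		throw new InvalidOperationException();
	}

	static void Main()
	{
		Thread thread = new Thread(new ParameterizedThreadStart(DoSomething)) { IsBackground = false };
		thread.Start();
		try
		{
			thread.Join();   // only waits for the thread to finish; it does not rethrow
		}
		catch (InvalidOperationException)
		{
			// Never reached: Join() does not propagate exceptions from the
			// other thread, so the process dies with an unhandled exception.
			Console.WriteLine("This line never prints.");
		}
	}
}
```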

delegate BeginInvoke/EndInvoke

The next attempt to async the DoSomething method will use the delegate async architecture.  A delegate provides the methods Invoke(), BeginInvoke() and EndInvoke().  Using the Invoke() method would be the equivalent of just calling the DoSomething method without any async wrapping.  A call to BeginInvoke() is what will cause the DoSomething method to run asynchronously.  To run this code replace the // TODO: line with the code:

	// 2. BeginInvoke/EndInvoke
	Action<object> action = DoSomething;
	action.BeginInvoke(null, null, null);

We’re back to not seeing an unhandled exception.  We receive the message “Exception thrown in method DoSomething.” but that’s it.  No unhandled exception.  Just like with System.Threading.Thread, we have to “sync up” with the thread at some point in order for the unhandled exception to be properly generated:

	// 2. BeginInvoke/EndInvoke
	Action<object> action = DoSomething;
	action.BeginInvoke(null, action.EndInvoke, null);

and to properly handle the exception we can either modify the catch block in the DoSomething method, as with System.Threading.Thread, or wrap the EndInvoke() call in a try/catch block like so:

	// 2. BeginInvoke/EndInvoke
	Action<object> action = DoSomething;
	action.BeginInvoke(
		null,
		result =>
			{
				try
				{
					action.EndInvoke(result);
				}
				catch (Exception ex)
				{
					Console.WriteLine("Exception handled.");
				}
			},
		null);

Note that adding a Console.WriteLine() to the catch block isn’t REALLY properly handling the exception.  It’s just simulating proper handling code which would more than likely log the exception, possibly correct it and maybe even let it continue to percolate up the stack by using a throw; statement.

System.Threading.ThreadPool.QueueUserWorkItem

The next attempt to async the DoSomething method will specifically use the ThreadPool.  Some of the later methods use the ThreadPool implicitly but here we will do so explicitly.  To run this code replace the // TODO: line with the code:

	// 3. ThreadPool.QueueUserWorkItem
	ThreadPool.QueueUserWorkItem(DoSomething, null);

It’s refreshing to see that we don’t actually have to “sync up” with the thread; realistically, you couldn’t even do so.  Regardless, we find that in this specific instance the unhandled exception is properly generated.

To properly handle the exception we can use a method similar to that used with BeginInvoke/EndInvoke:

	// 3. ThreadPool.QueueUserWorkItem
	ThreadPool.QueueUserWorkItem(
		state =>
			{
				try
				{
					DoSomething(state);
				}
				catch (Exception ex)
				{
					Console.WriteLine("Exception handled.");
				}
			},
		null);

System.ComponentModel.BackgroundWorker

The next attempt to async the DoSomething method will use the BackgroundWorker class.  To run this code replace the // TODO: line with the code:

	// 4. BackgroundWorker
	BackgroundWorker worker =
		new BackgroundWorker();
	worker.DoWork += (sender, e) => DoSomething(e);
	worker.RunWorkerCompleted += (sender, e) => worker.Dispose();
	worker.RunWorkerAsync();

We find once again that in this initial case the unhandled exception is not properly generated.  With the BackgroundWorker there is no way to “sync up” without resorting to things like status-checking loops or a ManualResetEvent.  We can, however, properly handle the exception by checking the Error property in the RunWorkerCompleted handler:

	// 4. BackgroundWorker
	BackgroundWorker worker =
		new BackgroundWorker();
	worker.DoWork += (sender, e) => DoSomething(e);
	worker.RunWorkerCompleted +=
		(sender, e) =>
			{
				if (e.Error != null)
				{
					Console.WriteLine("Exception handled.");
				}

				worker.Dispose();
			};
	worker.RunWorkerAsync();

System.Threading.Tasks.Task

The next attempt to async the DoSomething method will use the .NET 4 Task.  To run this code replace the // TODO: line with the code:

	// 5. Task.Factory.StartNew
	Task.Factory.StartNew(DoSomething, null);

Lately I’ve been finding that this is the most common pattern used to async a method, most likely because it’s cool and it’s easy.  This particular pattern is why I wrote this article.  The problem is that this pattern, like the others, allows the called method to throw an exception with the application code none the wiser.  Imagine tens or hundreds of these spread throughout the code base, all randomly aborting, while the user wonders why things don’t seem to work even though there are no obvious signs of problems, i.e. exceptions or error dialogs.

If we modify this code to “sync up” with the thread like so:

	// 5. Task.Factory.StartNew
	Task task = Task.Factory.StartNew(DoSomething, null);
	task.Wait();

We find that the unhandled exception is properly generated; however, task.Wait(), like thread.Join(), kind of defeats the purpose of attempting to run the DoSomething method asynchronously.  To solve this problem use task chaining:

	Task.Factory
	    .StartNew(DoSomething, null)
	    .ContinueWith(
		    task =>
			    {
				    try
				    {
					    task.Wait();

					    // If the DoSomething method returned a result
					    // we could reference task.Result instead to
					    // trigger the exception if one occurred otherwise
					    // process the result.
				    }
				    catch (AggregateException ae)
				    {
					    ae.Handle(
							(ex) =>
								{
									Console.WriteLine("Exception Handled.");
									return true;
								});
				    }
			    });

In our case the DoSomething method is of type Action&lt;object&gt; and not Func&lt;object, TResult&gt;, i.e. it doesn’t return a result.  If it did return a result, we could reference task.Result instead of calling task.Wait() to “sync up” with the task and trigger the exception.  In this code we use the AggregateException.Handle method to handle the exceptions.  The .NET 4 Task wraps exceptions in AggregateException, which differs from System.Exception in that it has an InnerExceptions property (note that it’s plural) in addition to the inherited InnerException property.  An AggregateException can contain multiple exceptions (one from each task that was chained), and AggregateException.Handle calls the specified delegate for each of those exceptions.
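As a standalone illustration of that plural InnerExceptions collection (a hedged sketch, independent of the DoSomething example), waiting on several faulted tasks collects all their exceptions into one AggregateException, and Handle visits each inner exception in turn:

```csharp
using System;
using System.Threading.Tasks;

class AggregateDemo
{
	static void Main()
	{
		// Two tasks that each fault with a different exception type.
		Task t1 = Task.Factory.StartNew(() => { throw new InvalidOperationException("first"); });
		Task t2 = Task.Factory.StartNew(() => { throw new ArgumentException("second"); });

		try
		{
			Task.WaitAll(t1, t2);   // both faults surface as a single AggregateException
		}
		catch (AggregateException ae)
		{
			// Handle invokes the delegate once per inner exception;
			// returning true marks that exception as handled.
			ae.Handle(ex =>
			{
				Console.WriteLine("Handled: " + ex.Message);
				return true;
			});
		}
	}
}
```

Any inner exception for which the delegate returns false is rethrown in a new AggregateException, so you can selectively handle only the exception types you expect.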

Since the DoSomething method doesn’t return a result, there is nothing to process after it’s done, so we can tell the .ContinueWith task to execute only if a fault occurred, like so:

	// 5. Task.Factory.StartNew
	Task.Factory
	    .StartNew(DoSomething, null)
	    .ContinueWith(
		    task =>
			    {
					try
					{
						task.Wait();
					}
					catch (AggregateException ae)
					{
						ae.Handle(
							(ex) =>
							{
								Console.WriteLine("Exception Handled.");
								return true;
							});
					}
			    },
		    TaskContinuationOptions.OnlyOnFaulted);