Sideloading Windows Store Apps – Purchase the Key

Back in April, Microsoft announced that it was making it much easier to obtain a sideloading key for deploying “internal” line of business Windows Store applications. Until then, it was ridiculously prohibitive to acquire a key, so the sideloading story was crippled.  The above link (and this one) have the details, but suffice it to say that you are now able to get a sideloading key for $100. Sounds easy, right?

I set out to buy a key for us to use at InterKnowlogy, but … I searched high and low for information on WHERE to buy such a key. We get our volume licensing via our Microsoft Gold Partnership, and that’s not one of the qualifying methods for already having a sideloading key.  WHERE can I buy the key?

After many calls, I found that the Microsoft Volume License Service Center does not sell it, but instead recommends a volume license re-seller.  (I’m not trying to buy a volume license, just a single license for unlimited sideloading.)  I assume there are lots of volume license re-sellers, but the one I ended up with was Software House International (SHI).

LONG story short:  this key is being offered as part of the Open License Program, which allows you to set up an account even though you haven’t been and won’t be buying LOTS of (volume) licenses.

Set up the account, purchase the “Windows Sideloading Rights” license (QTY 1), part #4UN-00005.

No good.  You must buy at least 5 items to qualify for a “volume license”.  WHAT?  I only need a single license that gives me UNLIMITED sideloads.  Why would I need more than one?

The fix (salesman’s idea): find the cheapest thing in their catalog and buy 4 of them:  “Microsoft DVD Playback Pack for Windows Vista Business” (QTY 4).  $4.50 each!!

Make the purchase, $111.58, and now I have some sweet DVD playback software to give away to developers as prizes!  :)  Download the key, and next blog post, I’ll show you how to use the key to sideload.

Really cool that Microsoft made it cheap to get a sideloading license, but the mechanics of the process (at least to purchase) are still pretty wonky.


WPF Round Table Part 1: Simple Pie Chart


Click here to download code and sample project

Over the years I have been presented with many different situations while programming in WPF that required a custom Control or class to be created to accommodate them. Given all the various solutions I have built up over the years, I thought some of them might be helpful to someone else. During this ongoing series I am going to post some of the more useful classes I have made in the past.

Simple Pie Chart

In one project I was assigned to redesign, there was data coming in that we wanted represented in the form of a pie chart. Initially, we simply displayed the information as one of many static pie chart images. A specific image would get selected based on which percentage it was closest to. Although this solved our immediate needs, I believed generating the chart with a GeometryDrawing would make it much more accurate, and it should not be too difficult to create. My immediate goal was to try to represent some type of pie chart in XAML to get an idea of how it could be represented dynamically. Initial searching led to this solution involving dividing a chart into thirds. Following the example given will produce a subdivided geometric ellipse:

Pie Chart Example - 1

Programmatically Build Chart

Unfortunately, using strictly XAML will not work when attempting to create a pie chart dynamically. It is definitely a great starting point for how we could create this Control, but I needed a better understanding of how to create geometric objects programmatically. Doing some more searching, I came across this Code Project article that describes how to create pie charts from code. My pie chart will be much simpler, containing only two slices and taking in a percentage value that determines how the slices subdivide. I still use an Image to present the drawn geometry, and begin by creating the root elements:

_pieChartImage.Width = _pieChartImage.Height = Width = Height = Size;

var di = new DrawingImage();
_pieChartImage.Source = di;

var dg = new DrawingGroup();
di.Drawing = dg;

Since I know my starting point of the pie will always be at the top I then calculate where my line segment will end (the PieSliceFillers are brushes representing the fill color):

var angle = 360 * Percentage;
var radians = ( Math.PI / 180 ) * angle;
var endPointX = Math.Sin( radians ) * Height / 2 + Height / 2;
var endPointY = Width / 2 - Math.Cos( radians ) * Width / 2;
var endPoint = new Point( endPointX, endPointY );

dg.Children.Add( CreatePathGeometry( InnerPieSliceFill, new Point( Width / 2, 0 ), endPoint, Percentage > 0.5 ) );
dg.Children.Add( CreatePathGeometry( OuterPieSliceFill, endPoint, new Point( Width / 2, 0 ), Percentage <= 0.5 ) );

My CreatePathGeometry method creates both the inner and outer pie slices using a starting point, the point where the arc will end, and a boolean for ArcSegment to determine how the arc should get drawn if greater than 180 degrees.

private GeometryDrawing CreatePathGeometry( Brush brush, Point startPoint, Point arcPoint, bool isLargeArc )
{
	var midPoint = new Point( Width / 2, Height / 2 );

	var drawing = new GeometryDrawing { Brush = brush };
	var pathGeometry = new PathGeometry();
	var pathFigure = new PathFigure { StartPoint = midPoint };

	var ls1 = new LineSegment( startPoint, false );
	var arc = new ArcSegment
	{
		SweepDirection = SweepDirection.Clockwise,
		Size = new Size( Width / 2, Height / 2 ),
		Point = arcPoint,
		IsLargeArc = isLargeArc
	};
	var ls2 = new LineSegment( midPoint, false );

	drawing.Geometry = pathGeometry;
	pathGeometry.Figures.Add( pathFigure );

	pathFigure.Segments.Add( ls1 );
	pathFigure.Segments.Add( arc );
	pathFigure.Segments.Add( ls2 );

	return drawing;
}

A better way to visualize this is through the equivalent XAML representation of the same geometry.

And with that we are able to create quick and easy pie charts as shown here:

Pie Chart Example - 2
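As a sanity check, the endpoint trigonometry above can be reproduced outside of WPF. Here is a minimal sketch in Python (illustrative only, not the Control's code; it assumes a square chart with Size = 100, matching the Width/Height math in the C# above):

```python
import math

def slice_end_point(percentage, size=100):
    """Mirror of the C# endpoint math: 0% is the top of the circle
    and the angle sweeps clockwise."""
    angle = 360 * percentage
    radians = math.pi / 180 * angle
    end_x = math.sin(radians) * size / 2 + size / 2
    end_y = size / 2 - math.cos(radians) * size / 2
    return end_x, end_y

# 25% of the pie ends at the right edge, 50% at the bottom.
print(tuple(round(v, 6) for v in slice_end_point(0.25)))  # (100.0, 50.0)
print(tuple(round(v, 6) for v in slice_end_point(0.5)))   # (50.0, 100.0)
```

Because the 0% point is the top of the circle and the sweep is clockwise, sin feeds the X coordinate and cos the Y, which is exactly why the C# above uses Sin for endPointX and Cos for endPointY.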

Multi Pie Chart

Although this is suitable for a two-sided pie chart, what if you wanted more? That process is pretty straightforward based on what we already created. By including two dependency properties to represent our collections of data and brushes, we only need to rewrite how the segments are created:

var total = DataList.Sum();
var startPoint = new Point( Width / 2, 0 );
double radians = 0;

for ( int i = 0; i < DataList.Count; i++ )
{
	var data = DataList[i];
	var dataBrush = GetBrushFromList( i );
	var percentage = data / total;
	Point endPoint;
	var angle = 360 * percentage;
	if ( i + 1 == DataList.Count )
	{
		endPoint = new Point( Width / 2, 0 );
	}
	else
	{
		radians += ( Math.PI / 180 ) * angle;
		var endPointX = Math.Sin( radians ) * Height / 2 + Height / 2;
		var endPointY = Width / 2 - Math.Cos( radians ) * Width / 2;
		endPoint = new Point( endPointX, endPointY );
	}
	dg.Children.Add( CreatePathGeometry( dataBrush, startPoint, endPoint, angle > 180 ) );

	startPoint = endPoint;
}

As you can see, the main difference is now we are accumulating the radians as we traverse the list to take into account any number of data objects. The result allows us to add any number of data items to our pie chart as shown here:
Pie Chart Example - 3
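The accumulation logic can also be sketched standalone. The following Python model (an illustrative sketch with an assumed Size of 100, not the actual Control code) mirrors the loop above and returns the start point, end point, and isLargeArc flag handed to CreatePathGeometry for each slice:

```python
import math

def slice_geometries(data, size=100):
    """Mimic the multi-slice loop: accumulate radians across the list
    and force the final slice to close exactly at the top."""
    total = sum(data)
    start = (size / 2, 0.0)
    radians = 0.0
    slices = []
    for i, value in enumerate(data):
        angle = 360 * value / total
        if i + 1 == len(data):
            end = (size / 2, 0.0)  # last slice closes the pie exactly
        else:
            radians += math.pi / 180 * angle
            end = (math.sin(radians) * size / 2 + size / 2,
                   size / 2 - math.cos(radians) * size / 2)
        # (start point, end point, isLargeArc), as passed to CreatePathGeometry
        slices.append((start, end, angle > 180))
        start = end
    return slices

# Three slices of 25%, 25%, and 50%; each slice starts where the last ended.
for s in slice_geometries([1, 1, 2]):
    print(s)
```

Accumulating radians (rather than recomputing them per slice) is what lets each slice pick up exactly where the previous one left off, and forcing the last endpoint to the top avoids a floating-point gap where the pie closes.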


Although I did not get as much use out of this class as I would have liked, developing it helped me gain experience in manipulating geometry objects, which does not happen often enough.

What is CORS?

There are lots of instances in which an app needs to make a GET/POST request to another domain (a domain different from the one where the resource originated). When the web app makes such a request, the browser blocks the response with an “Access-Control-Allow-Origin” error. Then you ask yourself: what now?

One solution is CORS (Cross-Origin Resource Sharing), which allows resources (like JavaScript) to make cross-origin requests.
Here is an example of how to add a CORS rule that allows requests to Azure storage tables using the Azure SDK.

1. Build the connection string

string connectionString = "DefaultEndpointsProtocol=https;" +
    "AccountName={account name/storage name};" +
    "AccountKey={account key}";

2. Create the CloudTableClient

CloudStorageAccount storageAccount = CloudStorageAccount.Parse( connectionString );
CloudTableClient client = storageAccount.CreateCloudTableClient();

3. Add CORS Rule (using * as a wildcard)

CorsRule corsRule = new CorsRule()
{
  AllowedHeaders = new List<string> { "*" },
  AllowedMethods = CorsHttpMethods.Connect | CorsHttpMethods.Delete | CorsHttpMethods.Get | CorsHttpMethods.Head | CorsHttpMethods.Merge
	| CorsHttpMethods.Options | CorsHttpMethods.Post | CorsHttpMethods.Put | CorsHttpMethods.Trace, 
  //If we will only be calling Query Tables, we could allow just the Get verb
  AllowedOrigins = new List<string> { "*" }, //In production this should be the URL of our application
  ExposedHeaders = new List<string> { "*" },
  MaxAgeInSeconds = 1 * 60 * 60, //Let the browser cache it for an hour
};

4. Add rules to client

ServiceProperties serviceProperties = client.GetServiceProperties();
CorsProperties corsSettings = serviceProperties.Cors;
corsSettings.CorsRules.Add( corsRule );
//Save the rule
client.SetServiceProperties( serviceProperties );
  • After step 4, there should be a CORS rule associated with the account name.
    To double-check which CORS rules exist for that account, we can read them back:

    ServiceProperties serviceProperties = client.GetServiceProperties();
    CorsProperties corsSettings = serviceProperties.Cors;

NOTE: If we need to add a CORS rule for blobs, we just replace CreateCloudTableClient():
CloudBlobClient client = storageAccount.CreateCloudBlobClient();
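For intuition about what the rule buys you, here is a rough Python sketch of the check a CORS-aware service performs against a rule like the one above (purely illustrative; the field names and origins are my own, not the Azure SDK's):

```python
def cors_allowed(origin, method, rule):
    """Return True if a cross-origin request matches a CORS rule.
    The rule dict loosely mirrors the fields on the Azure CorsRule."""
    origin_ok = "*" in rule["allowed_origins"] or origin in rule["allowed_origins"]
    method_ok = method in rule["allowed_methods"]
    return origin_ok and method_ok

# Wildcard rule, like the example above: any origin may call.
rule = {
    "allowed_origins": ["*"],
    "allowed_methods": ["GET", "POST", "PUT", "DELETE", "HEAD",
                        "MERGE", "OPTIONS", "CONNECT", "TRACE"],
}
print(cors_allowed("http://myapp.example", "GET", rule))   # True

# A locked-down rule only admits the listed origin and verb.
strict = {"allowed_origins": ["http://myapp.example"], "allowed_methods": ["GET"]}
print(cors_allowed("http://evil.example", "GET", strict))  # False
```

This is why the comments in the C# above suggest narrowing AllowedOrigins and AllowedMethods in production: the wildcard rule answers yes to every origin and verb.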

Observations on Xamarin Evolve 2014

Last week I had the privilege of attending Xamarin’s annual Evolve conference in Atlanta. There were great talks, great toys, and great food, and I had a blast. A few random high-level thoughts and observations on the state of the platform, the community, and Miguel’s pronunciation of popular acronyms:

  • Previously, I would have characterized Xamarin’s pitch as “You’re a C# / .NET developer? Leverage your existing knowledge to write iOS and Android apps.” Now, I’m convinced they’d like to present it more along the lines of “You want to do cross-platform mobile development? Learn C# / .NET and use our platform, because it’s the best.” The ambition and the roadmap are aggressive and impressive.
  • The new announcements were all pretty great. Sketches are like Swift’s Playgrounds for C#, Insights gives you robust cross-platform analytics for very little code overhead, and the new Android simulator is so much faster than Google’s it’s embarrassing. Add in the recently-announced Test Cloud that lets you run automated tests on actual devices over the network, and the amount this team has added to their product in the last twelve months or so is downright impressive.
  • The culture of the company and the community around it occupies a unique position at the intersection of all the different platforms and communities it supports. More corporate- and enterprise-friendly than Apple, more hip and independent than Microsoft, and less nerdy and creepy than Google. There was a pretty even split of iOS and Android phones among attendees, and almost all of the presenters used Macs (though many of those Macs were running Windows). Eclectic, independent, high-energy, and unique.
  • The production value and logistics of the conference were phenomenal, almost on the level of WWDC, Google I/O, or Build. The keynote was slick and very well-produced, and the after-hours social events throughout downtown Atlanta were a blast. And did I mention how good the food was?
  • The pace with which the engineering team has to run is intense. I was told by multiple Xamarin engineers that the road to getting full coverage of the hundreds of new APIs in iOS 8 on launch day involved many new hires and many sleepless nights. When you step back and look at the effort they’re undertaking, it’s both super-brilliant and more than a little insane. I would argue this reflects the personalities of its founders, but what do I know?
  • PCL and NuGet are properly pronounced “Pickle” and “Noojay”, respectively. Miguel said so.

Fantastic conference, great people, and I’m already busy hacking away on a project using some of the new stuff here at IK. If you’re interested in cross-platform mobile development, and especially if you have any background in Microsoft developer technologies, you owe it to yourself to check out Xamarin. And if you’re an engineer who’s ready to build awesome stuff with the latest tools like these, we’re always looking for new InterKnowlogists.

MVC Series Part 3: Miscellaneous Issues


In my first MVC series post, I discussed how to dynamically add items to a container using an MVC controller. Afterwards, I went through the process of unit testing the AccountController. The main purpose of this series is to explain some troublesome hiccups I ran into, given that I did not come from a web development background. In this post I want to highlight a few of the minor issues I hit while developing in MVC. One of them is not even related to MVC specifically, but it still caused enough of a headache that hopefully someone reading this can be spared the confusion.

HttpContext in Unit Tests

When I first started unit testing controllers, the HttpContext would return null when accessed. The reason is that controllers never assign it on creation; instead, it is normally supplied by ASP.NET via the ControllerBuilder class. In my last post about unit testing the AccountController, I described a way to mock out the HttpContext, but in the beginning I wanted to keep my test project as lean as possible. Since I had not yet approached testing the AccountController and did not want to include a package just to mock out an object in order to resolve NullReferenceExceptions, I found this clever post on how to quickly bypass the issue. By providing the HttpContext with a simple Url, I no longer received an exception and was able to test the other components of a controller. I decided to wrap this functionality inside a class:

public class TestHttpContext : IDisposable
{
	public TestHttpContext()
	{
		// Any well-formed URL works here; it just needs to be parseable.
		HttpContext.Current = new HttpContext(
			new HttpRequest( null, "http://localhost", null ),
			new HttpResponse( null ) );
	}

	public void Dispose()
	{
		HttpContext.Current = null;
	}
}

Since I am creating a new controller for each test, I needed the HttpContext to be recreated and destroyed each time. So, I went ahead and placed this inside a base test class that all controller tests will inherit:

public class TestBase
{
	private TestHttpContext _testContext;

	[TestInitialize]
	public void Initialize()
	{
		_testContext = new TestHttpContext();
	}

	[TestCleanup]
	public void TestCleanup()
	{
		_testContext.Dispose();
	}
}
Mocking out the HttpContext would make for better unit testing standards, but my minimalist personality found this solution too good to pass up for the time being.

DbContext Non-Thread Safe

After updating my project to use Unity, I decided to take better advantage of the dependency injection design pattern by making the DbContext a singleton, to prevent having to constantly re-initialize the connection to our Azure database. Soon after this change it became apparent our website was very inconsistent when trying to write to the database. Since many changes were occurring during this time, I did not immediately suspect the singleton DbContext as the cause until I ran into this post.

So it seemed I could still gain a performance boost by creating the DbContext only once per request, but how could I implement this using dependency injection? “Fortunately”, a new version of Unity provides a LifetimeManager catered specifically to this, called PerRequestLifetimeManager.

This solution reduced my refactoring costs to close to zero, which was very desirable at that point in the project, where time constraints were tightening. Later, I did more thorough research into DbContext, and you will notice this is why I put ‘Fortunately’ in quotes. As this MSDN post mentions, using PerRequestLifetimeManager with the DbContext is bad practice: it can lead to hard-to-track bugs and goes against the MVC principle of registering only stateless objects with Unity. Although our application never ran into issues after implementing this LifetimeManager, in the future it is best to simply create and destroy the DbContext every time.

Ajax Caching in IE

This last problem is not so much an MVC issue as a cross-browser bug. And it is not so much a bug as a matter of understanding that each browser follows different specifications. As I mentioned in my post on creating Dynamic Items, I was using ajax calls to dynamically modify the DOM of a container. Throughout the project, though, we would intermittently hear of bugs when attempting to add an item and save. Each time the bug re-occurred, I would view the problem area, look stupefied at the cause of the issue, check in a fix, and the problem would go away, only to show up again a week later. What was going on here? Especially since the files in this area had been untouched for weeks!

The problem? Internet Explorer and its aggressive caching. The other browsers are not this adamant about caching ajax calls, at least during development testing. And the solution to the problem was a bit demoralizing:

$.ajax({
	async: false,
	cache: false,
	url: '/Controller/Action',
}).success(function (partialView) {
	// do action
});

One line of code solved weeks of headache. Although any fairly seasoned web developer would probably suspect the browser, as someone who has only ever had to deal with one set of specifications (.NET/WCF/EF/WPF/SQL), our team and I were not used to meticulously testing each new feature on every available browser. This meant someone would find the bug in IE, but in retesting they may have coincidentally retested the feature in Chrome. Or, even worse, republishing the site caused the cache to reset, so retesting the feature would pass the first time, and we would not realize how broken it still was until days later. All this means is that we need a different method for testing web projects, and that we must keep building our understanding of how web development can behave.
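What `cache: false` actually does is append a unique `_=<timestamp>` query parameter to each request, so the browser never sees the same URL twice. A small Python model of that cache-busting behavior (illustrative only, with a counter standing in for the timestamp):

```python
import itertools

_counter = itertools.count()  # stands in for jQuery's timestamp

def request(url, cache, store):
    """Simulate a browser cache keyed by the full URL. cache=False
    mimics jQuery appending a unique '_=' query parameter."""
    if not cache:
        url = f"{url}?_={next(_counter)}"
    if url in store:
        return store[url], True           # served stale from cache
    store[url] = f"response for {url}"
    return store[url], False              # went to the server

store = {}
print(request("/Controller/Action", True, store)[1])   # False - first hit goes to the server
print(request("/Controller/Action", True, store)[1])   # True  - IE-style cached response
print(request("/Controller/Action", False, store)[1])  # False - cache busted
```

Because the busted URL is unique every time, the cached copy can never be matched, which is exactly why the intermittent "fixed, then broken again" behavior disappeared.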


Working in MVC has been a great learning experience and has helped continue my growth in developing for the web. Despite my complaints and the hair-splitting, alcohol-consuming problems, I do enjoy the breadth and stability MVC brings to the web world. I will continue my progress in the realms of web development and hope these small roadblocks will become less frequent, at the very least for my sanity’s sake.

This Is Why You Don’t Hire Good Developers ->

Laurie Voss, CTO of npm, nailing it over on Quartz: technical interviews are seriously hard, for both sides of the table. Here at IK we’ve put a lot of time and thought into how we screen engineering candidates, and our process is constantly evolving, but a few highlights that match up with Laurie’s thoughts:

  • No algorithm quizzes. Ability to memorize trivia answers, while possibly indicative of intellectual passion and capacity, just doesn’t convey enough information about useful engineering skills – and quizzes like this can make an otherwise-fantastic candidate nervous and uncomfortable.
  • No on-the-spot white boarding or over-the-shoulder code sessions. Again, stress and time constraints that are artificial and don’t give you an actual picture of engineer productivity. Especially short-sighted are exercises that intentionally limit access to Google or Stack Overflow: how productive would you be on a daily basis without access to both?
  • Hiring for fit: Laurie does a great job of calling out what this doesn’t mean, which is hiring for friendship. Cultural fit is hugely important and difficult to define, but it’s easy to fall into the trap of hiring people that “look like they belong”, instead of finding people who bring something new to the table – while still fitting around it.

I’ve worked on a number of engineering teams throughout my career, and happen to have been involved with interviewing on every single one of them. (It’s pure coincidence that this has frequently involved a free lunch with the candidates, I swear.) Growing your team is one of the most important – and most difficult – tasks you face as a company, as it literally reshapes who you are and how you get things done. We’re constantly working at improving our own process, but I’m proud to be a part of such an awesome team of talented folks – something that’s only possible because of how much work we put in to our engineering interview process.

Want to see for yourself just how awesome our technical interviews are? We’re always looking for smart people who get things done.

P.S. Laurie’s piece is seriously great. You should read the whole thing. ->

Unit Testing ASP.NET WebAPI Controllers

Having just written some ASP.NET WebAPI controllers, I then needed to create Unit Tests for them.  The tests would need to exercise model validation via DataAnnotation and Json.NET attributes, authentication for all HTTP methods except GET (did I mention IIS was involved here), and in the process exercise an underlying SQL data layer (see UNIT TESTING USING LOCALDB).  I also wanted to write as few Unit Tests as possible and limit mocking, but still exercise the whole pipeline.

Some very important things I wanted to accomplish were:

  • Exercise the complete pipeline but without the use of IIS (in any flavor) and without SQLServer (although I guess you could argue that LocalDB is a version of SQLServer)
  • Write the unit tests using code similar to what I would be using in Production code.  This requirement meant I could not use some of the alternate methods for testing Controllers by using a Controller context or special code to make sure DataAnnotation validation was occurring.

The end result eventually boiled down primarily to a single method which makes use of an in memory HttpServer, a reference to the static WebApiConfig class from the Web project containing the controllers and some code to add Basic Auth as needed to the request.  The only configuration necessary in the Unit Test app.config was the <system.web><authentication/></system.web> information necessary to enable authentication and the <connectionString/> section for access to the LocalDB database used in the Unit Tests.

The main method looks like:

    protected static Tuple<HttpStatusCode, ObjectContent> SendRequest(HttpMethod method, Uri uri, HttpContent content = null, 
	string username = null, string password = null)
    {
	HttpConfiguration config = new HttpConfiguration {IncludeErrorDetailPolicy = IncludeErrorDetailPolicy.Always};
	// The WebApiConfig class is from the Web project being tested.
	WebApiConfig.Register(config);
	HttpServer server = new HttpServer(config);
	using (HttpMessageInvoker client = new HttpMessageInvoker(server))
	{
	    using (HttpRequestMessage request = new HttpRequestMessage(method, uri.ToString()) { Content = content})
	    {
		if (!string.IsNullOrWhiteSpace(username) && !string.IsNullOrWhiteSpace(password))
		{
		    request.Headers.Authorization = AuthenticationHeaderValue.Parse(
			"Basic " +
			Convert.ToBase64String(Encoding.ASCII
					.GetBytes(string.Format("{0}:{1}", username, password))));
		}

		using (HttpResponseMessage response = client.SendAsync(request, CancellationToken.None).Result)
		{
		    return new Tuple<HttpStatusCode, ObjectContent>(response.StatusCode, response.Content as ObjectContent);
		}
	    }
	}
    }
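The Authorization header built above is just the string "username:password" Base64-encoded and prefixed with "Basic ". A quick standalone check of that encoding (in Python rather than the post's C#):

```python
import base64

def basic_auth_header(username, password):
    """Equivalent of the C# "Basic " + Convert.ToBase64String(
    Encoding.ASCII.GetBytes(username + ":" + password))."""
    token = base64.b64encode(f"{username}:{password}".encode("ascii")).decode("ascii")
    return "Basic " + token

print(basic_auth_header("user", "pass"))  # Basic dXNlcjpwYXNz
```

Since Basic auth is only an encoding, not encryption, it is one more reason the tests (and production) should stay on HTTPS.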

This allowed me to write fairly simple code in the Unit Tests themselves while still allowing the Unit Test to exercise model validation, authentication, data access layer AND the Controller methods themselves such as:

    [TestMethod]
    public void GetSingleEntity_That_Does_Not_Exist_Should_Return_NotFound()
    {
	// TestInitialize should have cleared the database.
	Tuple<HttpStatusCode, ObjectContent> result = SendRequest(HttpMethod.Get, new Uri("http://localhost/api/entity/1"));
	Assert.AreEqual(HttpStatusCode.NotFound, result.Item1);
    }

Unit Testing using LocalDB


I recently wrote some code with the typical data access layer as an interface to SQLServer.  The code I wrote doesn’t include a UI and has various operations occurring in its pipeline including authentication, model validation, data aggregation and so forth.  So now that the code is written it’s time for Unit Testing right?

Anybody that has had to write Unit Tests has at some point encountered issues with testing scenarios that include some kind of server.  A typical server issue when associated with a Unit Test is the need to build and/or reset the state of the server so that data is “just right” so that the Unit Test can perform its job.  That server might be IIS, SQLServer, SharePoint or something else but as soon as any server is introduced into the mix there is an immediate desire to pull back and start mocking things to remove the server issues. 

Unfortunately that means (when dealing with a database server) that stored procedures, table/column definitions and things like unique or check constraints, SQL data access layer and transaction/nested transaction commit/aborts won’t get tested.

Desiring to test these things along with everything else in the pipeline I thought that LocalDB might be the answer to my issues.  This article is about implementing that.

Generating/Updating the LocalDB Database


To start the process I created everything I needed in a local SQLServer instance.  This makes it easy to implement and test since there are very good tools to do this.

Once I had all of it working in my local SQLServer, I created a Visual Studio 2013 SSDT ‘SQL Server Database Project’.  The great thing about this project is how easy it makes moving changes from one database to another.  In my case, every time I made a change to the database, I needed to move the changes to the VS database project (source control), the SQL Azure database AND my Unit Test LocalDB.  Doing all of this for any change takes less than 5 minutes each time.

One quirk I ran into with the ‘SQL Server Database Project’ is that schema compare wouldn’t connect to the LocalDB that was in my Unit Test project.  I eventually ended up leaving the LocalDB in the APP_DATA folder in a Web project where I would make the updates and then file copy the database into the Unit Test project.

The database files are stored at the root of the Unit Test project.  I would prefer to store them in a folder but idiosyncrasies with the [DeploymentItemAttribute] and changes in behavior on how the Unit Test runs in Release versus Debug caused me to just leave it in the root of the project and configure the [DeploymentItemAttribute] to also copy the database files into the root of the test location rather than a folder.

Configure the LocalDB Database


Mark the database files as ‘Content’ and ‘Copy Always’, and attribute each Unit Test class with a [DeploymentItemAttribute] for each database file.


The connection string I left in the app.config for the Unit Test project, and it presented the next challenge.  When a Unit Test is run, the actual on-disk location of the database file will vary.  Combine this with the need to use AttachDbFilename in the LocalDB connection string, and you could create some interesting code pulling out the connection string, figuring out directories and using string.Format to doctor the connection string before use.  However, the location in the code that actually pulls the connection string from the configuration was deep within the SQL data layer, and I didn’t want to modify that code to work with both Unit Test and Production.  Thankfully I found the answer: combining ‘|DataDirectory|’ in the connection string in the app.config with the following code in each test class solved that problem.

	[ClassInitialize]
	public static void ClassSetup(TestContext context)
	{
		AppDomain.CurrentDomain.SetData("DataDirectory",
			Path.Combine(context.TestDeploymentDir, string.Empty));
	}

Reset Database State for Each Test


So now the Unit Test can use the LocalDB but I also need to reset the database state before each Unit Test runs.  I could figure out a scenario like detaching the database (if it’s attached), recopying the database files and reattaching it before each test but I thought it would be easier to just use the same database and in [TestInitialize] I could just truncate all of the tables. 

Unfortunately all of the tables have identity columns, foreign keys, check constraints and all of the usual things you find in a database.  This meant I couldn’t run a SQL script in [TestInitialize] to just truncate all of the tables.

I then decided I’d delete all of the rows from each table and use DBCC CHECKIDENT to reset the identity columns so I could guarantee row ids in objects that were inserted into the SQL tables.  This led me down an interesting path.

Look at the documentation on DBCC CHECKIDENT and you’ll find that after “DBCC CHECKIDENT(‘MyTable’, RESEED, 0)”, the next row inserted is supposed to use the reseed value plus the current increment, unless the table has never contained rows (or was truncated), in which case it uses the reseed value itself.

That documented behavior is inconsistent with what I observed on SQLServer 2014 and LocalDB v11.0.  I didn’t test with any other versions of SQLServer or LocalDB so I don’t know if they have these issues as well, but the actual behavior (you can decide for yourself which SQL is obeying the documentation, as it’s still not clear to me) when using “DBCC CHECKIDENT(‘MyTable’, RESEED, 0)” after ‘DELETE FROM MyTable’ is:

  • SQLServer 2014 – The next row inserted has a row id of 1
  • LocalDB v11.0 – The next row inserted has a row id of 0!!!!

What?  I didn’t even know it was possible to have a Row ID of 0.  After many trials and tribulations and wondering if I needed to rethink using LocalDB I came up with the following SQL script that is run in [TestInitialize] (it runs before every test):




BEGIN TRY
	DELETE FROM MyTable
	DBCC CHECKIDENT('MyTable', RESEED, 0)

	-- Probe the actual reseed behavior: insert a test row and
	-- check which identity value it received.
	DECLARE @ID_TO_CHECK INT
	INSERT INTO MyTable(Title) VALUES('test')
	SET @ID_TO_CHECK = SCOPE_IDENTITY()
	DELETE FROM MyTable
	IF (@ID_TO_CHECK > 0)
		DBCC CHECKIDENT('MyTable', RESEED, 0)	-- SQLServer-style: next row will be 1
	ELSE
		DBCC CHECKIDENT('MyTable', RESEED, 1)	-- LocalDB-style: next row will be 1

	-- Do more tables
END TRY
BEGIN CATCH
	DECLARE @ErrorMessage NVARCHAR(4000)
	DECLARE @ErrorSeverity INT
	DECLARE @ErrorState INT

	SELECT @ErrorMessage = ERROR_MESSAGE(), 
		   @ErrorSeverity = ERROR_SEVERITY(), 
		   @ErrorState = ERROR_STATE()

	RAISERROR(@ErrorMessage, @ErrorSeverity, @ErrorState)
END CATCH
This makes sure that when the test runs the next row that is inserted into the table will have a Row ID of 1 (NOT 0!!).
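The logic of that workaround can be modeled outside SQL. In this Python sketch (an illustration, not SQLServer itself), the two observed reseed behaviors are stand-in functions, and the probe-and-branch from the script picks a reseed value that makes the first real row 1 either way:

```python
def next_id_sqlserver(reseed_value):
    # Observed SQLServer 2014 behavior: next row gets reseed + increment (1)
    return reseed_value + 1

def next_id_localdb(reseed_value):
    # Observed LocalDB v11.0 behavior: next row gets the reseed value itself
    return reseed_value

def reset_table(next_id):
    """Mimic the [TestInitialize] script: reseed to 0, insert a probe row,
    then branch on the identity value the probe received."""
    probe_id = next_id(0)      # INSERT after DBCC CHECKIDENT(..., RESEED, 0)
    if probe_id > 0:
        reseed = 0             # SQLServer-style: next row will be 0 + 1
    else:
        reseed = 1             # LocalDB-style: next row will be exactly 1
    return next_id(reseed)     # identity of the first real row

print(reset_table(next_id_sqlserver))  # 1
print(reset_table(next_id_localdb))    # 1
```

Probing the behavior at runtime instead of hard-coding one reseed value is what lets the same [TestInitialize] script work on both engines.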

devLink 2014: Presentation Materials

Super stoked to present at devLink this year. I’m trying to get with the times and provide my materials beforehand so you can get your hands on them before you forget about them. You can find everything from my slide deck to code in my GitHub repos.

Crawl Walk Talk – Windows Phone App Lifecycle and Cortana API

Repo: CrawlWalkTalk

Master Windows 8.1 Location and Proximity Capabilities

Repo: WindowsLocationAndProximity

For those that see this before I hope you attend, and for all those that attend thank you so much for coming and please let me know what you thought of the presentation and materials. I’m always trying to improve. For those that find these materials after the presentation please check them out and let me know if I can answer any questions you may have!

MVC Series Part 2: AccountController Testing


Click here to download code and sample project

In my first post of the series, I explained the perils and pitfalls that I had to overcome with dynamically adding items. One of the next problems I ran into was unit testing the AccountController – more specifically, attempting to represent the UserManager class. Since unit testing is a fundamental necessity for any server project, testing this controller could not be skipped.

Attempting to Test

So, let’s first create a test class for the AccountController and include a simple test for determining if a user was registered. Here is how my class first appeared:

[TestClass]
public class AccountControllerTest
{
	[TestMethod]
	public void AccountController_Register_UserRegistered()
	{
		var accountController = new AccountController();
		var registerViewModel = new RegisterViewModel
		{
			Email = "",
			Password = "123456"
		};

		var result = accountController.Register(registerViewModel).Result;
		Assert.IsTrue(result is RedirectToRouteResult);
		Assert.IsTrue( accountController.ModelState.All(kvp => kvp.Key != "") );
	}
}

When running the unit test, I get a NullReferenceException thrown when attempting to access the UserManager. At first I assumed this was due to not having a UserManager created, but debugging at the location of the thrown exception led me to this:

public ApplicationUserManager UserManager
{
	get { return _userManager ?? HttpContext.GetOwinContext().GetUserManager<ApplicationUserManager>(); }
	private set { _userManager = value; }
}

The exception is actually getting thrown on the HttpContext property, which is part of ASP.NET internals. We cannot assign HttpContext directly on a controller since it is read-only, but the ControllerContext on it is not, as explained here. We can mock this easily enough by installing the Moq NuGet package. After installing the package, we place the initialization of our AccountController into a test initialization method that gets called before every unit test:

private AccountController _accountController;

[TestInitialize]
public void Initialization()
{
	var request = new Mock<HttpRequestBase>();
	request.Setup( r => r.HttpMethod ).Returns( "GET" );
	var mockHttpContext = new Mock<HttpContextBase>();
	mockHttpContext.Setup( c => c.Request ).Returns( request.Object );
	var mockControllerContext = new ControllerContext( mockHttpContext.Object, new RouteData(), new Mock<ControllerBase>().Object );

	_accountController = new AccountController
	{
		ControllerContext = mockControllerContext
	};
}

Now when we run our test, we no longer have to worry about the HttpContext, but there is still another NullReferenceException being thrown. This time it comes from the call to ‘GetOwinContext’.

Alternative Route

At this point, attempting to mock out all of HttpContext’s features seems like a never-ending road. All we really want is the ability to use the UserManager to register a user. To do that, we will need to mock out the IAuthenticationManager. This is no easy feat considering how deeply embedded the UserManager is within the AccountController. Fortunately, the post mentioned here points in the right direction for substituting the ApplicationUserManager.

What we want to do is create a new class, called AccountManager, that will act as an access point to the UserManager. The AccountManager will take in an IAuthenticationManager and also an IdentityDbContext, in case we need to specify a specific context. I decided to place this class in a separate library that both the MVC and unit-test libraries can access. If you decide to do the same and copy the class from the sample project, most of the dependencies will get resolved except for the HttpContextBase extension ‘GetOwinContext’. That extension lives in Microsoft.Owin.Host.SystemWeb, which you can install in your library as a NuGet package with this command:

  • Install-Package Microsoft.Owin.Host.SystemWeb
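The article never shows AccountManager itself (it comes from the linked sample project), so here is a minimal sketch of what such a wrapper might look like, pieced together from how it is used below. The generic constraints, member names, and SignInAsync body are my assumptions, not the sample project’s actual code:

```csharp
// Hypothetical sketch of the AccountManager wrapper; the real class is in
// the downloadable sample project. Names and constraints are assumptions.
public class AccountManager<TManager, TContext, TUser>
	where TManager : UserManager<TUser>
	where TUser : IdentityUser
{
	private readonly TContext _context;

	public TManager UserManager { get; private set; }
	public IAuthenticationManager AuthenticationManager { get; private set; }

	// Context first, then the (real or mocked) IAuthenticationManager,
	// matching the constructor call used in the test initialization below.
	public AccountManager( TContext context, IAuthenticationManager authenticationManager )
	{
		_context = context;
		AuthenticationManager = authenticationManager;
	}

	// Resolve the UserManager from whichever HttpContext we are handed:
	// the controller's real one in production, a mocked one under test.
	public void Initialize( HttpContextBase httpContext )
	{
		UserManager = httpContext.GetOwinContext().GetUserManager<TManager>();
	}

	public async Task SignInAsync( TUser user, bool isPersistent )
	{
		AuthenticationManager.SignOut( DefaultAuthenticationTypes.ExternalCookie );
		var identity = await UserManager.CreateIdentityAsync( user, DefaultAuthenticationTypes.ApplicationCookie );
		AuthenticationManager.SignIn( new AuthenticationProperties { IsPersistent = isPersistent }, identity );
	}
}
```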

Now that we have our AccountManager, we need to make sure our AccountController will use this class rather than attempting to create the UserManager from HttpContext. This starts with the constructor, where now we will have it accept our manager rather than passing in a UserManager:

public AccountController( AccountManager<ApplicationUserManager, ApplicationDbContext, ApplicationUser> manager )
{
	_manager = manager;
}

Then we will change the access to AccountController.UserManager to use the AccountManager:

public ApplicationUserManager UserManager
{
	get { return _manager.UserManager; }
}

Dependency Injection

Now the immediate problem with this is that MVC’s controllers are stateless, and MVC handles the creation of all the classes, including any objects that are injected into them. Fortunately, Unity has dependency injection specifically for MVC that will allow us to inject our own objects. As of this writing, I went ahead and installed Unity’s MVC 5 package, referenced here. It’s a very seamless process to integrate Unity into your MVC project. After installing the package, open Global.asax.cs, where your Application_Start() method lives, and add ‘UnityConfig.RegisterComponents();’. Afterwards, in the App_Start folder, open the UnityConfig.cs file and register our AccountManager:

container.RegisterType<AccountManager<ApplicationUserManager, ApplicationDbContext, ApplicationUser>>(new InjectionConstructor());
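For completeness, the Global.asax.cs change described above looks roughly like this (a sketch; the other Application_Start lines are the MVC template defaults, and your project’s may differ):

```csharp
public class MvcApplication : System.Web.HttpApplication
{
	protected void Application_Start()
	{
		// Register our types with Unity before MVC starts resolving controllers.
		UnityConfig.RegisterComponents();

		AreaRegistration.RegisterAllAreas();
		RouteConfig.RegisterRoutes( RouteTable.Routes );
	}
}
```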

We will also need to override our initialization process for the AccountController to ensure the AccountManager either gets the embedded HttpContext from the AccountController or one we provide during test:

protected override void Initialize( RequestContext requestContext )
{
	base.Initialize( requestContext );
	_manager.Initialize( HttpContext );
}

We will also need to remove the references to AuthenticationManager and instead have our AccountController reference the AccountManager’s AuthenticationManager. This also reduces our SignInAsync method to this:

private async Task SignInAsync( ApplicationUser user, bool isPersistent )
{
	await _manager.SignInAsync( user, isPersistent );
}

Mocking AccountController

Now we can run our application and register a user using our AccountManager. With this implementation in place, we simply need to mock out our IAuthenticationManager; here is a post that describes a bit of the process. Following suit, we mock out the necessary classes for initializing our test AccountController, all in the same Initialization method:

private AccountController _accountController;

[TestInitialize]
public void Initialization()
{
	// mocking HttpContext
	var request = new Mock<HttpRequestBase>();
	request.Setup( r => r.HttpMethod ).Returns( "GET" );
	var mockHttpContext = new Mock<HttpContextBase>();
	mockHttpContext.Setup( c => c.Request ).Returns( request.Object );
	var mockControllerContext = new ControllerContext( mockHttpContext.Object, new RouteData(), new Mock<ControllerBase>().Object );

	// mocking IAuthenticationManager
	var authDbContext = new ApplicationDbContext();
	var mockAuthenticationManager = new Mock<IAuthenticationManager>();
	mockAuthenticationManager.Setup( am => am.SignOut() );
	mockAuthenticationManager.Setup( am => am.SignIn() );

	var mockUrl = new Mock<UrlHelper>();

	var manager = new AccountManager<ApplicationUserManager, ApplicationDbContext, ApplicationUser>( authDbContext, mockAuthenticationManager.Object );
	_accountController = new AccountController( manager )
	{
		Url = mockUrl.Object,
		ControllerContext = mockControllerContext
	};

	// using our mocked HttpContext
	_accountController.AccountManager.Initialize( _accountController.HttpContext );
}

Now we can effectively test our AccountController’s logic. It’s unfortunate this process was anything but straightforward, but at least we now have better unit-test coverage over our project.
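With that initialization in place, the register test from the start of the post can run against the mocked controller. A sketch (the assertions are unchanged from the first attempt, and the empty Email is kept as a placeholder, so adjust the inputs to whatever your validation rules expect):

```csharp
[TestMethod]
public void AccountController_Register_UserRegistered()
{
	var registerViewModel = new RegisterViewModel
	{
		Email = "",
		Password = "123456"
	};

	// Register now resolves its UserManager through the mocked
	// AccountManager instead of the real HttpContext/Owin pipeline.
	var result = _accountController.Register( registerViewModel ).Result;

	Assert.IsTrue( result is RedirectToRouteResult );
	Assert.IsTrue( _accountController.ModelState.All( kvp => kvp.Key != "" ) );
}
```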