ContentControl “bug” in WinRT


I was recently on a WinRT project that required showing a hierarchy of assets the user could browse through, ultimately leading to an “asset details” view where they could interact with the selected asset. One of the interactive features was a toggle button that would hide/show a details pane with the asset description. Throughout the asset hierarchy, the data templates bound to Name and ImageUri properties on the assets to display them. When I got down to the asset details level, I needed to wrap the asset in a ViewModel to support the command that implements the description toggle.


After messing around with the built-in Transition API trying to get my description pane to animate in, I realized that I needed to expose a separate Asset property on my VM so my description pane could bind to it via a ContentControl with a ContentThemeTransition. To do the toggling, I would just null the object in the VM and let the Transition work its magic. I tested this without the ContentTemplate set on the ContentControl (just bound directly to the extra Asset property I added to my VM), and it worked as I expected: the description pane was hidden, and when I clicked the toggle button the pane animated in and showed that it was bound to an Asset object.


The problem started when I added the DataTemplate that was bound to the ImageUri and Name of that Asset property. When I tested it, the description pane was instantly visible before I clicked the toggle button.


After a lot of breakpoints and head scratching, my theory was that, even though the Content property of the ContentControl was bound to this separate Asset that I added to the VM, it still looked up the tree to find those properties when the object was null. Since the VM itself had those same properties, the DataTemplate was binding to them and displaying the pane when it shouldn’t have been. Sure enough, when I made a separate little class to hold the Name and Description properties, and named them differently, it worked fine. The surprise for me here was that the ContentControl looked up the tree for property binding when its content was null, and it only did so when a DataTemplate was defined. I’m not sure if this is a bug or by design, but it can cause a headache if you’re not expecting it.
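In code, the fix amounts to a small dedicated class with property names that cannot collide with the VM’s. A sketch (the class and property names here are illustrative, not from the actual project):

```csharp
// Hypothetical wrapper for the description pane's content.
// Its property names intentionally differ from the VM's Name/ImageUri,
// so a null Content can't silently fall back to the VM's properties.
public class AssetDescription
{
    public string DescriptionName { get; set; }
    public string DescriptionText { get; set; }
}
```

The DataTemplate then binds to DescriptionName/DescriptionText, and nulling the wrapper property on the VM hides the pane as intended.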

EmguCV: Rotating face images to align eyes

Sometimes to increase accuracy of face recognition algorithms it’s important to make sure the face is upright. For instance, in this image of Arnold, he is tilting his head, which may make it difficult to recognize him:

One way to pre-process this image is to rotate it so the face is upright. The fastest way to do that is to find the eyes using a cascade classifier and then find the angle between the eyes. This method, AlignEyes, will take an image and return one that is rotated upright:

public static Image<Gray, byte> AlignEyes(Image<Gray, byte> image)
{
     Rectangle[] eyes = EyeClassifier.DetectMultiScale(image, 1.4, 0, new Size(1, 1), new Size(50, 50));
     var unifiedEyes = CombineOverlappingRectangles(eyes).OrderBy(r => r.X).ToList();
     if (unifiedEyes.Count == 2)
     {
           var deltaY = (unifiedEyes[1].Y + unifiedEyes[1].Height/2) - (unifiedEyes[0].Y + unifiedEyes[0].Height/2);
           var deltaX = (unifiedEyes[1].X + unifiedEyes[1].Width/2) - (unifiedEyes[0].X + unifiedEyes[0].Width/2);
           double degrees = Math.Atan2(deltaY, deltaX)*180/Math.PI;
           if (Math.Abs(degrees) < 35)
           {
                   image = image.Rotate(-degrees, new Gray(0));
           }
     }
     return image;
}

EyeClassifier is a cascade classifier using the training file included with EmguCV called “haarcascade_eye_tree_eyeglasses.xml”. You can use whatever training you find works best.
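EyeClassifier and CombineOverlappingRectangles aren’t shown above. A minimal sketch of what they might look like (the merging logic here is my own naive version, not necessarily the original implementation):

```csharp
// Classifier built from the eye-detection training file that ships with EmguCV.
private static readonly CascadeClassifier EyeClassifier =
    new CascadeClassifier("haarcascade_eye_tree_eyeglasses.xml");

// Naively merge rectangles that intersect, so each eye yields one rectangle.
private static IEnumerable<Rectangle> CombineOverlappingRectangles(Rectangle[] rects)
{
    var merged = new List<Rectangle>();
    foreach (var r in rects)
    {
        int i = merged.FindIndex(m => m.IntersectsWith(r));
        if (i >= 0)
            merged[i] = Rectangle.Union(merged[i], r);
        else
            merged.Add(r);
    }
    return merged;
}
```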

And here is the result (the face has been cropped and masked in this image):

Automatic UI Transitions in Windows Store apps


When you’re making apps intended for modern touch hardware, it’s important that your UI feels alive, fluid, and in motion. Some of Microsoft’s XAML controls give you this motion for free, like the Panorama in Windows Phone and the FlipView in WinRT, but beyond those it was very difficult to duplicate the built-in animations and transitions of those platforms. The WinRT platform introduces the Transition API, which lets Controls and Containers apply a built-in animation in response to a predetermined trigger. Transitions are applied to individual controls using the Transitions property, and to Panels using the ChildrenTransitions property. For example, adding an EntranceThemeTransition to the ChildrenTransitions collection of a Grid will cause all children of the Grid to automatically slide in from the right when they first appear. Getting the default subtle slide-in from the right is this simple:
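The original markup isn’t reproduced here; a minimal XAML sketch of the idea:

```xml
<Grid>
  <Grid.ChildrenTransitions>
    <TransitionCollection>
      <EntranceThemeTransition />
    </TransitionCollection>
  </Grid.ChildrenTransitions>
  <!-- every child of this Grid slides in from the right when it first appears -->
</Grid>
```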

Adding Transitions via Designer Instead of XAML

Back when Visual Studio and Blend both served completely different purposes, I always preferred doing UI-related work in Blend so that I could see all the Properties right in front of me and set them visually. In Visual Studio 2012, a lot of that ability to see and set Properties visually was brought over from Blend, and it makes setting and customizing Transitions very simple. Here’s how…


If you don’t already have the Document Outline and Properties windows visible (they usually aren’t), go to View, click Properties Window, then go back to View, Other Windows, Document Outline. Here’s what mine looks like…



All we need to do is select the Grid in the document and, in the Properties window on the right, click the small ellipsis button next to ChildrenTransitions. A new dialog window for adding ChildrenTransitions will open, with a DropDown at the bottom. Select EntranceThemeTransition and click Add. If you’re in the right place, you should see the following window, which will allow you to configure the Transition properties…



The new Transitions API is an easy, flexible way to add predefined system animations to your UI. I’ve barely scratched the surface of what can be done with this system, but I will be using them where appropriate and will make more posts when I figure out some of the more advanced features.

Send Your Automated Build via Hightail (Formerly YouSendIt)

Here at InterKnowlogy we are always looking for ways to optimize our business. We’ve been using automated builds for some time now. They are seriously one of the best things since sliced bread! Who doesn’t love to see a giant green checkmark stating their check-in succeeded? Or yelling names down the hallway when someone else causes a huge red ‘X’ to show up due to a failed check-in. As long as those names aren’t aimed at you, that is… We’ve been struggling with one problem lately with our release builds. If someone working offsite needs to get the build after it completes, they have to VPN into our network, go to the build directory, and copy the deliverable to their local machine. We do a lot of really cool, graphically intense applications, which can mean large deliverables. This turns into a really long, difficult process to get a single deliverable. After a lot of discussion we came up with a really cool idea: use Hightail.

Hightail (Formerly YouSendIt)

We use Hightail for sending and receiving large files, as I’m sure many of you do. We love the feature allowing us to specify an expiration date on a file so it will no longer take up space on our account. I did a lot of research on DropBox, Box, SkyDrive, and a few other options, and each had pros and cons. In the end, Hightail was superior: it supports expiration dates and emails, and it has an easy API.


We created a REST service that lives on our build server named the BuildSendService. This service is responsible for accepting a file path, an expiration date, and details for sending the email with a link to the file. Information received is immediately added to a queue, which is then processed asynchronously. This allows the caller of the service to continue on to do more work instead of waiting for the file to be hightailed. The service asynchronously processes each item in the queue using the Hightail API, which is brain-dead simple! It’s really great! Hightail takes care of the email for us once the file is uploaded. Recipients can then quickly download the file from their inbox! AWESOME!

SendBuild Custom Build Action

In order to leverage the BuildSendService most efficiently, we built a Custom Build Action named SendBuild. In our Build Process Template, after the build has completed all other tasks and the final deliverable is ready in the final build directory, the SendBuild action contacts the BuildSendService to hightail the build to all desired recipients. It is not a mandatory action and is ignored if no recipients are specified. We wanted this action to be as fast as possible, which is why the BuildSendService accepts the parameters and queues the information immediately. This frees up our build server to process the next request instead of waiting for a large deliverable to be uploaded to Hightail.


While this process is still new to us here at InterKnowlogy, it is showing promise. Over time, I’m sure we’ll tweak the current implementation to make it better and help us run more effectively and efficiently. Also, Hightail is just awesome! They have been super helpful answering all of our questions and pointing me in the right direction for development. They have a .NET sample application which, as far as I can tell, implements each of their APIs. That was the best source of information. Their documentation is mostly good, but could use some extra explanations. If you want to do something similar for your build process, let me know. I’m happy to help where I can.

Windows Services: File Watcher Implementation

File Watcher

File Watcher is an application that continuously monitors your files. You can define files or a whole directory to watch, and have a custom action notify you every time those files or that directory change (are created, deleted, or renamed, or an error occurs).

Simple Implementation Steps:

1. Create a new FileSystemWatcher and define values for its properties

FileSystemWatcher watcher = new FileSystemWatcher();
watcher.Path = Path.GetDirectoryName(file);
watcher.Filter = Path.GetFileName(file);
watcher.NotifyFilter = NotifyFilters.LastAccess | NotifyFilters.LastWrite
           | NotifyFilters.FileName | NotifyFilters.DirectoryName;
  • Path: the directory to be monitored.
  • Filter: “*.txt” watches only text files; Path.GetFileName(file) watches only that specific file.
  • NotifyFilter: watch for changes in LastAccess, LastWrite, FileName, and DirectoryName.

2. Subscribe to Event Handlers
These will notify the user if there are changes to the file/directory the FileWatcher is watching.

watcher.Changed += FileChanged;
watcher.Renamed += FileRenamed;
watcher.Created += FileChanged;
watcher.Deleted += FileChanged;
private void FileRenamed( object sender, RenamedEventArgs e )
{
     Debug.WriteLine( string.Format( "{0} renamed to {1} at {2}", e.OldFullPath, e.FullPath, DateTime.Now.ToString( "MM/dd/yy H:mm:ss" ) ) );
}

private void FileChanged( object sender, FileSystemEventArgs e )
{
     Debug.WriteLine( string.Format( "{0} with path {1} has been {2} at {3}", e.Name, e.FullPath, e.ChangeType, DateTime.Now.ToString( "MM/dd/yy H:mm:ss" ) ) );
}

3. Enable the FileSystemWatcher.
NOTE: Unless Path is set and EnableRaisingEvents is set to true, the specified file/directory will not be watched.

watcher.EnableRaisingEvents = true;
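Putting the three steps together, a minimal console sketch (the file path here is hypothetical):

```csharp
using System;
using System.IO;

class Program
{
    static void Main()
    {
        // Hypothetical file to monitor.
        string file = @"C:\temp\watched.txt";

        var watcher = new FileSystemWatcher
        {
            Path = Path.GetDirectoryName(file),
            Filter = Path.GetFileName(file),
            NotifyFilter = NotifyFilters.LastAccess | NotifyFilters.LastWrite
                         | NotifyFilters.FileName | NotifyFilters.DirectoryName
        };

        // Step 2: subscribe to the change events.
        watcher.Changed += (s, e) => Console.WriteLine($"{e.Name} has been {e.ChangeType}");
        watcher.Created += (s, e) => Console.WriteLine($"{e.Name} has been {e.ChangeType}");
        watcher.Deleted += (s, e) => Console.WriteLine($"{e.Name} has been {e.ChangeType}");
        watcher.Renamed += (s, e) => Console.WriteLine($"{e.OldName} renamed to {e.Name}");

        // Step 3: start raising events.
        watcher.EnableRaisingEvents = true;
        Console.WriteLine("Watching... press Enter to exit.");
        Console.ReadLine();
    }
}
```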

FileWatcher + Windows Services

Windows Services allow us to run an assembly in the background. They are very suitable for when you need to continuously listen for incoming network connections, or monitor a directory or files.
Let’s make a Windows Service project by doing File -> New Project -> Windows Service.
Do not forget to inherit from ServiceBase.

  • By default, we already have the Run() in Program.cs that creates an instance of our FileWatcherService class.
    ServiceBase[] ServicesToRun;
    ServicesToRun = new ServiceBase[]
    {
          new FileWatcherService()
    };
    ServiceBase.Run( ServicesToRun );
  • When we start our service, the service will call the OnStart() inside our FileWatcherService().

  • As described above, we create an instance of the FileSystemWatcher in OnStart() and subscribe to the events.
    In OnStop(), we unsubscribe from the events and dispose of the FileSystemWatcher instance.

     protected override void OnStart( string[] args )
     {
           _fileWatcher = new FileSystemWatcher( currentDirectory );
           _fileWatcher.Changed += FileChanged;
           _fileWatcher.EnableRaisingEvents = true;
     }

     protected override void OnStop()
     {
           _fileWatcher.EnableRaisingEvents = false;
           _fileWatcher.Changed -= FileChanged;
           _fileWatcher.Dispose();
     }
  • a. Create an Installer: Add -> new Class
    b. After creating a new class, inherit the class from Installer [add reference to System.Configuration.Install]
    c. Create a new ServiceProcessInstaller and a new ServiceInstaller — these install an executable that extends ServiceBase, which is our FileWatcherService.
    d. Set the properties: ServiceName, Account, StartType, DisplayName, etc.
    NOTE: Set the ServiceName to be the same as the value of the ServiceName on the service itself.
    e. Add the 2 installers to the Installers collection.

    ServiceProcessInstaller processInstaller = new ServiceProcessInstaller();
    ServiceInstaller serviceInstaller = new ServiceInstaller();
    processInstaller.Account = ServiceAccount.NetworkService;
    serviceInstaller.DisplayName = "File Watcher Service";
    serviceInstaller.StartType = ServiceStartMode.Manual;
    serviceInstaller.ServiceName = "File Watcher Service";
     Installers.AddRange( new Installer[]
     {
           processInstaller, serviceInstaller
     } );
  • Install the service and start running it!
    a. Run Command Prompt [Visual Studio Command Prompt or Windows SDK Command Prompt] as Administrator.
    b. Go to the directory where the Windows Service executable is at.
    c. Type installutil {name of the service} (e.g. installutil FileWatcherService.exe).
    d. On Success, you will be able to see the Service on your Local Services list.
    Windows Service

    Windows Service successfully installed

    e. To uninstall: type installutil /u {name of the service}

Supporting iOS 6.1 and iOS 7 (Xamarin.iOS)

As a developer it’s always way more fun to play with the new “toys” a new framework or OS provides. The issue we usually run into is at what point are we able to start using the new functionality in applications we develop. With Apple and iOS, the adoption rate of new versions is so high and so quick that you only really need to worry about supporting the 2 latest versions. With that said, if you’re starting something brand new you’ll probably just want to build against the newest version, but if you’ve got an existing application that you want to function in both versions you’ll need to do some work.

1st: If you don’t have Xcode 5, copy the iPhoneOS6.1.sdk and the iPhoneSimulator6.1.sdk folders from your existing Xcode 4.6.3 installation somewhere else so you can still dev for iOS 6. The simulator SDK can also be downloaded after the upgrade, but this is faster since you already have it on your machine. The SDKs are located under /Applications/ In Finder choose Go => Go to Folder to get to the desired directories. Copy both of these folders somewhere else for use later.

  • iPhoneOS6.1.sdk is located here /Applications/
  • iPhoneSimulator6.1.sdk is located here /Applications/

NOTE: If you’ve already upgraded to Xcode 5 you’ll need to download the Xcode 4.6.3 installer from the Apple Developer site located here. Once downloaded, open the .dmg file and right-click on the Xcode icon, choose “Show Package Contents” and then follow the same directory structure to get to the SDK folders.

2nd: In order to support iOS 7 make sure you are upgraded to OS X 10.8.5 and Xcode 5.

3rd: With Xcode 5 installed copy the iPhoneOS6.1.sdk and iPhoneSimulator6.1.sdk folders back to the directories you copied them out of. You should notice 7.0 versions of each SDK now in those directories as well.

4th: In Xamarin Studio, under the iOS Build view in your project options, you’ll now notice that you have the ability to specify 6.1 as the SDK version as well as 7.0. Under the iOS Application view is the more important Deployment Target property. Set this to 6.1 in order for the application to still be deployable to iOS 6.1 devices; otherwise only iOS 7 devices will be able to get it from the app store. Making this change will allow it to be deployable for both versions, but you still have one more thing to do.

5th & Last: Apple made some drastic changes with how certain properties work, how the NavigationController’s title bar and the iOS status bar look and work, and a bunch of other things. You’ll need to add version specific code to handle/fix these changes. I created a VersionHelper static class to do the comparison logic for me. There are definitely other ways to accomplish this but here it is:

public static class VersionHelper
{
	private static Version _systemVersion;

	public static bool CurrentVersionIsGreaterThanOrEqualTo( Version versionToCompareAgainst )
	{
		if ( _systemVersion == null )
		{
			_systemVersion = new Version( UIDevice.CurrentDevice.SystemVersion );
		}

		return _systemVersion >= versionToCompareAgainst;
	}
}
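A usage sketch (the version-specific tweak shown here is hypothetical, not a fix from the original project):

```csharp
// Only apply iOS 7-specific adjustments on iOS 7 and above.
if ( VersionHelper.CurrentVersionIsGreaterThanOrEqualTo( new Version( 7, 0 ) ) )
{
	// e.g. keep content from sliding under the iOS 7 status/navigation bars
	EdgesForExtendedLayout = UIRectEdge.None;
}
```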

With that you should be good to go.

Easy Pinch-to-zoom in WinRT

On a recent project, I was tasked with adding pinch-to-zoom for images displayed in a WinRT app. Since this feature doesn’t come for free, I started thinking of ways to handle the gesture input and apply the appropriate amount of scaling while still playing by the rules of MVVM. Much to my delight, I soon discovered that the ScrollViewer can handle this sort of thing by default. For the purpose of scaling an image, you’ll want to disable the vertical and horizontal scroll bars and use an Image control as the content of the ScrollViewer. The key properties on the ScrollViewer you’ll want to use to control the zoom level are MinZoomFactor and MaxZoomFactor. The defaults are 0.1 and 10, respectively, but in my case I needed to set the max zoom based on the resolution of the image itself. There are a number of ways to do this, and my solution uses a custom control. Here’s what the logic for that looks like in my control:
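The original snippet isn’t reproduced here; based on the description that follows, a sketch of the handler might look like this (names like ScrollHost and OnImageOpened are my own placeholders, not the original code):

```csharp
// Sketch only: assumes a BitmapImage source and a ScrollViewer named
// ScrollHost in the control template.
private void OnImageOpened( object sender, RoutedEventArgs e )
{
    var bitmap = Source as BitmapImage;
    if ( bitmap == null )
        return;

    double viewWidth = ScrollHost.ActualWidth;
    double viewHeight = ScrollHost.ActualHeight;

    if ( bitmap.PixelWidth <= viewWidth && bitmap.PixelHeight <= viewHeight )
    {
        // Image fits on screen: show it at native size, no zooming.
        ScrollHost.MinZoomFactor = 1.0f;
        ScrollHost.MaxZoomFactor = 1.0f;
    }
    else
    {
        // Allow zooming up to the image's native resolution.
        double maxZoom = Math.Max( bitmap.PixelWidth / viewWidth,
                                   bitmap.PixelHeight / viewHeight );
        ScrollHost.MaxZoomFactor = (float)maxZoom;
    }
}
```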

This is the handler for the Opened event of the Image control. The Source property you see on the first line is a Dependency Property I added on this custom control to expose the Image Source. The first if statement checks whether the image is smaller than the screen in both directions, in which case I display the image at native size with no scaling. The next block handles images that are larger than the screen in either or both directions. In that case, I divide the bitmap pixel width and pixel height by the actual rendered width and height of the ScrollViewer and use the larger value as the MaxZoomFactor. The end result is that the image is scalable up to its native size. Here’s what the xaml of my default control template looks like:
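A minimal sketch of such a template (simplified, not the original markup; the ScrollHost name is a placeholder, and the Opened handler would be wired up in code):

```xml
<ScrollViewer x:Name="ScrollHost"
              HorizontalScrollBarVisibility="Disabled"
              VerticalScrollBarVisibility="Disabled"
              ZoomMode="Enabled"
              MinZoomFactor="0.1"
              MaxZoomFactor="10">
  <Image Source="{TemplateBinding Source}" />
</ScrollViewer>
```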


And that’s all there is to it! If you don’t want or need to go down the custom control path, you can very easily just use the ScrollViewer and Image controls and set the zoom values directly in xaml or data bind them. Either way is much easier than handling the gestures and scaling manually.

Face Detection for .NET using EmguCV

First of all, let me explain the difference between face “detection” and face “recognition”. There seems to be a lot of misinformation out there about these two terms, and they are not interchangeable. Face detection is when a computer finds all the faces that appear in an image. The best algorithm out there right now is the Viola-Jones method using cascade classifiers. The Viola-Jones method can actually be trained to detect any object, so it isn’t specific to detecting faces. For instance, it can detect an apple in a given image.

Face recognition is when a computer gives a name to a face image. There are many different algorithms for recognition including Eigen faces, Fischer faces, and Local Binary Pattern Histograms.

Okay, now that you know the difference between detection and recognition, I will show you how to do simple detection using a CascadeClassifier in EmguCV.

First, we need to construct a classifier using some of the built in training files. These can be found under the HaarCascades directory in the EmguCV installation directory. We make a new classifier like this:

private static readonly CascadeClassifier Classifier = new CascadeClassifier("haarcascade_frontalface_alt_tree.xml");

Secondly, classifiers only take grayscale images, so we convert our Bgr image to gray:

 Image<Gray, byte> grayImage = image.Convert<Gray, byte>(); 

Finally, we call the DetectMultiScale method on our classifier:

Rectangle[] rectangles = Classifier.DetectMultiScale(grayImage, 1.4, 0, new Size(100,100),new Size(800,800));

Let’s review these parameters, because you will probably need to tweak them for your system. The first parameter is the grayscale image. The second parameter is the windowing scale factor. This parameter must be greater than 1.0; the closer it is to 1.0, the longer detection will take, but the greater the chance that you will find all the faces. 1.4 is a good place to start with this parameter.

The third parameter is the minimum number of nearest neighbors. The higher this number, the fewer false positives you will get. If this parameter is set to something larger than 0, the algorithm will group intersecting rectangles and only return those groups with at least the minimum number of overlapping rectangles. If this parameter is set to 0, all rectangles will be returned and no grouping will happen, which means the results may have intersecting rectangles for a single face.

The last two parameters are the min and max sizes in pixels. The algorithm will start searching for faces with an 800×800 window and decrease the window size by a factor of 1.4 until it reaches the min size of 100×100. The bigger the range between the min and max sizes, the longer the algorithm will take to complete.

The output of the DetectMultiScale function is a set of rectangles that represent where the faces are relative to the input image.
It’s as easy as that. With just a few lines of code, you can detect where all the faces are in any image.
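To visualize the result, you can draw the returned rectangles back onto the original image, for example:

```csharp
// Draw a red box around each detected face
// (assumes 'image' is the original Image<Bgr, byte>).
foreach (Rectangle face in rectangles)
{
    image.Draw(face, new Bgr(Color.Red), 2);
}
```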

You can download EmguCV here:

C# to F#: I’m a Convert

In my previous blog post C# to F#: My Initial Experience and Reflections I wrote about learning F# and converting a C# formula model into an F# formula model. As of writing my previous post the jury was still out on performance. I am very happy to say that I have some very quantifiable results and I’m ecstatic to announce that F# took C# to school!

Formula Model

The formula model we created can be found here. The structure is essentially: Model contains many Leagues. A League contains many Divisions. A Division contains many Teams. A Team plays at every Stadium, thus creating many StadiumTeamData objects. Each Stadium contains details. In the Excel file you’ll find 2 Team sheets, a LeagueSummary sheet, a Stadiums sheet, and a Stadium Schedules sheet. The Stadium Schedules sheet contains the schedule for each Stadium found in the Stadiums sheet, which is just a list of Stadiums and their details. Each Team sheet contains StadiumTeamData (a row of data), which is the lowest level of calculation in this model. The LeagueSummary sheet sums the 2 Team sheets and calculates 10 years of data which can be used to create a chart. Our sample apps do not chart, as our test was about performance, not prettiness. The Excel model is a very simple model; it was used only to prove the calculations were being performed correctly. In the source code included at the end of this article you will notice the existence of 2 data providers: MatchesModelNoMathDataProvider and PerformanceTestNoMathDataProvider. The matches-model provider matches the Excel scenario with 1 League, 1 Division, 2 Teams, 2 Stadiums, and a single mandatory Theoretical Stadium. The Los Angeles Stadium is ignored in code. The performance model, however, has 2 Leagues. Each League has 9 Divisions. Each Division has 10 Teams. Each Team references 68 Stadiums. There is also a single mandatory Theoretical Stadium. This gives a grand total of 12,240 StadiumTeamData instances. These instances represent the bulk of the formula work and, in the case of PDFx, the bulk of property dependency registrations.


C# and PDFx

The first implementation we created was in C# and uses the PDFx (Property Dependency Framework). This implementation represents the pattern we have used for the last year for client implementations. Due to familiarity, this implementation took about 16-24 hours, which is pretty fast. This is why we really like the PDFx: it helps to simplify implementation in C#. Because PDFx is a pull-based approach, no custom events are required. The PropertyChanged event triggers everything under the hood for the PDFx. There is a catch, though. This means that each property in a chain of dependent properties will raise the PropertyChanged event. In our example of 12,240 StadiumTeamData instances, this means that PropertyChanged is called roughly 500,000 times just on the first calculation of top-level data. Across all of the properties in existence, properties are accessed 2,487,431 times, and of those, 1,176,126 accesses are doing work to set up the required property dependency registrations. So at the end of the day, the C# with PDFx implementation takes about 55 seconds to load the object model and another 24 seconds to run the first calculation, for a grand total of 79 seconds to load the application. Another real bummer with PDFx is that it’s currently not thread safe, so it must run on the UI thread, which means that for about 1:20 the application looks like it’s not doing anything. Very bad, very very bad! On top of that, each time we change a value via slider on a single StadiumTeamData it takes about 6 seconds to finish calculating, again blocking the UI thread. A very important detail to note is that when a single StadiumTeamData has an input value change, only objects that depend on that StadiumTeamData, and objects that depend on those objects, etc., are recalculated. This means that out of 12,240 StadiumTeamData instances only 1 is being recalculated, and only 1 team, 1 division, 1 league, and the top-level values of the formula model are being recalculated.
We have been trying to improve PDFx performance for some time now, and we have a few more tricks up our sleeves, but most of the tricks are around load time not calculation time.


F#

After listening to a ton of .NET Rocks recently, I’ve learned a lot about F#. I was so intrigued that I set out to create an F# implementation of the same formula model we created in C# and PDFx. The implementation took about 32 hours, but that’s also with a ton of research. By the end, I think I could have written the entire thing in less than 16 hours, which would be less time than the C# and PDFx implementation. I learned that functional programming lends itself to parallelization more than object-oriented programming. Because functional programming encourages an approach of not modifying values, since everything is immutable by default, the F# implementation can be run on a background thread as well. The cool part about all of this is that many sub-calculations can be run at the same time, with the output then aggregated to run a final answer calculation. Our current formula model is perfect for this approach. Because we no longer depend on the PDFx to know when a property changes, the PropertyChanged event is only raised once to trigger all calculations, and is then only raised once for each property that is updated by the output of the calculations, so the UI is able to respond. The object model takes a bit more than 1 second to load, and the first calculation is done in another 2.5 seconds. The total load time is about 3.5 seconds. Compared to 79 seconds, that’s 95% faster in F# just for load. Each subsequent calculation, when a value changes via slider on a StadiumTeamData, takes about 1.2 seconds. Compared to 6 seconds, F# is about 80% faster for each calculation. Unlike the C# and PDFx implementation, I have not optimized the F# formula model to only calculate the object tree that changed; instead, all 12,240 StadiumTeamData instances are recalculated each time any value changes in the entire object model.
So we could still become more performant by only calculating the single StadiumTeamData that changed and the related team, division, league, and then the top level values of the formula model.


A complete breakdown of my comparisons can be found in this Excel file. I wanted to call out a few very important results in this post to wrap things up.


Readability

I used to think that C# and PDFx was very readable. And while it is for very simple models, it can get unwieldy. F#, however, is the clear winner here. I reduced lines of code by the hundreds. I can see one entire formula in one file, compact enough to fit on my screen at one time, versus C# and PDFx, which takes up multiple files due to multiple classes and requires a lot of scrolling due to the number of lines a single property takes up. This seriously increases maintainability.


Performance

When it comes to performance, C# and PDFx were blown out of the water. Application load time improved by 95% and calculation time improved by 80%. This is serious business!

Time to Implement

This comparison is slightly skewed due to experience. I was impressed that C# and PDFx took 16-24 hours while F#, a brand new language to me, took only 32 hours. I am convinced that on future projects I can write F# faster than C# using PDFx.

Next Steps

I will be diligently searching for opportunities to use F# in production client code. It’s a no-brainer to me. I agree with the statement from many of the .NET Rocks podcast guests talking about F# and functional programming: “Every software engineer should learn F#!” It just makes sense!


Source Code: Formula Implementation Proving Ground

C# and PDFx Executable

F# Executable

Formula Excel Workbook

F# vs. C# Comparison Excel File

Getting Started with ASP.NET MVC 4 and Azure

Most of my development experience has been with desktop applications, so I have to admit my knowledge of web development is pretty light.  I’ve had a chance this week to begin digging into ASP.NET MVC 4, along with the many cool features made available by hosting a site in Azure.  Although I’ve maintained a cursory knowledge of both, I hadn’t had a chance to really try them out, and certainly didn’t feel comfortable taking on large projects in either (or even scoping such efforts).  My exploration this week has left me much more at ease with both.

I was able to hit several topics this week:

  • Basic MVC 4 development
  • MVC routing
  • MVC validation
  • Code-first Entity Framework development
  • Code-First EF Migration
  • Using OAuth providers for login credentials
  • Deployment to an Azure Web Site
  • Use of Azure SQL Database

I was impressed with how straightforward and sensible the MVC 4 approach is.  The use of convention-based coordination of controllers and views makes it extremely simple to quickly set up a functional site.  And the default plumbing created automatically by Visual Studio really helps a lot.  I’m not sure if I’d yet be comfortable coding everything from scratch at this point, but the scaffolding VS puts in place makes it very simple to get started and customize as appropriate.  Some tutorials I found particularly instructive are:

I’ve used Entity Framework quite a bit in the past, but have not had an opportunity to use the Code-First approach extensively.  The second link above was also very instructive in this topic.  Something I think would be particularly useful is Code-First Migrations, which allows you to update your data model as you develop without having to drop your existing database.  Migrations generates methods to step your existing database up to the new model or down to the previous one.  In the past I’ve found modifying my data model once there is an actual production database extremely tedious.  Migrations should help greatly.  I was disappointed that a command-line interface must be used to make use of migrations (with no Intellisense-like auto completion), since I’m sure it will take me a while to remember the correct steps and syntax.  But revisiting the sites I mentioned earlier should make it simple enough.

I was also pretty new to the use of OAuth authentication.  The following link made it very easy to add OAuth to my test app:

Having heard stories of Azure deployment being difficult in the past, I’ve been wary of it until now, but it looks like more recently Microsoft has done a lot to make the process as simple as possible.  I was impressed with how easy it was to move a locally-hosted web site to Azure.  Once you have your Azure account set up, it took almost no time at all to deploy to Azure.  Moving a database to Azure SQL Database was equally as simple.  I’m currently using my free trial on Azure, so I haven’t had to deal with monitoring real usage and being cautious to limit my charges, but I’ve heard this has been improved as well, and developers no longer seem to be incurring unreasonable charges accidentally.  The following links were great for getting started:

Overall, I feel much more comfortable tackling the creation of a web application hosted in Azure.  Microsoft has done an excellent job easing developers into these technologies, and they look like great solutions for well-architected, highly-scalable web solutions.