Art Center College of Design Interaction Design Workshop

Interaction Design (IxD) is used for much more than web sites and simple mobile apps. Companies are solving complex problems, and they can use IxD principles to make sure they are building the right thing the first time without writing novels of requirements documents. Rodney Guzman, Chief Architect and founder of InterKnowlogy, will lead an interaction design workshop at the Art Center College of Design. Students will explore how InterKnowlogy works with customers to understand the features they need, and how we echo back what is heard. Students will have the opportunity to listen to and observe a real interview with a product stakeholder. Afterwards, the students will work as a team to write a proposal for the work. Our goal is to give the students a real-life scenario of getting from an idea to effectively communicating a solution.

IMSA Mobile Apps – 2 – Planning For Maximizing Code Re-Use Across iOS, Android, and Universal Apps

While we were busy thinking through the interaction design elements of the new IMSA mobile apps, we knew we were going to have to build six apps (iOS, Android, and Universal Apps, each for phone and tablet). The app architecture we chose to follow for this is Model-View-ViewModel (or MVVM). Our mission was to maximize the amount of code sharing across all implementations of the app. The more code sharing we could do, the less code we would have to develop for each platform. Less code means less time to develop and less to test, making the aggressive schedule more achievable.

The Model layer contains the business logic and data that drive the IMSA app. Data is served through a scalable cloud infrastructure being constructed for the mobile apps. Regardless of mobile OS, the business logic and data remain the same, and so does how we access and retrieve the data in the cloud. Because these layers are devoid of any user interface elements, they are a logical candidate for re-use across all the mobile operating systems. Perfect – one layer to write once. But we want more.

We were suspicious that the View layer would be so unique across mobile operating systems that the ViewModel layer would not be re-usable. The ViewModel layer is responsible for binding the Model layer (the business logic) to a View (the user interface). Remember, we are talking about code sharing across iOS, Android, and Universal Apps – surely these are so different that writing a consistent and shareable ViewModel layer would not be possible, right? Wrong! After some initial prototyping we were pleasantly surprised. The path we have chosen is going to allow us to use the same code in the ViewModel layer across all operating systems.
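To make that concrete, here is a rough sketch (in C#, not the actual IMSA code) of the kind of Model and ViewModel classes that can live in a shared Xamarin project. The service name, data type, and endpoint URL are hypothetical, and it assumes Json.NET for deserialization:

using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Net.Http;
using System.Threading.Tasks;
using Newtonsoft.Json;

// Hypothetical shared Model types - no UI classes, so they compile unchanged
// for iOS, Android, and Universal Apps.
public class RaceEvent
{
	public string Name { get; set; }
	public DateTime StartTimeUtc { get; set; }
}

public class ScheduleService
{
	private readonly HttpClient _httpClient = new HttpClient();

	// Pulls the event schedule from the cloud back end (placeholder URL).
	public async Task<IList<RaceEvent>> GetScheduleAsync()
	{
		string json = await _httpClient.GetStringAsync( "https://example-backend/api/events" );
		return JsonConvert.DeserializeObject<List<RaceEvent>>( json );
	}
}

// Hypothetical shared ViewModel - the same class is bound to XAML on Universal Apps
// and to Xamarin views on iOS and Android.
public class ScheduleViewModel : INotifyPropertyChanged
{
	private readonly ScheduleService _scheduleService = new ScheduleService();
	private IList<RaceEvent> _events;

	public IList<RaceEvent> Events
	{
		get { return _events; }
		private set
		{
			_events = value;
			OnPropertyChanged( "Events" );
		}
	}

	// Each platform's View calls this when it loads; the logic itself is shared.
	public async Task LoadAsync()
	{
		Events = await _scheduleService.GetScheduleAsync();
	}

	public event PropertyChangedEventHandler PropertyChanged;

	private void OnPropertyChanged( string propertyName )
	{
		var handler = PropertyChanged;
		if ( handler != null )
		{
			handler( this, new PropertyChangedEventArgs( propertyName ) );
		}
	}
}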

From our early calculations, thanks to Visual Studio and Xamarin, we predict about 75% code re-use (of the non-generated code) across all the implementations! Excellent news for the developers and the project manager. We’ll dive into code examples in an upcoming blog, but next we’ll discuss our approach with Azure. Also, this video has additional information on code re-use with Xamarin.

IMSA Mobile Apps – 1 – Architecture Design Session

The IMSA Mobile Apps project is currently in flight and we are actively working on building this cross platform/cross OS solution. This article is the first in a series of blogs discussing the details of the project, and we’ll be actively trying to catch up to the current day as we are busy building towards the Laguna Seca race.

Back in the first week of December 2014, we flew out to Daytona Beach, Florida to visit the IMSA team with Microsoft. Microsoft was hosting an Architecture Design Session, or ADS for short, to flesh out features of the solution. It quickly became apparent that the solution was layered and complex. Many of the features discussed have become part of a longer product roadmap, as IMSA is committed to providing the best experience possible to their fans. It should also be noted that, as in all ideation sessions, some of the ideas discussed were put deep down in the feature backlog.

I am certain that some would ask why IMSA involved Microsoft. This is a mobile app – what does Microsoft know about building mobile apps across iOS and Android? Well, it turns out quite a lot. From past projects, we already knew the tooling we get with Visual Studio and Xamarin allows us to build amazing mobile apps across all platforms and OS’s. The other side of the coin is the plumbing we get to integrate into cloud infrastructure. This app needed to scale across the huge IMSA fan base during live events. From past projects we knew how effective we could be building scalable mobile apps with Azure. So to IMSA and to us, involving Microsoft made perfect sense.

In the ADS, some interesting features started popping up:

The app would need to change shape depending on whether or not a race is live. We thought treating the app almost like the NFL Now app would be interesting: there would always be something interesting to watch in our app, regardless of whether an event is live.

IMSA radio is a live audio stream. The app would need to deliver this feed just like other integrated audio content on your device, so turning on IMSA radio, putting your headphones on, and then putting your device in your pocket should be as natural as playing music.

Using the device’s GPS, if the race fan is at the event the app should respond differently than if the person were elsewhere. When you are at an event, what you are interested in is different than when you are not.

Telemetry information from the cars. It would be just awesome to watch your favorite car at the event or at home and see all the g-forces it is pulling when it is flying around the corners.

The existing IMSA services for content and structured information do not scale to a mobile play. A cloud infrastructure would need to be placed in front of the IMSA services so content could be cached and served more quickly.


After the ADS we went home and decomposed all the features while looking at the schedule. We needed to pick a race event to target for deployment, and we had a lot of homework to determine our approach. In the next blog we will discuss how we planned to maximize code re-use across all platforms and OS’s.

Kinect 2.0 Face Frame Data with Other Frame Data

Do you have a project where you are using Kinect 2.0 Face detection as well as one or more of the other feeds from the Kinect? Well I am, and I was having issues obtaining all the Frames I wanted from the Kinect. Let’s start with a brief, high-level overview: I needed the data relating to the Color Image, the Body Tracking, and the Face Tracking. That seems very straightforward until you realize that the Face data is not included in the MultiSourceFrameReader class. That reader only provided me the Color and Body frame data. In order to get the Face data I needed to use a FaceFrameReader, which required me to listen for the arrival of two frame events.

For example, I was doing something like this.

public MainWindowViewModel()
{
	_kinectSensor = KinectSensor.GetDefault();

	const FrameSourceTypes frameTypes = FrameSourceTypes.Color | FrameSourceTypes.Body;
	MultiSourceFrameReader kinectFrameReader = _kinectSensor.OpenMultiSourceFrameReader( frameTypes );

	kinectFrameReader.MultiSourceFrameArrived += OnMultiSourceFrameArrived;

	const FaceFrameFeatures faceFrameFeatures = 
		  FaceFrameFeatures.BoundingBoxInColorSpace
		| FaceFrameFeatures.RotationOrientation
		| FaceFrameFeatures.FaceEngagement;

	faceFrameSource = new FaceFrameSource( _kinectSensor, 0, faceFrameFeatures );
	faceFrameReader = faceFrameSource.OpenReader();
	faceFrameReader.FrameArrived += OnFaceFrameArrived;

	_kinectSensor.IsAvailableChanged += OnSensorAvailableChanged;
	_kinectSensor.Open();

}

private void OnMultiSourceFrameArrived( object sender, MultiSourceFrameArrivedEventArgs e )
{
	//Process Color and Body Frames
}

private void OnFaceFrameArrived( object sender, FaceFrameArrivedEventArgs e )
{
	//Process Face frame
}

In theory, this should not be a problem because the Kinect fires off all frames at around 30 per second and, I assume, in unison. However, I was running into the issue that if my processing of the Face data took longer than the 30th of a second I had to process it, the Color or Body data could be at a different point in the cycle. What I was seeing was images that appeared to be striped between two frames. Now, I understand that this behavior could be linked to various issues that I am not going to dive into in this post. But what I noticed was that the more processing I tried to pack into the Face Frame arrival handler, the more frequently I saw bad images. It is worth noting that my actual project will process all six of the faces that the Kinect can track, and when having to iterate through more than one face per frame, the bad, striped images were occurring more often than good images. This led me to my conclusion (and my solution led me to write this blog.)

I also did not like the above approach because it forced me to process frames in different places, and possibly on different cycles. So when something wasn’t working I had to determine which Frame was the offender, then go to that processing method. No bueno.

While troubleshooting the poor images, I had the thought, “I just want the Color and Body frames that the Face frame is using.” Confused? I’ll try to explain. Basically, the Kinect Face tracking uses some conglomeration of the basic Kinect feeds (Color, Depth, Body) to figure out what is a face, and the features of that face. I know this because if a body is not being tracked, a face is not being tracked. The depth is then used to track whether the eyes are open or closed and other intricacies of the face. Anyway, back on track: I had a feeling that the Kinect Face frame had at least some link back to the other frames that were used to determine the state of the face for that 30th of a second. That is when I stumbled upon FaceFrame.BodyFrameReference and FaceFrame.ColorFrameReference (FaceFrame.DepthFrameReference also exists, it’s just not needed for my purposes). From those references you can get the respective frames.

After my epiphany my code turned into:

public MainWindowViewModel()
{
	_kinectSensor = KinectSensor.GetDefault();

	//Multiframe reader stuff was here.
	//It is now gone.

	const FaceFrameFeatures faceFrameFeatures = 
		  FaceFrameFeatures.BoundingBoxInColorSpace
		| FaceFrameFeatures.RotationOrientation
		| FaceFrameFeatures.FaceEngagement;

	faceFrameSource = new FaceFrameSource( _kinectSensor, 0, faceFrameFeatures );
	faceFrameReader = faceFrameSource.OpenReader();
	faceFrameReader.FrameArrived += OnFaceFrameArrived;

	_kinectSensor.IsAvailableChanged += OnSensorAvailableChanged;
	_kinectSensor.Open();

}

private void OnFaceFrameArrived( object sender, FaceFrameArrivedEventArgs e )
{
	...
	var bodyReference = faceFrame.BodyFrameReference;
	BodyFrame bodyFrame = bodyReference.AcquireFrame();

	var colorReference = faceFrame.ColorFrameReference;
	ColorFrame colorFrame = colorReference.AcquireFrame();
	//Process Face frame
	...
}

private void ProcessBodyAndColorFrames( BodyFrame bodyFrame, ColorFrame colorFrame )
{
	//Process Body and Color Frames
	...
}
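For completeness, here is a sketch of how the elided parts of OnFaceFrameArrived could look, assuming the standard Kinect v2 API; all three frame types are IDisposable, so they are wrapped in using blocks:

private void OnFaceFrameArrived( object sender, FaceFrameArrivedEventArgs e )
{
	// Sketch only: acquire the face frame, then the matching body and color frames.
	using ( FaceFrame faceFrame = e.FrameReference.AcquireFrame() )
	{
		if ( faceFrame == null )
			return;

		using ( BodyFrame bodyFrame = faceFrame.BodyFrameReference.AcquireFrame() )
		using ( ColorFrame colorFrame = faceFrame.ColorFrameReference.AcquireFrame() )
		{
			if ( bodyFrame == null || colorFrame == null )
				return;

			//Process Face frame, then hand off the matching frames together
			ProcessBodyAndColorFrames( bodyFrame, colorFrame );
		}
	}
}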

I still have the processing methods split out similarly to how I had them in the first (bad) way, but with this approach I am a little more confident that the Color and Body frames I am analyzing are the same ones used by the Face processor. I am also more free to split up the processing as I see fit. The Color and Body are really only lumped together because that is how I was doing it before. In the future they might be split up and done in parallel, who knows.

And with that, the occurrence of bad images appears to be greatly reduced. At least for now; we will see how long it lasts. I still get some bad frames, but I am at least closer to being able to completely blame the Kinect, or poor USB performance, or something other than me (which is my ultimate goal.)

Not a big thing, just something I could not readily find on the web.

Kinect Development (Face tracking) – Without a Kinect

In a previous post I talked about how you can use the Kinect Studio v2 software to “play back” a recorded file that contains Kinect data. Your application will react to the incoming data as if it were coming from a Kinect, enabling you to develop software for a Kinect without actually having the device.

This of course requires that you have a recorded file to playback. Keep reading…

More specifically, Kinect for Windows v2 supports the ability to track not only bodies detected in the camera view, but tracking FACES. Even better, there are a number of properties on the detected face metadata that tell you if the person is:

  • looking away from the camera
  • happy
  • mouth moving
  • wearing glasses
  • …etc…

Here at IK, we have been doing a lot of Kinect work lately. It turns out the Kinect v2 device and driver are super picky when it comes to compatible USB 3 controllers. We have discovered that our laptops (Dell Precision m4800) do not have one of the approved controllers. Through lots of development trial and error, we have narrowed this down to mostly being a problem only with FACE TRACKING (the rest of the Kinect data and functionality seem to work fine).

So … even though I have a Kinect, if I’m working on face tracking, I’m out of luck on my machine in terms of development. However, using the technique described in the previous post, I can play back a Kinect Studio file and test my software just fine.

To that end, we have recorded a short segment of a couple of us in view, with and without faces engaged, happy, looking and not, … and posted it here for anyone to use in their Kinect face tracking software. This recording has all the feeds turned on, including RGB, so it’s a HUGE file. Feel free to download it (below) and use it for your Kinect face tracking development.

DOWNLOAD HERE: Kinect recorded file – 2 faces, all feeds. (LARGE: ~4.4GB zipped)

Kinect Recording – 2 faces

Hope that helps!

Sideloading Windows Store Apps – When Unlimited Has a Limit

In part 1 and part 2 of this series, I described how and where to buy a Windows sideloading key and then how to configure a machine to sideload your “store” application.

I did not think there was another part to this story … until I checked my license and its number of activations. The license you purchase and use to sideload Windows Store applications is supposed to be for an “unlimited number of devices“.

MS Claim of Activations on Unlimited Devices

You can imagine my surprise and frustration when I saw in the Volume License Service Center that I had burned through 7 of 25 activations in the first few days!!

Long story short, after a few emails with the VLSC, they said they set the number of activations on that “UNLIMITED” license to 25 “for Microsoft tracking purposes on how many times the product has been used“. In the event you run out, you can request more activations by contacting the MAK team.

I do NOT want to be in production and getting calls from a customer that can no longer sideload the application because we have reached the maximum number of activations. Sure enough, it took another couple emails, but the MAK team was “happy” to increase the number… to 225. Still not unlimited, but a somewhat large number that I will someday likely have to increase again.

225 Activations

Where I uncovered the answers

      vlserva -at- microsoft.com
      MAKAdd -at- microsoft.com

Kinect Development Without a Kinect

Huh? How can you develop software that integrates with the Microsoft Kinect if you don’t have a physical Kinect? We have a number of Kinect devices around the office, but they’re all in use. I want to test and develop an application we’re writing, and … there is another way.

Enter Kinect Studio v2.0. This application is installed with the Kinect v2.0 SDK, and allows you to record and playback streams from the Kinect device. It’s usually used to debug a repeatable scenario, but we’ve been using it to spread the ability to develop Kinect-enabled applications to engineers that don’t have a physical Kinect device. There are just a couple settings to be aware of to get this to work.

Someone has to record the streams in the first place. They can select which streams (RGB, Depth, IR, Body Index, etc. – the list of streams is shown below) to include in the recording. The recording is captured in an XEF file that can get large quickly depending on which streams are included (on the order of 4GB+ for 1 minute). Obviously, you need to include the streams that you’re looking to work with in the application you’re developing.

Choose from many streams to include in the recording

So I have my .XEF file to playback, what next?

  • Open the XEF file in Studio.
  • Go to the PLAY tab
  • IMPORTANT: Select which of the available streams you want playback to contain (see screenshot below)
  • Click the settings gear next to the playback window, and select what output you want to see during playback. This does not affect what your application code receives from the Kinect; it controls display in the Studio UI only.
  • Click the Connect to Service button
  • Click PLAY

You should now start getting Kinect events in your application code.
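If you just want to sanity-check that playback is reaching your code, a minimal reader like the following is enough. This is a sketch using the standard Kinect v2 SDK types; it behaves the same whether the frames come from a physical sensor or from Kinect Studio playback:

using Microsoft.Kinect;

public class PlaybackSmokeTest
{
	private KinectSensor _sensor;
	private ColorFrameReader _colorReader;

	public void Start()
	{
		_sensor = KinectSensor.GetDefault();
		_colorReader = _sensor.ColorFrameSource.OpenReader();
		_colorReader.FrameArrived += OnColorFrameArrived;
		_sensor.Open();
	}

	private void OnColorFrameArrived( object sender, ColorFrameArrivedEventArgs e )
	{
		// Frames are IDisposable; acquire and release them promptly.
		using ( ColorFrame frame = e.FrameReference.AcquireFrame() )
		{
			if ( frame != null )
			{
				System.Diagnostics.Debug.WriteLine( "Color frame at " + frame.RelativeTime );
			}
		}
	}
}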

Here’s what my studio UI looks like (with highlights calling out where to change settings).
Hope that helps.

Kinect Studio UI

What I do When Creating a New Machine for Development

Everyone has a build that makes them happy when it comes to their development machine. I particularly love to use Bootable VHDs. They allow me extreme flexibility in size, OS, and disposability. I know that sounds funny to some, but I burn through a new dev machine almost every 6 months; lately it’s been every project, so the ease is great. I’m not going to go into creating Bootable VHDs in this post, but my fellow InterKnowlogist Travis Schilling has covered it in this blog post. The steps work for both Windows 7 and Windows 8. I assume they will not change for Windows 10, but I don’t know.

I’ve followed this process twice in the last 3 months so it’s pretty comprehensive of what I need and do. Please do let me know your thoughts, suggestions, alternatives, and what you do! Cheers!

My Build

Dell Precision M4800, Core i7-4800MQ, 8GB RAM

SSD Size: 500GB

  • HOST OS Partition: 25GB (Windows 8.1 with Windows ADK – No updates)
  • Data Partition: 475GB

Bootable VHDs (Live on Data Partition)

  • DEV: 150GB
  • EXPERIMENTAL: 100GB
  • PRESENTATION: 50GB

Windows 8.1 Update or Higher

NOTE: “Restoration Tools” is a directory on the Data Partition (I always use the drive letter E:\) that contains installers that are required each time a new bootable VHD is created. This way the internet is not required in order to get the machine up and running. It also significantly reduces downtime caused by the slow download speeds of some manufacturers and software providers.

Restoration Tools->Basics->M4800

  • BIOS not needed (M4800A03.exe)
  • System Tray->Dell Touchpad->Mouse
    • CHECK: Disable Touchpad & Pointstick when USB Mouse is present

Power Settings

Activate Windows

Start Full Windows Update

Control Panel\Appearance and Personalization\Display

  • Right-Click Desktop->Screen Resolution->Display (Back a level)
    • CHECK: Let me choose one scaling level for all my displays
    • ABOVE SELECT: “Smaller – 100%”

Add Printers

Disable Notifications

  • Win+I->Change PC Settings->Search and apps->
  • Search
    • Strict Search
  • Notifications
    • SWITCH “Show app notifications” to Off

Move Libraries (“C:\Users\Danny”, ALL)

  • Right-Click Library->Properties->Location (tab)
    • Input new shared directory location for library to share with other VHDs
      • (i.e. E:\Desktop)
    • Click Move
    • Accept move all Items

Taskbar and Navigation Properties

  • Right-Click Taskbar->Properties->Jump List
    • Set to 30 items

Add Toolbar to Taskbar for Recycle Bin

  • Right-Click Taskbar->Toolbars->New Toolbar
    • Select Folder with Shortcut to Recycle Bin (E:\Toolbar)

Remove Recycle Bin from Desktop

  • Right-Click Desktop->Personalize->Change Desktop Icons
    • UNCHECK: Recycle Bin

Finish Full Windows Update (Wait – Continuing before Windows is fully updated may cause instability in Windows. It’s best to fully patch Windows and then continue.)

Business Environment Install (Unless noted all apps can be found in Restoration Tools)

  • Office 2013
    • Setup Outlook and Lync
      • Outlook
        • File->Options->Reading Pane->Uncheck: Mark item as read when selection changes
    • OneNote
      • Can’t Open Hyperlinks: http://support2.microsoft.com/default.aspx?scid=kb;en-us;310049

Restoration Tools->Basics->Misc

  • 7zip
  • Notepad++
  • Cubby
  • Paint.NET
  • TreeSize (? Better app for purpose ?)
  • Chrome
    • Sign in to Chrome and all extensions
  • Camtasia
  • Snagit
  • ZoomIt
  • Skype
    • IM & SMS Settings->IM Settings-> Show Advanced Settings
      • Select: paste message as plain text
  • VLC
  • TweetDeck
  • http://baremetalsoft.com/baretail/
    • Restoration Tools->NoInstall
      • Enable Search: Pin to Start

Set Default File Type Associations

  • All associations for Notepad should be changed to Notepad++
  • Search “Default”->Default Programs->Associate a file type or protocol with a program

Pin Chrome Applications (Pandora, Wunderlist)

  • From Pandora (Pick favorite Station)
    • Click Hamburger Icon->More Tools->Create application shortcuts
      • ONLY CHECK: Taskbar

Enable IIS

  • .NET 3.5
  • .NET 4.5
  • IIS 6 Compatibility
  • Other settings as desired (see the DISM sketch after this list)
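One way to script these Windows features is with DISM from an elevated command prompt. This is a sketch only – feature names vary slightly between Windows versions, so confirm them first with dism /online /get-features:

dism /online /enable-feature /featurename:IIS-WebServerRole /all
dism /online /enable-feature /featurename:IIS-IIS6ManagementCompatibility /all
dism /online /enable-feature /featurename:NetFx3 /all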

Full Windows Update (Wait – Again, to prevent instability fully patch Windows at this time.)

Dev Environment Install – Do in Order (Unless noted all apps can be found in Restoration Tools)

  • SQL Server 2012 Dev Edition (MSDN)
  • Visual Studio 2010 (MSDN)
  • Visual Studio 2013 (MSDN)
    • Disable Start Page (Check box at bottom of page)
    • Perform all updates
    • Update Snippets Directory
    • Options->
      • Documents->
        • Check: Auto-load changes, if saved
      • Startup->
        • Show empty environment
        • Uncheck: Download content
      • Tabs and Windows->
        • Uncheck: Allow new files to be opened in the preview tab
    • In Debug Mode->
      • Output Window
      • Solution Explorer
      • Team Explorer
      • Changes
      • Pending changes
  • Resharper
    • Use VS Intellisense
  • Xamarin
  • Telerik
  • Beyond Compare
    • Setup VS (Optional)
  • Xaml Spy
  • Snoop
  • Kaxaml
    • Don’t use as default .xaml file opener
  • MongoDB
    • Still working to figure this one out…

Taskbar Icon Order (Enable Win + [#], ex: Win + 1 launches IE):

  1. IE
  2. Pandora
  3. Chrome
  4. File Explorer
  5. VS13 (Always in Admin Mode)
  6. SQL12
  7. Wunderlist
  8. Excel
  9. Snoop (Always in Admin Mode)

WPF Round Table Part 2: Multi UI Threaded Control – Fixes

Introduction

Click here to download code and sample project – Fixed

Here are the past articles in the WPF Round Table Series:

In my last post I discussed a control I made that allows a user to create inline XAML on different UI threads. Today, I am going to discuss a couple of the pitfalls I ran into when attempting to resolve an issue a user asked about.

FrameworkTemplates

So, a user, rajesh, asked how to solve a particular issue: having 4 controls with busy indicators, all loading on separate threads. As I was attempting to construct a solution, my first instinct was to simply use the ThreadSeparatedStyle property and set the Style’s Template property with the look you want, sort of like this:

<Style TargetType="{x:Type Control}">
	<Setter Property="Template">
		<Setter.Value>
			<ControlTemplate TargetType="{x:Type Control}">
				<multi:BusyIndicator IsBusy="True">
					<Border Background="#66000000">
						<TextBlock Text="Random Text" HorizontalAlignment="Center" VerticalAlignment="Center"/>
					</Border>
				</multi:BusyIndicator>
			</ControlTemplate>
		</Setter.Value>
	</Setter>
</Style>

Suddenly, I was hit with a UI thread access exception when attempting to do this. The problem arises from how WPF allows users to design FrameworkTemplates: WPF instantiates the templates immediately, which causes threading issues when attempting to access this setter value on our separate UI thread. The key to solving this is deconstructing the template into a thread-safe string using XAML serialization. First we grab any FrameworkTemplates from the style:

var templateDict = new Dictionary<DependencyProperty, string>();
foreach ( var setterBase in setters )
{
	var setter = (Setter)setterBase;

	var oldTemp = setter.Value as FrameworkTemplate;

	// templates are instantiated on the thread they're defined in, which may cause UI thread access issues
	// we need to deconstruct the template as a string so it can be accessed on our other thread
	if ( oldTemp != null && !templateDict.ContainsKey( setter.Property ) )
	{
		var templateString = XamlWriter.Save( oldTemp );
		templateDict.Add( setter.Property, templateString );
	}
}

Then, while recreating our Style on the newly created UI thread, we reconstruct the template:

foreach ( var setterBase in setters )
{
	var setter = (Setter)setterBase;

	// now that we are on our new UI thread, we can reconstruct the template
	string templateString;
	if ( templateDict.TryGetValue( setter.Property, out templateString ) )
	{
		var reader = new StringReader( templateString );
		var xmlReader = XmlReader.Create( reader );
		var template = XamlReader.Load( xmlReader );
		setter = new Setter( setter.Property, template );
	}

	newStyle.Setters.Add( setter );
}

Now we are able to design our UI-thread-separated control inline in our main XAML, along with any FrameworkTemplates that are defined within it.

XAML Serialization Limitations

I actually ran into another error when attempting to insert my custom UserControl into the UI-thread-separated Style’s template. It involved a ResourceDictionary duplicate key error. This problem absolutely dumbfounded me: not only in trying to understand why the same resource would be defined twice, but also how there could be duplicates on a newly created UI thread. After racking my brain for hours to come up with a workaround, I eventually found the direct cause of the error. It had to do with how the XamlWriter class serializes the given XAML tree. To give you an idea, let’s say we have our ThreadSeparatedStyle defined like this:

<Style TargetType="{x:Type Control}">
	<Setter Property="Template">
		<Setter.Value>
			<ControlTemplate TargetType="{x:Type Control}">
				<Border Width="100" Height="50" VerticalAlignment="Bottom">
					<Border.Resources>
						<converters:ColorValueConverter x:Key="ColorValueConverter"/>
					</Border.Resources>
					<Border.Background>
						<SolidColorBrush Color="{Binding Source='Black', Converter={StaticResource ColorValueConverter}}"/>
					</Border.Background>
					<TextBlock Text="Random Text" HorizontalAlignment="Center" VerticalAlignment="Center" Foreground="White"/>
				</Border>
			</ControlTemplate>
		</Setter.Value>
	</Setter>
</Style>

When XamlWriter.Save attempts to serialize the ControlTemplate, here is our string result:

<ControlTemplate TargetType="Control" 
				 xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" 
				 xmlns:cpc="clr-namespace:Core.Presentation.Converters;assembly=Core" 
				 xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
	<Border Width="100" Height="50" VerticalAlignment="Bottom">
		<Border.Background>
			<SolidColorBrush Color="#FF000000" />
		</Border.Background>
		<Border.Resources>
			<cpc:ColorValueConverter x:Key="ColorValueConverter" />
		</Border.Resources>
		<TextBlock Text="Random Text" Foreground="#FFFFFFFF" HorizontalAlignment="Center" VerticalAlignment="Center" />
	</Border>
</ControlTemplate>

Now, if we decided to wrap this into a UserControl, called RandomTextUserControl, it may look like this:

<UserControl x:Class="MultiUiThreadedExample.RandomTextUserControl"
             xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
             xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
             xmlns:converters="clr-namespace:Core.Presentation.Converters;assembly=Core">
	<UserControl.Resources>
		<converters:ColorValueConverter x:Key="ColorValueConverter"/>
	</UserControl.Resources>
	<Border Width="100" Height="50" VerticalAlignment="Bottom">
		<Border.Background>
			<SolidColorBrush Color="{Binding Source='Black', Converter={StaticResource ColorValueConverter}}"/>
		</Border.Background>
		<TextBlock Text="Random Text" HorizontalAlignment="Center" VerticalAlignment="Center" Foreground="White"/>
	</Border>
</UserControl>

When we replace our current XAML with this control, we receive the ResourceDictionary XamlParseException because it is trying to include ‘ColorValueConverter’ more than once. If we go back to our XamlWriter.Save result we will find our culprit:

<ControlTemplate TargetType="Control"
                 xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                 xmlns:mute="clr-namespace:MultiUiThreadedExample;assembly=MultiUiThreadedExample"
                 xmlns:cpc="clr-namespace:Core.Presentation.Converters;assembly=Core"
                 xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
	<mute:RandomTextUserControl>
		<mute:RandomTextUserControl.Resources>
			<cpc:ColorValueConverter x:Key="ColorValueConverter" />
		</mute:RandomTextUserControl.Resources>
		<Border Width="100" Height="50" VerticalAlignment="Bottom">
			<Border.Background>
				<SolidColorBrush Color="#FF000000" />
			</Border.Background>
			<TextBlock Text="Random Text" Foreground="#FFFFFFFF" HorizontalAlignment="Center" VerticalAlignment="Center" />
		</Border>
	</mute:RandomTextUserControl>
</ControlTemplate>

As you can see, XamlWriter.Save is actually including our parent-level resources from RandomTextUserControl. This causes a duplication issue, since it will attempt to add the resources displayed here plus the ones already defined inside RandomTextUserControl. The reason is that XamlWriter tries to keep the result self-contained, meaning the final result will be a single-page XAML tree. Unfortunately, the process tends to add any referenced resources that may come from the overall application. This limitation, along with others, is actually documented by Microsoft. So, the solution here is to either put all your resources into the first content element’s Resources property or define the design of your control using a template, like this:

<UserControl x:Class="MultiUiThreadedExample.RandomTextUserControl"
             xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
             xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
             xmlns:converters="clr-namespace:Core.Presentation.Converters;assembly=Core"
             xmlns:multiUiThreadedExample="clr-namespace:MultiUiThreadedExample">
	<UserControl.Template>
		<ControlTemplate TargetType="{x:Type multiUiThreadedExample:RandomTextUserControl}">
			<ControlTemplate.Resources>
				<converters:ColorValueConverter x:Key="ColorValueConverter"/>
			</ControlTemplate.Resources>
			<Border Width="100" Height="50" VerticalAlignment="Bottom">
				<Border.Background>
					<SolidColorBrush Color="{Binding Source='Black', Converter={StaticResource ColorValueConverter}}"/>
				</Border.Background>
				<TextBlock Text="Random Text" HorizontalAlignment="Center" VerticalAlignment="Center" Foreground="White"/>
			</Border>
		</ControlTemplate>
	</UserControl.Template>
</UserControl>

I actually prefer this method since it avoids creating an unnecessary ContentPresenter and allows for more seamless TemplateBinding and triggering with the parent.

Sideloading Windows Store Apps – Install and Configure the Key

In a previous post, I described the process of obtaining a Microsoft key to use for Windows Store apps that are sideloaded (not obtained or installed via the store).  We have taken this approach most recently with the “Magic Wall” software that we built for CNN. Now that you have the key, let’s configure a machine with that key to install and run the sideloaded application.

I was surprised to see that there is nothing to do to the application itself to enable it for sideloading.  You don’t embed your key in the app – it’s completely stand-alone.  This kind of makes sense and has a huge benefit of allowing you to use the same sideloading key for any application, even if it wasn’t originally intended to be sideloaded.  You DO still have to sign your application with a code-signing certificate.  Let’s take care of that first. 

Sign the App With Code Signing Certificate

In your WinRT application project manifest, Packaging tab, use the button to “Choose Certificate…”.  Point to your code signing cert, provide your password, and you’re good.

Sign the application

Now build your app, and create the app package.  You only need two files from the directory of files created by the app package tool: 

  • the .appx (application and resources bundled for installation)
  • the .appxsym (debug symbols, useful for digging through crash dumps, etc)

The appx is still not signed; it’s just built with the certificate. Now let’s sign it. Open a command prompt with administrative privileges, and run the following command, providing the path to the certificate and the certificate password.

SignTool sign /fd SHA256 /a /f {PathToCertificate} /p {Password} {PathToAppx}
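For example, with made-up certificate, password, and package paths (substitute your own):

SignTool sign /fd SHA256 /a /f C:\Certs\MyCompany.pfx /p MyP@ssw0rd C:\AppPackages\MyApp_1.0.0.0_x64.appx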

Install Sideloading Key

Next you have to configure the machine where you want to sideload the application.  You only have to do this once for each machine, and then you can sideload any applications on it.  Again, the key is not tied to the application.  You can easily find this info online, but here it is again for reference.

From an administrative command prompt:

The command below installs the sideloading key on the machine.  Use the key that you got from the Volume License Center key manager.  You should see a success message when it completes.

slmgr /ipk {your sideloading key without curly braces}

Then run the next command, which “activates” the sideloading key.  You must be connected to the internet to run this command, as it will connect with the Microsoft licensing servers to verify the key.  Unlike the key above, the GUID used below is not specific to your sideloading key.  Everyone should use this same GUID. You should see a success message when it completes.

slmgr /ato ec67814b-30e6-4a50-bf7b-d55daf729d1e

Allow Trusted Applications to Install

Next, a simple registry entry allows the OS to install trusted applications (those that are signed).   Add the following key and value to the registry.  You should add the “Appx” key if it doesn’t already exist.

HKEY_LOCAL_MACHINE\Software\Policies\Microsoft\Windows\Appx\AllowAllTrustedApps = 1 (DWORD)
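If you prefer to script it, the same value can be set from an administrative command prompt; this is equivalent to adding the key and value by hand in regedit:

reg add HKLM\Software\Policies\Microsoft\Windows\Appx /v AllowAllTrustedApps /t REG_DWORD /d 1 /f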

Install the Application

Finally, you install the application using PowerShell. Copy the .appx and .appxsym to the target machine where you have enabled sideloading from above. From a PowerShell command prompt, use the following command.

Add-AppxPackage {PathToAppx}
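For example, with a made-up package path:

Add-AppxPackage C:\Install\MyApp_1.0.0.0_x64.appx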

Now you can find the installed application on the start screen list of all apps, or through search. Pin it to the start screen or run it from there.

That’s it.  Hope that works for you.