About the Author

Dan Hanan is a lead software engineer at InterKnowlogy, where he works on a wide variety of software projects related to the Microsoft tech stack. At the center of his universe are .NET, C#, XAML-based technologies such as WPF, Silverlight, Win Phone 7, and Surface, as well as non-UI related bits involving WCF, LINQ, and SQL. In his spare tech-time, he is venturing outside the MS world into the depths of the Android OS. Come back once in a while to check out Dan's random collection of technology thoughts… better yet, subscribe to the RSS feed!

IoT: Ping Pong Scoring via Netduino

Combining InterKnowlogy’s thirst for using the latest and greatest technology with our world-famous ping pong skills provides the following result:  during RECESS, I am making a small device that allows us to quickly keep a digital score of our ping pong matches.

Internet of Things (IoT) is a hot topic these days, so I decided to implement the ping pong scoring system on a Netduino board.  I had dabbled with an older board a year or more ago, and was frustrated: one of the first things I wanted to do was make a call to a web API service, but there was no network connectivity.  Enter the newest board, the “Netduino 3 WiFi”.  It has a built-in button and LED, and it’s extensible by way of the 3 GOBUS ports, where you can easily hook up external modules.

Hardware setup

Netduino 3 board with Gobus modules

My shopping list

Required Software

Netduino is a derivative of the Arduino board, with the ability to run .NET Micro Framework (.NET MF) code.  This means you get to write your “application” for the board using familiar tools like Visual Studio and .NET.  Here are the steps I went through (loosely following this forum post, but updated to current day):

Network Configuration

This board has WiFi (sweet!), which means you need to get it on your wireless network before you go much further.

Use the .NET Micro Framework Deployment Tool (MFDEPLOY) to configure WiFi on the board.

  • Target, Connect
  • Target, Configuration, Network
  • Set network SSID, encryption settings, network credentials, etc.
  • Reboot the device to take on the new settings!
  • GREEN LIGHT means you’re successfully connected to the network (yellow means it’s searching for a network)

Write Some Code!

After installing the VS plug-in, you now have a new project template.

File, New Project, Micro Framework – Netduino Application (Universal)

Go to the project properties and confirm two things:

  • Application, Target Framework = .NET MF 4.3
  • .NET Micro Framework, Deployment.  Transport = USB, Device = (your device)

On-board light

static OutputPort led = new OutputPort( Pins.ONBOARD_LED, false );
led.Write( true );
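Putting those two lines to work, a minimal blink sketch looks like this (the 500 ms interval is my choice, and the exact namespaces vary slightly by board and SDK version):

```csharp
using System.Threading;
using Microsoft.SPOT.Hardware;
using SecretLabs.NETMF.Hardware.Netduino;

public class Program
{
    public static void Main()
    {
        // second argument is the initial state: false = LED off
        OutputPort led = new OutputPort( Pins.ONBOARD_LED, false );

        while ( true )
        {
            led.Write( true );    // LED on
            Thread.Sleep( 500 );
            led.Write( false );   // LED off
            Thread.Sleep( 500 );
        }
    }
}
```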

On-board button

NOTE: There is a bug with the on-board button in the current Netduino 3 board firmware. While your application is running, pressing the on-board button will cause a reset of the device, not a button press in your application. The work-around until the next version of the firmware is to reference the pin number explicitly, instead of using Pins.ONBOARD_BTN. See my forum post for more information.

static InputPort button = new InputPort( (Cpu.Pin)0x15, false, Port.ResistorMode.Disabled );

GO Button

Now attach a GOBUS button module and the code is a little different.  The Netduino SDK provides classes specific to each module that you use instead of general input / output port classes.

The natural way in .NET to react to button presses is to wire up an event handler.  The GoButton class has such an event, ButtonPressed, BUT there’s a bug in the firmware and SDK:  if you react to a ButtonPressed event and, in that handler method (or anywhere in that call stack), you make a call on the network, the call will hang indefinitely.  I discuss this and the work-around with others in a Netduino forum post.

It’s kind of ugly, but instead of wiring up to the events, for now (until the Netduino folks get it fixed), you just sample the IsPressed state of the button in a loop.

Add a reference to Netduino.GoButton.

var goButton = new NetduinoGo.Button();
if ( goButton.IsPressed ) { /* do something */ }
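Putting the work-around together, a polling loop inside your application’s Main might look like this (the 50 ms sampling interval and the edge detection are my own choices, not prescribed by the SDK):

```csharp
using System.Threading;

var goButton = new NetduinoGo.Button();
bool wasPressed = false;

while ( true )
{
    bool isPressed = goButton.IsPressed;

    // react only on the transition from released to pressed
    if ( isPressed && !wasPressed )
    {
        // do something, e.g. call the scoring web API
    }

    wasPressed = isPressed;
    Thread.Sleep( 50 );   // sample roughly 20 times per second
}
```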

Go Buzzer

Add a reference to Netduino.PiezoBuzzer.

var buzzer = new NetduinoGo.PiezoBuzzer();
buzzer.SetFrequency(noteFrequency);
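For a quick score-confirmation beep, you can set a frequency, wait, then silence the buzzer (using SetFrequency(0) to stop the tone is my assumption based on the Netduino Go samples):

```csharp
using System.Threading;

var buzzer = new NetduinoGo.PiezoBuzzer();

buzzer.SetFrequency( 440 );   // A4 note
Thread.Sleep( 200 );          // let it ring for 200 ms
buzzer.SetFrequency( 0 );     // silence
```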

Talk to the Web!

You bought this board because it has WiFi, so you must want to call a web API or something similar.  In my case, I wrote a simple OWIN based Web API service, hosted in a WPF app that is my ping pong scoreboard display.  This gives me the ability to receive HTTP calls from the Netduino board & client code, straight into the WPF application.
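As a rough sketch of that WPF-side service (the controller and route names here are hypothetical, and it assumes the Microsoft.AspNet.WebApi.OwinSelfHost NuGet package):

```csharp
using System.Web.Http;
using Microsoft.Owin.Hosting;
using Owin;

// Web API controller the Netduino calls into; the actual score logic is elided.
public class ScoringController : ApiController
{
    // GET http://<host>:9999/api/Scoring/Increment/1
    [HttpGet]
    public IHttpActionResult Increment( int id )
    {
        // update the score for player 'id' and refresh the WPF scoreboard here
        return Ok();
    }
}

public class Startup
{
    public void Configuration( IAppBuilder app )
    {
        var config = new HttpConfiguration();
        config.Routes.MapHttpRoute(
            name: "ScoringApi",
            routeTemplate: "api/{controller}/{action}/{id}",
            defaults: new { id = RouteParameter.Optional } );
        app.UseWebApi( config );
    }
}

// started from the WPF application, e.g. in OnStartup:
// WebApp.Start<Startup>( "http://+:9999" );
```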

So a call from the Netduino application code to something like http://1.2.3.4:9999/api/Scoring/Increment/1 will give player 1 a point!

I do this using the HttpWebRequest and related classes from the .NET MF.

// error handling code removed for brevity...
var req = WebRequest.Create( url );
req.Timeout = 2000;
using ( var resp = req.GetResponse() )
{
	var httpResp = resp as HttpWebResponse;
	using ( Stream strm = httpResp.GetResponseStream() )
	{
		using ( var rdr = new StreamReader( strm ) )
		{
			string content = rdr.ReadToEnd();
			return content;
		}
	}
}

In my case, the results from my API calls come back as JSON, so I’m using the .NET Micro Framework JSON serializer and deserializer (Json.NetMF NuGet package).

var result = Json.NETMF.JsonSerializer.DeserializeString( responseText ) as Hashtable;
if ( result != null )
{
    Debug.Print( "Score updated: " + result["Player1Score"] + "-" + result["Player2Score"] );
}

Putting that all together, I have a couple physical buttons I can press, one for each player, and a WPF based scoreboard on the wall that removes any confusion about the score!

Hope you too are having fun with IoT!


Kinect Development (Face tracking) – Without a Kinect

In a previous post I talked about how you can use the Kinect Studio v2 software to “play back” a recorded file that contains Kinect data. Your application will react to the incoming data as if it were coming from a Kinect, enabling you to develop software for a Kinect without actually having the device.

This of course requires that you have a recorded file to play back. Keep reading…

More specifically, Kinect for Windows v2 supports the ability to track not only bodies detected in the camera view, but tracking FACES. Even better, there are a number of properties on the detected face metadata that tell you if the person is:

  • looking away from the camera
  • happy
  • mouth moving
  • wearing glasses
  • …etc…

Here at IK, we have been doing a lot of Kinect work lately. It turns out the Kinect v2 device and driver are super picky when it comes to compatible USB 3 controllers. We have discovered that our laptops (Dell Precision m4800) do not have one of the approved controllers. Through lots of development trial and error, we have narrowed this down to mostly being a problem only with FACE TRACKING (the rest of the Kinect data and functionality seem to work fine).

So … even though I have a Kinect, if I’m working on face tracking, I’m out of luck on my machine in terms of development. However, using the technique described in the previous post, I can play back a Kinect Studio file and test my software just fine.

To that end, we have recorded a short segment of a couple of us in view, with and without faces engaged, happy, looking and not, … and posted it here for anyone to use in their Kinect face tracking software. This recording has all the feeds turned on, including RGB, so it’s a HUGE file. Feel free to download it (below) and use it for your Kinect face tracking development.

DOWNLOAD HERE: Kinect recorded file – 2 faces, all feeds. (LARGE: ~4.4GB zipped)

Kinect Recording - 2 faces

Kinect Recording – 2 faces

Hope that helps!

Sideloading Windows Store Apps – When Unlimited Has a Limit

In part 1 and part 2 of this series, I describe how & where to buy a Windows Sideloading Key and then how to configure a machine to sideload your “store” application.

I did not think there was another part to this story … until I checked my license and its number of activations. The license you purchase and use to sideload Windows store applications is supposed to be for an “unlimited number of devices”.

Unlimited Devices

MS Claim of Activations on Unlimited Devices

You can imagine my surprise and frustration when I saw in the Volume License Service Center that I had burned through 7 of 25 activations in the first few days!!

Long story short, after a few emails with the VLSC, they said they set the number of activations on that “UNLIMITED” license to 25 “for Microsoft tracking purposes on how many times the product has been used”. In the event you run out, you can request more activations by contacting the MAK team.

I do NOT want to be in production and getting calls from a customer that can no longer sideload the application because we have reached the maximum number of activations. Sure enough, it took another couple emails, but the MAK team was “happy” to increase the number… to 225. Still not unlimited, but a somewhat large number that I will someday likely have to increase again.

225 Activations

225 Activations

Where I uncovered the answers

      vlserva -at- microsoft.com
      MAKAdd -at- microsoft.com

Kinect Development Without a Kinect

Huh? How can you develop software that integrates with the Microsoft Kinect if you don’t have a physical Kinect? We have a number of Kinect devices around the office, but they’re all in use. I want to test and develop an application we’re writing … luckily, there is another way.

Enter Kinect Studio v2.0. This application is installed with the Kinect v2.0 SDK, and allows you to record and playback streams from the Kinect device. It’s usually used to debug a repeatable scenario, but we’ve been using it to spread the ability to develop Kinect-enabled applications to engineers that don’t have a physical Kinect device. There are just a couple settings to be aware of to get this to work.

Someone has to record the streams in the first place. They can select which streams (RGB, Depth, IR, Body Index, etc. – list of streams shown below) to include in the recording. The recording is captured in an XEF file that can get large quickly depending on which streams are included (on the order of 4GB+ for 1 minute). Obviously, you need to include the streams that you’re looking to work with in the application you’re developing.

Streams to Capture

Choose from many streams to include in the recording

So I have my .XEF file to play back – what next?

  • Open the XEF file in Studio.
  • Go to the PLAY tab
  • IMPORTANT: Select which of the available streams you want playback to contain (see screenshot below)
  • Click the settings gear next to the playback window, and select what output you want to see during playback. This does not affect what your application code receives from the Kinect. It controls display in the Studio UI only.
  • Click the Connect to Service button
  • Click PLAY

You should now start getting Kinect events in your application code.

Here’s what my studio UI looks like (with highlights calling out where to change settings).
Hope that helps.

Kinect Studio UI

Kinect Studio UI

Sideloading Windows Store Apps – Install and Configure the Key

In a previous post, I described the process of obtaining a Microsoft key to use for Windows Store apps that are sideloaded (not obtained or installed via the store).  We have taken this approach most recently with the “Magic Wall” software that we built for CNN. Now that you have the key, let’s configure a machine with that key to install and run the sideloaded application.

I was surprised to see that there is nothing to do to the application itself to enable it for sideloading.  You don’t embed your key in the app – it’s completely stand-alone.  This kind of makes sense and has a huge benefit of allowing you to use the same sideloading key for any application, even if it wasn’t originally intended to be sideloaded.  You DO still have to sign your application with a code-signing certificate.  Let’s take care of that first. 

Sign the App With Code Signing Certificate

In your WinRT application project manifest, Packaging tab, use the button to “Choose Certificate…”.  Point to your code signing cert, provide your password, and you’re good.

Sign the application

Sign the application

Now build your app, and create the app package.  You only need two files from the directory of files created by the app package tool: 

  • the .appx (application and resources bundled for installation)
  • the .appxsym (debug symbols, useful for digging through crash dumps, etc)

The appx is still not signed, it’s just built with the certificate.  Now let’s sign it.  Open a command prompt with administrative privileges, and run the following command, providing the path to the certificate and the certificate password.

SignTool sign /fd SHA256 /a /f {PathToCertificate} /p {Password} {PathToAppx}

Install Sideloading Key

Next you have to configure the machine where you want to sideload the application.  You only have to do this once for each machine, and then you can sideload any applications on it.  Again, the key is not tied to the application.  You can easily find this info online, but here it is again for reference.

From an administrative command prompt:

The command below installs the sideloading key on the machine.  Use the key that you got from the Volume License Center key manager.  You should see a success message when it completes.

slmgr /ipk {your sideloading key without curly braces}

Then run the next command, which “activates” the sideloading key.  You must be connected to the internet to run this command, as it will connect with the Microsoft licensing servers to verify the key.  Unlike the sideloading key above, the GUID used below is not specific to your key – everyone should use this same GUID.  You should see a success message when it completes.

slmgr /ato ec67814b-30e6-4a50-bf7b-d55daf729d1e

Allow Trusted Applications to Install

Next, a simple registry entry allows the OS to install trusted applications (those that are signed).   Add the following key and value to the registry.  You should add the “Appx” key if it doesn’t already exist.

HKEY_LOCAL_MACHINE\Software\Policies\Microsoft\Windows\Appx\AllowAllTrustedApps = 1 (DWORD)
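If you’re scripting the machine setup, the equivalent reg.exe command (run from an administrative command prompt) is:

```shell
reg add "HKLM\Software\Policies\Microsoft\Windows\Appx" /v AllowAllTrustedApps /t REG_DWORD /d 1 /f
```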

Install the Application

Finally, you install the application using PowerShell. Copy the .appx and .appxsym to the target machine where you have enabled sideloading from above. From a PowerShell command prompt, use the following command.

Add-AppxPackage {PathToAppx}

Now you can find the installed application on the start screen list of all apps, or through search. Pin it to the start screen or run it from there.

That’s it.  Hope that works for you.

Sideloading Windows Store Apps – Purchase the Key

Back in April, Microsoft announced that it was making it much easier to obtain a sideloading key for deploying “internal” line of business Windows Store applications. Until then, it was ridiculously prohibitive to acquire a key, so the sideloading story was crippled.  The above link (and this one) has the details, but suffice it to say that you are now able to get a sideloading key for $100. Sounds easy, right?

I set out to buy a key for us to use at InterKnowlogy, but … I searched high and low for information on WHERE to buy such a key. We get our volume licensing via our Microsoft Gold Partnership, and that’s not one of the qualifying methods for already having a sideloading key.  WHERE can I buy the key?

After many calls, I find that the Microsoft Volume License Service Center does not sell it, but instead recommends a volume license re-seller.  (I’m not trying to buy a volume license, just a single license for unlimited sideloading.)  I assume there are lots of volume license re-sellers, but the one I ended up with was Software House International (SHI).

LONG story short:  this key is being offered as part of the Open License Program, which allows you to set up an account even though you haven’t or won’t be buying LOTS of (volume) licenses.

Setup the account, purchase the “Windows Sideloading Rights” license (QTY 1), part #4UN-00005.

No good.  You must buy at least 5 items to qualify for a “volume license”.  WHAT?  I only need a single license, which gives me UNLIMITED sideloads.  Why would I need more than one?

The fix (salesman’s idea): find the cheapest thing in their catalog and buy 4 of them:  “Microsoft DVD Playback Pack for Windows Vista Business” (QTY 4).  $4.50 each!!

Make the purchase, $111.58, and now I have some sweet DVD playback software to give away to developers as prizes!  :)  Download the key, and next blog post, I’ll show you how to use the key to sideload.

Really cool that Microsoft made it cheap to get a sideloading license, but the mechanics of the process (at least to purchase) are still pretty wonky.

(We have taken this approach most recently with the “Magic Wall” software that we built for CNN.)

WinRT and “Fun” (ok, PAIN) With a Grouped GridView

I’ve seen examples of the new WinRT GridView in a bunch of the samples and demos from Microsoft, but had never used the control for my own app before.  During the past couple weeks of RECESS, I’ve been writing a Ping Pong results tracking app with a couple co-workers, and set out to use the GridView in grouped mode.  (Yes, PING PONG – we got a new table at the office a few months ago, and the competition is getting intense. We need to start recording these results!)

As with almost all demos from Microsoft, the app uses some not-so-real-world techniques: manipulating UI from code-behind, and creating the data source from in-memory data stores.  I’m using an MVVM design and getting real data from a real (MongoDB backed) web service.  This is just a quick post on what did and did not work for me when learning the GridView via trial and error.

For all the code below, I’m using a simple hierarchical data structure:  Teams that contain a collection of Players.

Bind GridView Directly to a VM Collection – Does Not Work

    Teams = DataModel.GetTeams();
    <GridView ItemsSource="{Binding Path=Teams}"
              Margin="5"
              >

This works (well, it displays the outermost collection of objects, but doesn’t do any grouping).  For the GridView to use grouped data, you must provide a CollectionViewSource with IsSourceGrouped set to true.  I can’t find anywhere that you can tell the GridView directly that its ItemsSource contains grouped data.

Create CollectionViewSource Manually – Does Not Work

I’ve seen the CollectionViewSource created as a resource in the UI View in all the samples, but since I’m using MVVM, I want to bind to VM properties. I try to create and store a CollectionViewSource in the VM and bind straight to it.

    Teams = DataModel.GetTeams();
	
    CVS = new CollectionViewSource
    {
        Source = Teams,
        IsSourceGrouped = true,
    };
    <GridView ItemsSource="{Binding Path=CVS}"
              Margin="50"
            >

I get an ArgumentException in the CollectionViewSource property setter (via the NotifyPropertyChanged event).  This occurs when the GridView that is bound to that CVS doesn’t like what it sees.

Bind Resource to VM Property – Works

Here’s a compromise – I never realized you could do this.  Declare the CollectionViewSource in XAML (set IsSourceGrouped), but bind its Source to the VM property. Notice I’m also able to set ItemsPath to describe which property on the parent type holds the children.

    <Page.Resources>
        <CollectionViewSource x:Key="GroupedData" 
                              IsSourceGrouped="True"
                              Source="{Binding Teams}"
                              ItemsPath="Players"
                              />
    </Page.Resources>
	
    <GridView ItemsSource="{Binding Source={StaticResource GroupedData}}"
                Margin="50"
                SelectionMode="Single"
                >
    Teams = DataModel.GetTeams();

Use a LINQ Group By Directly – Works

All the samples I’ve seen use a LINQ query to fetch grouped results, and then push those results into a custom data structure.  I originally thought you had to do this for the GridView to understand the grouping, but it’s not required. I see a bunch of discussions where the collection must implement a certain interface, or where you must write your own wrapper classes that derive from IGrouping, etc. but those don’t seem to be required either.

Here I’m setting my grouped LINQ query as the property that the GridView will bind to (again via the CollectionViewSource resource).  Notice a couple things:

  • I’m unnecessarily using LINQ to group my results even though my data is already in a hierarchy (this is just to prove a point)
  • You can project the LINQ results into either concrete types or anonymous types. The GridView, as with any other WinRT UI element, is happy to bind to anonymous types (I remember this was a problem in early versions of WinRT, but seems to be fixed now)
  • The LINQ group container is a string/collection dictionary (Team.Name to Teams in my case), so I cheat a little and call .First() to get to the one and only Team in each group, since I know I have unique team names.
    Teams = DataModel.GetTeams();
    GroupedLinq = from team in Teams
                    group team by team.Name
                        into g
                        //select new Team
                        //{
                        //	Name = g.Key,
                        //	Players = new List<Player>( g.First().Players )
                        //};
                        select new
                        {
                            Name = g.Key,
                            Players = g.First().Players
                        };
    <CollectionViewSource x:Key="GroupedLinqData" 
                            IsSourceGrouped="True"
                            Source="{Binding GroupedLinq}"
                            ItemsPath="Players"
                            />

    <GridView ItemsSource="{Binding Source={StaticResource GroupedLinqData}}"
                Margin="50"
                SelectionMode="Single"
                >
        <GridView.ItemTemplate>
            <DataTemplate>
                <TextBlock Text="{Binding Path=Name}"
                            Width="200"
                            Margin="5"
                            />
            </DataTemplate>
        </GridView.ItemTemplate>

        <GridView.GroupStyle>
            <GroupStyle>
                <GroupStyle.HeaderTemplate>
                    <DataTemplate>
                        <TextBlock Text="{Binding Path=Name}"
                                    FontSize="36"
                                    />
                    </DataTemplate>
                </GroupStyle.HeaderTemplate>

                <GroupStyle.Panel>
                    <ItemsPanelTemplate>
                        <VariableSizedWrapGrid Orientation="Vertical" Height="200" />
                    </ItemsPanelTemplate>
                </GroupStyle.Panel>
            </GroupStyle>
        </GridView.GroupStyle>

    </GridView>

Summary

I hope these examples help you get through the learning curve of the GridView faster than I did.


Migrating TFS Content to the Cloud

We are looking into using Microsoft’s hosted Team Foundation Service, and the first question that always comes up is “I have an existing TFS project, how do I get my source (and work items) up there as a starting point?”  So I decided to look into it, and found the TFS Integration Platform written by the ALM Rangers at Microsoft.  MSDN Magazine has a good article to use for background and to get you started, but I had to do some custom tweaks to get mine to work.

First off – let’s set the goal:  migrate source code (including history) and Work Items from an on-premise TFS to the Microsoft hosted TFS.  I’m starting with a brand new Team Project in the hosted TFS, so if you merge code into an existing project or branch, your mileage may vary.

Second – we use a custom process template (modified from SCRUM) here at IK, and we are going to use the Agile process template online, so that adds a little complexity to the migration, as I have to map fields from the local process template to the hosted one.

Preparation

Create a work item query in the source TFS that includes the work items you want to migrate.  In my case, I have SPRINT types in the local TFS, but those don’t exist in the destination Agile process template, so my query includes only Product Backlog Items (that we’ll map to User Stories), Tasks, and Bugs.

Integration Tool

Fire up the tool, and click Create New. Select the “Team Foundation Server\VersionControlAndWorkItemTracking.xml” template.  This gives you all the configuration settings you need for migrating both source control and work items.

Configure the source (left side) and destination (right side) connection information for each of Version Control and Work Item Tracking.  I used VS 11 (TFS 2012) for both my local and hosted TFS connections.

Choosing Source and Destination

Source Control

By default, the source control session will migrate from the root of the source tree in the local repository to the root of the source tree in the remote one.  This caused a bunch of conflict errors for me, since even the newly created Team Project in the hosted TFS has some build template files in the BuildProcessTemplates folder.  Since the source and the destination both have the same files, there are conflicts during the merge.  You can work around them by specifying which version to use, but I avoid the hassle by filtering the source that’s migrated to a sibling tree that does not include the BuildProcessTemplates folder.  The tool seems to provide a way to cloak a directory such as the BuildProcessTemplates folder, but I could not get that to work.

So for both source and destination, I choose the source tree one level below the root.  Click the “…” button next to each side of the PATHS section to point to the correct source directory.  My config now looks like this:

Filter Source Paths

Work Items

Now we’re going to limit the migration to only the work items that are returned by the query we created above.  Click the “…” button next to the edit box in the “Queries” section and navigate to the query, mine is called “Migrate these Work Items” (again, it only includes Product Backlog Items, Tasks, and Bugs; not Sprints). 

Choose Filter Query

Notice when you come back from choosing the query, the actual CONTENT of the query is inserted into the edit box, instead of just a reference to the query.  I thought that was kind of interesting.

Custom Mapping

Finally, as I described above, I have to do some custom mappings to get the work items from our custom SCRUM template to the hosted TFS Agile template (we want to use some fields that are in Agile and not in Scrum in the hosted TFS, and process templates can’t be customized yet). 

Click the “Custom Settings” button, which gives you a dialog with the settings XML.  I chose to edit the XML in VS and then just paste it in here.  Here’s the top section of what mine looks like:

Custom Settings XML

Here is a zip that contains the settings I used.  You can use it as a starting point and then adjust as needed.  There is a section at the bottom that attempts to map users from the source to destination TFS systems, but I couldn’t get that to work either. 

That’s it – SAVE and START.  Watch the progress, and check for any conflicts. If you get some, you’ll have to resolve each issue independently.  It looks like the tool does a bunch of analysis and validation first, and will only proceed to the actual migration when there are no conflicts.   

Results

When it’s done, I go look at the source in the destination hosted TFS, and see the version history!  (Notice how the check-ins come through as the user that was running the tool, with some extra text to show when/how it was done.)

Results Source

Hope that helps.

Snoop 2.8.0 Released – New Features

I have written before about being part of the Snoop team – contributing to the open source code base for the WPF spying utility.  In the past couple months, we gained some good momentum on some cool new features, and now we are proud to announce the availability of the next release:  Snoop 2.8.0. Special shout out to Cory Plotts for all his work in getting this release out.

Download the latest installer here.

Here is a rundown of some of the new features in the new version.

Capture Changes to Properties

My coworkers and I often end up using Snoop as a kind of “run-time designer”.  Blend can only go so far to show us the run-time look of a given view or control, so we Snoop into our running application and make changes to various properties (font sizes, margins, alignments, etc).  We make these changes interactively, deciding what looks good, and then have to adjust the XAML code accordingly.  If you’ve made changes to more than a couple properties, it’s easy to forget what all the changes were.  This feature will save you.

Whenever you close the Snoop window or the Snooped application, any changes you’ve made to properties during that Snoop session are copied to the clipboard.  You can then paste them into a text editor somewhere to get the concise list of changes.  You don’t have to wait until you stop Snooping to get the list of changes. You can also hit CTRL+SHIFT+C at any time while Snooping to copy the changed property info to the clipboard.  Here’s an example of a set of changes:

Snoop dump as of 2012-10-08 16:34:23
--- OBJECTS WITH EDITED PROPERTIES ---
Object: [008] Rect1 (Rectangle)
Property: Fill, New Value: #FF00208B
Property: Height, New Value: 40
Object: [008] Rect4 (Rectangle)
Property: Canvas.Top, New Value: 45
Object: [007] txtInput (TextBox)
Property: Text, New Value: hello world

The output is organized in a hierarchy, listing each object that contains at least one property that was changed, and then grouping all the changed properties on that object together.  The identifier for the object matches how it’s displayed in the Snoop visual tree UI.


Show Binding Paths

WPF is all about Bindings.  You have a ton of them in any “real” WPF app.  Often, even when the binding is working, I want to see the Path the binding is using, maybe as a way to quickly find the VM property backing a particular UI element.  I decided that a bound property in Snoop should show you the Binding.Path right away, without making you delve into the Binding to see that important info.  Here are some screenshots of various bindings in my sample app:

Basic Binding – shows full property path in brackets next to the value

CropperCapture246

ElementName Binding – shows both the Path and ElementName

CropperCapture255

MultiBinding – shows each Path and optional ElementName

CropperCapture240

PriorityBinding – shows each Path in priority order

CropperCapture256


Show Resource Keys

Since we’ve moved the Snoop source code to GitHub, we’ve been receiving some pull requests with various feature implementations.  One of them, from Dawn Wright, is simple, yet powerful.  Now when Snoop displays a Brush or Style that is a keyed resource, it will show you the x:Key property of the resource.  This is super valuable when Snooping through an app that you didn’t write, allowing you to find the resources a given object uses.  Here are a couple examples:

Inline Brush – not referencing a keyed resource (notice the Type is now shown in [brackets])

CropperCapture257

Brush that References a Keyed Resource

CropperCapture259

Style that References a Keyed Resource

CropperCapture258


PowerShell Integration

I’m not a huge PowerShell user, but I can already tell this contribution from Bailey Ling is a killer new feature.  There is a new PowerShell tab in the Snoop window that lets you party on your objects, using PowerShell commands.  The most basic commands that I know are to set a variable that points to the DataContext of one of your elements, then start making calls to properties and methods in that object graph.

$vm = $selected.Target.DataContext
$vm.Name = "foo"

The possibilities are endless…
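For instance, assuming an element is selected in the Snoop tree and its DataContext is a view model with a Name property (as in the example above), a short exploratory session might look like this:

```powershell
# grab the view model of the element currently selected in the Snoop tree
$vm = $selected.Target.DataContext

# discover what the VM exposes
$vm | Get-Member

# poke a property and watch the UI react through the binding
$vm.Name = "foo"
```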

Check out Bailey’s introduction to the concept in his two blog posts: One and Two.

CropperCapture267

Other Minor Enhancements

ESC key now clears any current search filter.

If Snoop throws an exception from within itself, we catch it and don’t blow up your app.

Visual Studio 2012 – What’s in a Version?

.NET Framework and CLR

We know that Visual Studio 2012 ships with and installs the new .NET Framework 4.5, which brings new features such as portable class libraries, async/await support, async file I/O, and enhancements to W*F, to name a few.  This time, instead of installing side-by-side with previous versions, the new framework REPLACES the previous one (.NET 4).  What does this mean?

.NET 2.0, 3.0, and 3.5 all ran on top of the .NET 2.0 CLR.  Then .NET 4 came along with its own CLR, which installed “next to” the 2.0 CLR.  If you run a .NET application targeting 2.0 through 3.5, it uses the 2.0 CLR; if you run a 4.0 application, it uses the 4.0 CLR.  Now .NET 4.5 comes along, and you would think it would either (a) run against the existing 4.0 CLR, or (b) come with its own CLR.  Well, it does neither – it REPLACES the 4.0 CLR with its own, the 4.5 CLR.  Clear as mud?  Here’s a good picture from Microsoft that describes the landscape.

DotNet Versions and CLRs

What gets difficult is identifying what version of the .NET framework you have installed on your machine, either by hand or programmatically at runtime.  Since the 4.5 CLR replaces the previous one, it lives in the same exact place on disk – most of it at %WINDIR%\Microsoft.NET\Framework\v4.0.30319.  This version-based name of the folder did not change between 4.0 and 4.5.  What DID change are the files contained in the folder. 

 

                      .NET 4.0 Installed    .NET 4.5 Installed
MSCOREE.DLL version   4.0.30319.1           4.0.30319.17929

                      Target .NET 4.0       Target .NET 4.5
Environment.Version   4.0.30319.17929       4.0.30319.17929

Notice the Environment.Version property returns the same thing whether you’re targeting the 4.0 or 4.5 framework in your project.  This is because the property returns the CLR version (remember, 4.0.30319.17929 means 4.5), not the version of the framework your code is targeted at.
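You can see both numbers from code.  Here’s a minimal console sketch (not from the post) that prints the CLR version and the file version of mscorlib:

```csharp
using System;
using System.Diagnostics;

class ClrVersionInfo
{
    static void Main()
    {
        // The CLR version - identical whether the project targets 4.0 or 4.5.
        Console.WriteLine("Environment.Version: " + Environment.Version);

        // The file version of mscorlib reveals the in-place update:
        // 4.0.30319.1 means 4.0 is installed, 4.0.30319.17929 means 4.5 RTM.
        string mscorlibPath = typeof(object).Assembly.Location;
        FileVersionInfo fvi = FileVersionInfo.GetVersionInfo(mscorlibPath);
        Console.WriteLine("mscorlib file version: " + fvi.FileVersion);
    }
}
```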

What’s Installed?

You might be asking, “how do I know what’s installed on this machine?”.  In the past, I’ve gotten used to opening the %WINDIR%\Microsoft.NET\Framework directory structure and checking the folder names, but that’s not enough with 4.5.  There are 3 ways that I know of:

  1. Check the FILE VERSION of one of the .NET framework assemblies (MSCOREE.DLL, SYSTEM.DLL, etc.)
  2. Check the Programs and Features list for “Microsoft .NET Framework 4.5”.  To complicate matters, it shows a version of 4.5.50709.
  3. Check a gnarly registry key:  HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full.  If there’s a DWORD value named “Release”, then you have .NET 4.5.  If it’s greater or equal to 378389, then you have the final released version of .NET 4.5. 
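The registry check in option 3 can be sketched in C# like this (a minimal example, not from the post):

```csharp
using System;
using Microsoft.Win32;

class Net45Check
{
    // 378389 is the Release value of the final (RTM) .NET 4.5.
    public static bool IsNet45Rtm(int release)
    {
        return release >= 378389;
    }

    static void Main()
    {
        using (RegistryKey key = Registry.LocalMachine.OpenSubKey(
            @"SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full"))
        {
            // The mere presence of the Release value means some 4.5 build
            // is installed; its magnitude tells you which one.
            object release = (key == null) ? null : key.GetValue("Release");
            if (release == null)
                Console.WriteLine(".NET 4.5 is not installed");
            else
                Console.WriteLine("Release = {0} (4.5 RTM or later: {1})",
                                  release, IsNet45Rtm((int)release));
        }
    }
}
```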

Code Compatibility

You can run .NET 4.0 code on a machine with .NET 4.5 installed – that’s just standard backwards compatibility.  But can you run .NET 4.5 code on a machine with only .NET 4.0?  Since the CLRs are “pretty similar”, with 4.5 simply having more features, the 4.5 code will actually run, but only if it doesn’t use any 4.5-specific features that are missing from the 4.0 CLR.  For example, if you use the new 4.5 async/await pattern in your code, it will blow up on a machine with only 4.0 installed.
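For example, a method like this compiles fine when targeting 4.5, but the machinery behind async/await (and methods like Stream.ReadAsync) doesn’t exist on a 4.0-only machine – a sketch, with a made-up method name:

```csharp
using System.IO;
using System.Threading.Tasks;

class AsyncCompat
{
    // Compiles when targeting .NET 4.5, but on a machine with only the
    // 4.0 CLR this fails: the async/await support types and
    // Stream.ReadAsync simply aren't there.
    public static async Task<int> ReadHeaderAsync(string path)
    {
        using (FileStream fs = File.OpenRead(path))
        {
            byte[] buffer = new byte[16];
            return await fs.ReadAsync(buffer, 0, buffer.Length);
        }
    }
}
```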

Entity Framework

I mention EF here not because it comes with Visual Studio 2012 (it doesn’t – it’s available via NuGet), but because it has some interesting version behavior as well.  EF 5 comes with support for Enum types, which means your database tables will have first-hand knowledge of the Enum values, instead of forcing you to use strings and lookup tables.  However, that support for Enums is built upon new functionality in the .NET 4.5 framework.  So if your project is targeting .NET 4, you don’t get Enum support.  What’s tricky about this is the way NuGet sets up the references to Entity Framework for you.  It’s smart about it, but it’s not obvious.

When you use NuGet to add EF support to your project, the installation process detects what .NET version your project is targeting.  If it’s 4.0, your project will reference the 4.4 version of EF, the one WITHOUT support for Enums.  If your project is targeting .NET 4.5, your project will reference the 5.0 version of EF, and WILL have support for Enums.  You can see which one you have based on the version number of the EntityFramework referenced assembly.

Targeting .NET 4.0 Targeting .NET 4.5
CropperCapture236 CropperCapture237
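As a sketch of what EF 5’s Enum support buys you (assuming the EntityFramework 5.0 NuGet package on a .NET 4.5 project; the enum and entity names here are made up):

```csharp
using System.Data.Entity;   // from the EntityFramework NuGet package

// hypothetical enum and entity, just to illustrate EF 5 enum mapping
public enum Hand { Left, Right }

public class Player
{
    public int Id { get; set; }

    // With EF 5.0 on .NET 4.5, this maps directly to an int column.
    // With EF 4.4 (what NuGet gives a .NET 4.0 project), there is no
    // enum support, so you'd be back to strings or lookup tables.
    public Hand PreferredHand { get; set; }
}

public class ScoresContext : DbContext
{
    public DbSet<Player> Players { get; set; }
}
```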

 

Well that’s enough version information for one post.  Hopefully this gives you a hint as to what is going on in your application when you’re running on machines with different versions of the .NET Framework.