Crossing The Finish Line With IMSA


As Tim called out the other day, we recently went live with a brand new mobile experience for IMSA, and I had the privilege of leading the engineering team on the project here at IK. The scope and timeframe of the project were both ambitious: we delivered a brand-new, content-driven mobile app with live streaming audio and video, real-time in-race scoring results, custom push notifications, and more, across all major platforms (iOS, Android, and Windows), with custom interfaces for both phone and tablet form factors – all in a development cycle of about twelve weeks. It goes without saying that the fantastic engineers here at IK are all rockstars, but without some cutting-edge development tools and great partnerships, we could never have gotten this project across the finish line on time:

  • Xamarin allowed our team to utilize a shared codebase in a single language (C#, which we happen to love) across all of our target platforms, enabling massive code reuse and rapid development of (effectively) six different apps all at once.
  • Working closely with the team at Xamarin enabled us to leverage Xamarin.Forms to unlock even further code-sharing than would have been otherwise possible, building whole sections of the presentation layer in a single, cross-platform XAML-based UI framework.
  • On the server side, the continued world-class work on Azure by our partners at Microsoft made utilizing Mobile App Service (née Azure Mobile Services) a no-brainer. The ability to scale smoothly with live race-day traffic, the persistent uptime in the face of tens of thousands of concurrent users making millions of calls per day, and the ease of implementation across all three platforms all combined to save us countless hours of development time versus a conventional DIY approach to the server layer.
  • Last but not least, being able to leverage Visual Studio’s best-of-breed suite of developer tools was essential to the truly heroic productivity of our engineering team at crunch time. And Visual Studio Online enabled the Project Management team and me to organize features and tasks, track bugs, and keep tabs on our progress throughout the hectic pace of a fast development cycle.

The final result of this marriage of cutting-edge cross-platform technology and an awesome team of developers is a brand new app experience that’s available on every major platform, phone or tablet – and this is just the beginning; we have lots of great new features in store for IMSA fans worldwide. I’ll be following up with a couple more technical posts about particular challenges we faced and how we overcame them. In the meantime, be sure to check out the next IMSA event in Detroit the weekend of May 29-30; I know I’ll be streaming the live coverage from my phone!

Visual Studio 11 Solution Upgrading

If you’re an early Visual Studio adopter like me, you’ve probably gotten used to running into the same problem every few years: upgrading your code. Although VS usually handles the job of converting files to work in the newer version, the big problem has usually been trying to go back to the old version. If you’re the only one working with the code, that might not matter, but if you work on a team that has a mix of, for example, VS 2008 and VS 2010 Beta clients, the conversion process causes problems for one or the other.

Finally, in VS 11, this problem is being addressed. The ultimate goal is to be able to open a VS 2010 solution in VS 11 with minimal conversion, applying no breaking changes to the project or solution files that would prevent them from still opening directly in 2010. There will of course be some restrictions, like not upgrading to .NET 4.5, but in general it seems like a pretty reasonable goal.

To see how it’s working so far in the Developer Preview, I tried upgrading a few of my own projects, including a 50+ project solution with lots of complications. The good news is that the upgrade process succeeded without preventing VS 2010 from opening any of the converted solutions I tried. There were, however, some issues.

Remote Debugging

Recently we’ve been developing an application that runs on a Win7 PC and has a slimmed-down version that runs on Win7 tablets – in our case, the HP Slate. Our development machines are running Win7 and VS2010. The development machines are on our domain while the HP Slates are not. The project has been going for some months now, and we’ve been relying heavily on logging using Enterprise Library to know what errors we’re up against. However, we all know that as the complexity of an application increases, so does the likelihood that we log incorrect, useless, or unrelated data. Enter the need for remote debugging. I thought this would be straightforward and easy, but I was very much mistaken.

Getting Ready to Remote Debug
I started with the MSDN remote debugging documentation (http://msdn.microsoft.com/en-us/library/y7f5zaaa.aspx), specifically the section How to: Set Up Remote Debugging (http://msdn.microsoft.com/en-us/library/bt727f1t.aspx). Here I learned that firewalls and permissions are the first huge hurdles called out by Microsoft. With limited time, fighting through firewall configuration was not an option, so I simply disabled the firewalls on all machines involved for this remote debugging session. Later, the same article explains how to install the software required on the remote machine in order to debug remotely. The link to the download center for that software is broken, but the alternative location is on the VS2010 installation media in the folder (VS Media)\Remote Debugger. Be sure to install the correct version on the remote machine; instructions for this are in the article.

Running the Remote Debugging Monitor
STOP! Why? Because you really should figure out which user accounts you’re going to use on both the VS2010 host machine and the remote machine before trying to run the Debugging Monitor.
Selecting the Appropriate User Accounts for Remote Debugging
I tried to remote debug at this point and found there was one more piece of information I was missing. Going back to the list of articles on MSDN (http://msdn.microsoft.com/en-us/library/y7f5zaaa.aspx), the next important section is Remote Debugging Across Domains (http://msdn.microsoft.com/en-us/library/9y5b4b4f.aspx). This section lists the restrictions relating to user accounts right away. I will quote the relevant part here for review:
To connect to msvsmon, you must run Visual Studio under the same user account as msvsmon or under an administrator account. (You can also configure msvsmon to accept connections from other users.)
Visual Studio accepts connections from msvsmon if msvsmon is running as a user who can be authenticated on the Visual Studio computer. (The user must have a local account on the Visual Studio computer.)
With these restrictions, remote debugging works in various scenarios, including the following:

  • Two domains without two-way trust.
  • Two computers on a workgroup.
  • One computer on a workgroup, and the other on a domain.
  • Running the Remote debugging monitor (msvsmon) or Visual Studio as a local account.

Therefore, you must have a local user account on each computer, and both accounts must have the same user name and password. If you want to run msvsmon and Visual Studio under different user accounts, you must have two user accounts on each computer.

You can run Visual Studio under a domain account if the domain account has the same name and password as a local account. You must still have local accounts that have the same user name and password on each computer.

End Quote.

We will cover the last part in detail since that is the scenario I was faced with. In simple terms, if your domain is named us and the user account is danny (us\danny), then the VS2010 host machine must have a local user account named danny, with admin privileges, using the same password as the us\danny account. The remote machine, in my case the HP Slate, must also have a local user account named danny with the same password as the other two danny accounts, and it too must be an admin on the machine. After all the accounts are created and have each been logged into for the first time, restart all machines, boot the VS2010 host machine into the domain account (us\danny), and boot the remote machine into the local account with the same name as the domain account (machine name\danny).

This essentially means that three (3) accounts named danny must exist – us\danny, vs2010 machine name\danny, and remote machine name\danny – and all of them must be admins on their respective boxes and all must use the same password.
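
If you’d rather script the local account creation than click through Control Panel, something like the following from an elevated command prompt on each machine should do it (just a sketch; danny and the Administrators group come from the scenario above, and the * makes net user prompt for the password, which must match on all three accounts):

rem create the local account and prompt for its password
net user danny * /add
rem make the account a local administrator
net localgroup Administrators danny /add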

Running the Remote Debugger (For Real)
Again returning to the list of articles on MSDN (http://msdn.microsoft.com/en-us/library/y7f5zaaa.aspx), the next article is How to: Run the Remote Debugging Monitor (http://msdn.microsoft.com/en-us/library/xf8k2h6a.aspx). You have two choices at this point: run the monitor as needed, or run it as a service. Because we will be constantly debugging using the HP Slate, I decided to run the monitor as a service, which requires a little extra work. You will need to run Local Security Policy; under Local Policies\User Rights Assignment there is a policy named Log on as a service that must be granted to the user that will be running the debugging monitor.
Performing a Remote Debug
I’m still struggling with this a little: I cannot figure out how to have VS2010 compile and launch the application I want to debug on the remote machine. Instead, copy the compiled project files from the bin\debug folder of your project to the remote machine (I suggest a location that is easy to access and replace files in). Run the executable on the remote machine, then in VS2010 on the host machine open the Attach to Process dialog (Ctrl+Alt+P). There is a dropdown named Qualifier that, when opened, will detect the machine you’re trying to remote debug. Select the remote machine, then attach to the process on the remote machine just as you would if the process were running on the VS2010 host machine.

Thoughts
It only took a few hours to figure all this out. The part that kept killing me was that even after you create a user account, it will not be available for use until you’ve logged on as that user at least once. Remote debugging is definitely cool! It helped us figure out quite a few issues in our remote code very quickly. Now that I know how to set up a remote debugging environment I will be using it more often.

Initial Thoughts on Blend 3 MIX Preview

After trying out the new features in Blend 3 I’m definitely looking forward to the full release. It already seems more responsive than Blend 2 SP1 while navigating around files, templates, selected elements, etc., even at this pre-Beta stage. I was disappointed to see SketchFlow missing from the Preview, but I look forward to it being included once it’s stable enough.

There are a bunch of nice UI tweaks:

  • Tool panels now allow pinning (like in VS) to autohide. The Data panel and the Objects and Timeline panel are also now their own panels, so they can move around freely. The left-side tool strip now floats and docks too.
  • The Active Container is no longer a static setting. It’s still possible to pin it to a fixed object, but now that’s done from the right-click menu (Pin Active Container) instead of by double clicking. I was annoyed by this at first and wondered why there was no yellow selection border, but then realized that Blend was inferring a container based on what I was doing. Double clicking to add an element now uses the current selection as the container to add to, or its parent if it doesn’t support children. A blue border takes the place of the yellow indicator in these cases.

Even better, when adding by dragging directly on the design surface, the container is inferred graphically and switches (with blue border) as you move the cursor around.

  • Just as with the active container, elements are highlighted with a blue border as you mouse around with the selection arrow tool, making it easy to find elements. This includes elements that are buried in layers of panels, and selecting an element is as easy as clicking on it when it’s highlighted, virtually eliminating the need for the right-click “Set Current Selection” menu item (although it is still available) – I haven’t had to use it even once yet.
  • Design-time sizing is available on more root containers than before, including both Control and Data Templates. Rather than being set on the template itself, the size is set on the first element in the tree.

  • The old Gradient tool that was pretty much only useful for rotating gradients now allows direct setting of stops on the design surface. Each one is shown as a circle and can be dragged to a different spot or edited in place (with popup color picker) by double clicking. Alt-clicking adds new stops and dragging a stop out of the gradient area removes it. The Properties panel brush editor itself has also added the ability to set a gradient stop position as a numeric percentage (great for translating from Illustrator) and reverse the order of gradient stops.

  • Annotations allow design notes to be added directly to the UI that only show up in Blend (think comments for designers). I tried adding some in a real WPF project and the attached properties they created didn’t seem to have any effect on VS or Blend 2. I haven’t figured out where the AnnotationManager class lives, so I’m hesitant to leave notes in files that other people will need to edit without Blend 3, but at least they didn’t cause any immediate backward-compatibility issues on my own box.
  • New keyboard shortcuts abound. Look around the menus to find them – the help hasn’t been updated yet.

My favorite part of the new version is design time data – something I’ve been doing awkward workarounds for since I started using Blend. There are lots of options, but my favorite so far is the generated sample data source. I first tried importing an XML file, which gave me a full set of data and showed all the available fields, including nested collections. I then set up another data source from scratch, using only the Blend UI to define the data structure:

Anywhere there’s a “+” a simple, complex or collection property can be added. Simple properties can be set as String, Number, Image, or Boolean types, with further options for each. String properties can be set to things like Names, Dates or Phone numbers to provide realistic looking data. Images of chairs are provided out of the box but can be replaced by pointing to a folder of image files of your choice.

All of this results in a generated class that includes a bunch of dummy data and is declared as a resource in a location of your choice. Any element from the data source can then be dragged onto the design surface to set up a binding or generate a bound control. I haven’t quite gotten the hang of predictably hooking up data by dragging, but through a combination of dragging elements and using Blend’s data binding window (which consistently displayed my test data structure through the Data Context tab), I managed to quickly set up a complete multi-level master-detail UI for the structure I defined without jumping into XAML at all. An option allows you to switch between using the data only at design time (d:DataContext) or at design time and run time (DataContext) – this should be good for initial testing of a UI.
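
To make the distinction concrete, the switch amounts to something like this in XAML (SampleDataSource is a hypothetical resource name standing in for whatever Blend generates):

<!-- Design time only: ignored at run time because of mc:Ignorable="d" -->
<Grid d:DataContext="{Binding Source={StaticResource SampleDataSource}}" />

<!-- Design time and run time -->
<Grid DataContext="{Binding Source={StaticResource SampleDataSource}}" />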

Through a few hours of working on a live WPF project I haven’t had any problems using the new version and hopping back and forth to VS. I even tried opening the same project in Blend 2, Blend 3, and VS and making changes in all three with no problems. I haven’t done much more than start a new Silverlight 3 project, but that worked fine too, and the Preview install doesn’t appear to have negatively impacted my existing Silverlight 2 dev environment.

Get the preview here

and if you’re using TFS look at http://code.msdn.microsoft.com/KB967483 for the hotfix needed to hook up the Preview’s source control integration.

Setting Up Test Data for Blend

One of the big challenges in using Blend to set up an application that relies heavily on dynamic data is that without realistic data available at design time, what you see isn’t what you get at run time. Blend includes some basic built-in data generation that can be useful for simple data structures, but this won’t really help much if you have conditional layout that expects specific values.

If all you need is a generic set of data to fill out an ItemsControl, you can have Blend generate design-time data for itself. Select the control to generate data for and open the data binding dialog for the ItemsSource property (either directly or through the “Bind ItemsSource to Data…” context menu option). Locate the collection to bind to and click the “Define Data Template” button at the bottom. This dialog can set up a basic data template (which will probably need some tweaking later) and, more importantly, includes the “Generate sample data” checkbox at the bottom. Selecting this adds a new d:UseSampleData="True" setting on the ItemsControl (along with the d and mc xmlns declarations used by Blend), which signals Blend to create data for the types of the items in the bound collection – see the snippet below. Unfortunately, the values generated are pretty limited: strings are fruits, ints are small values, and enums are chosen at random (in the latest 2.0 SP1 version; earlier versions didn’t handle enums).
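
For reference, the resulting markup looks roughly like this (the ListBox and its Customers binding are hypothetical; the d and mc namespaces are the standard Blend design-time declarations):

<ListBox ItemsSource="{Binding Customers}"
         d:UseSampleData="True"
         xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
         xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
         mc:Ignorable="d" />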

When the built-in generation isn’t enough, test data can be made available in a number of ways, but there are some challenges to getting it to show up. In most cases the key to getting Blend-specific data relies in some way on a check of the DesignerProperties.IsInDesignMode attached property. This value is false at run time and true at design time in Blend. It can be checked with a call to DesignerProperties.GetIsInDesignMode, passing a DependencyObject instance, or statically with the more verbose (bool)DependencyPropertyDescriptor.FromProperty(DesignerProperties.IsInDesignModeProperty, typeof(DependencyObject)).Metadata.DefaultValue.
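
Wrapped up as a small helper, the two checks described above look like this (the DesignMode class name is mine; the calls are the standard WPF APIs):

using System.ComponentModel;
using System.Windows;

public static class DesignMode
{
    // Instance-based check: pass in any DependencyObject from the view.
    public static bool IsInDesignMode(DependencyObject obj)
    {
        return DesignerProperties.GetIsInDesignMode(obj);
    }

    // Static check for when no DependencyObject instance is available.
    public static bool IsInDesignModeStatic
    {
        get
        {
            return (bool)DependencyPropertyDescriptor
                .FromProperty(DesignerProperties.IsInDesignModeProperty, typeof(DependencyObject))
                .Metadata.DefaultValue;
        }
    }
}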

Since Blend skips the constructor and many initialization event handlers declared in the code-behind of the view being loaded, anything that is hooked up in those methods will not be called in Blend. This can make it tricky to even get to code that can check IsInDesignMode and generate your data. To ensure code gets called it needs to somehow be referenced from XAML. A few ways to do this:

  • A property assignment of a new object with generation code in the constructor

<Window.DataContext>
    <local:ViewModel />
</Window.DataContext>

  • A Binding to a property that includes generation code

DataContext="{Binding RelativeSource={RelativeSource Self}, Path=ViewModel}"

  • An assignment to a custom attached property that includes generation code in its set method

For any of these methods, you should either use an IsInDesignMode check or make sure that everything being run will work at both Blend design time and run time. See my post on debugging Blend errors for more on Blend-unsafe code. The simplest way to actually get the test data from your code to Blend is by assigning it to a DataContext property of some element (as in the examples above). If you plan ahead, you should be able to get both your run-time and design-time data through the same DataContext properties, which will keep your Binding behaviors consistent.
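
Here’s a minimal sketch of the first approach (generation code in the constructor), assuming a hypothetical ViewModel with a Customers collection; the Customer type and sample values are invented for illustration:

using System.Collections.ObjectModel;
using System.ComponentModel;
using System.Windows;

public class Customer
{
    public string Name { get; set; }
    public int Orders { get; set; }
}

public class ViewModel
{
    public ObservableCollection<Customer> Customers { get; private set; }

    public ViewModel()
    {
        Customers = new ObservableCollection<Customer>();

        // Only generate fake data when a designer (Blend) is hosting the view.
        bool designMode = (bool)DependencyPropertyDescriptor
            .FromProperty(DesignerProperties.IsInDesignModeProperty, typeof(DependencyObject))
            .Metadata.DefaultValue;

        if (designMode)
        {
            Customers.Add(new Customer { Name = "Jane Test", Orders = 3 });
            Customers.Add(new Customer { Name = "John Sample", Orders = 12 });
        }
    }
}

Because the XAML above assigns local:ViewModel to Window.DataContext, the constructor runs at both design time and run time, but the dummy data only appears in Blend.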

Sharing Assemblies Between Silverlight and WPF

One of the significant benefits of Silverlight is the ability to share code with WPF desktop applications. Unfortunately, in practice there are quite a few hurdles to sharing code, due mainly to the restricted set of classes available in Silverlight’s framework.

One consequence of the way Silverlight is built is that class library projects can only be referenced if they are created specifically as Silverlight projects, and Silverlight class libraries can’t be used by standard .NET projects. The standard workaround is to create two projects, one Silverlight and the other standard .NET, and share all of the code files, either by keeping the projects in a shared folder or by adding the files as links in one of the projects. This works, but it creates a dual-maintenance headache and also means that every “shared” project is actually compiled twice into separate assemblies that must each be managed.
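
For reference, a linked code file shows up in the second project’s .csproj as something like this (paths hypothetical):

<Compile Include="..\MyLibrary\Validation.cs">
  <Link>Validation.cs</Link>
</Compile>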

Fortunately, the restrictions on referencing projects are primarily a Visual Studio mechanism, and if you’re careful it’s actually possible to trick VS into using a single project that can be referenced by both Silverlight and full .NET applications. If you can stick to the basic parts of the BCL that are available everywhere, you can even compile a single assembly that will work in both environments!

I’m still working out how to get this completely set up in a practical application but I’ll post more when I do.

Using CodeRush and Resharper together

I love developer tools. I’ve been an avid user of Developer Express’s CodeRush and Refactor Pro for years and have gotten so used to having them that their shortcuts have become reflexes; I get uncomfortable when I have to use Visual Studio without them. I’ve also come across a lot of devoted Resharper users who feel the same way about their favorite tool. The ongoing debate about which is better is, I think, a matter of personal preference. Both of these tools add lots of extra capabilities to VS, and I would recommend that anyone who spends a lot of time in Visual Studio try them both.

For a long time I wanted to try adding Resharper on top of the DevExpress tools but was always too worried about crippling CodeRush or eating all of my machine’s resources. I had tried Resharper a few times in the past but had quickly given up when I saw how much memory overhead its background compiling added. A few months ago I started working with a team that all use Resharper and was impressed with some of the features it added, especially in the XAML editor. This finally gave me the extra nudge I needed to get the two products to work together, and I’m happy to report that if you can afford to buy them both and your hardware can handle it, they do play well together.

I’ve now gotten quite comfortable with running the current versions of the DevExpress products (3.0) and Resharper (3.1) in both VS05 and now VS2008. A few tips if you’re going to try them both:

  • I’ve used the pattern of installing CR and Refactor first, then Resharper. I ran into a problem at one point with losing some DevExpress settings after the Resharper install, so I now zip up and then restore my entire Settings folder when installing Resharper.
  • Both products have their own expansion template languages (like more powerful snippets). CodeRush includes many more templates out of the box and is my default choice, but each template language has its own strengths. My setup uses Space for CR expansion and Tab for Resharper expansion (I think those are both the defaults), so I can use either one or both for new templates, depending on which language fits the specific template I’m creating. CR’s type substitution makes for more flexible templates in most cases. For templates that surround the current selection, Resharper’s “Surround With” templates are quicker to add (but less flexible) than the corresponding CR feature.
  • A few features common to both products can conflict: things like auto-closing braces and on the fly formatting. I got around a bunch of minor problems by turning off everything in the “Editor” options for Resharper. A lot of other common features (like refactorings and navigation) can be left on because they’re triggered in different ways. The more you use them the more comfortable you’ll get with mixing features from both.
  • You will use more RAM. CodeRush and Refactor use somewhere in the neighborhood of 100-150 MB, and Resharper uses about 75 MB plus more for background compiling, depending on the size and complexity of the open solution. For large solutions Resharper can account for half of the total VS memory usage. A project I’m currently working on has a solution of ~20 projects that runs at about 850 MB with both tools running, 400 MB of which is used by Resharper. Both tools can be flipped on and off from the Add-in Manager if you want to temporarily do without their overhead and features.
  • I have not tried doing a complete removal of both tools, so if you’re thinking about just trying this out, do it in a VPC or on a system you wouldn’t mind rebuilding. I’ve read about people having VS problems after removing past versions, but I’m not sure how much the uninstall process has been improved recently for either product.
  • Don’t give up on either product if it feels like it’s slowing you down at first! You will need some time to train yourself to do things in new ways but it will become much easier the longer you do it.

Here are some of my custom CodeRush templates for WPF in XAML and C#: CodeRush templates
These are some Resharper templates for XAML: Resharper templates

Get more info or download trials: Developer Express CodeRush and Refactor | Jetbrains Resharper

Moving to Visual Studio 2008

After trying out the Orcas Betas I was very disappointed that the XAML editor had actually been made harder to use, especially after using the VS 2005 November 2006 CTP add-in for so long. There were some great additions to the editor, and it seemed to be more stable, at least in the time I spent in it, but there were some usability differences that I found infuriating and that forced me to stop using the Betas for my WPF development. Now that I’ve been using the RTM version for a while, I’m happy to report that the XAML editor is much more usable thanks to the new features that made it into the final product.

Now, if you’ve gotten used to the old XAML editor (or the plain XML editor, which the CTP version was built on) closing your tags and adding quotes for you, you won’t need to stop every five keystrokes to go back and add all the stuff you expected to be generated, as you did when using Orcas. The new default view option is further icing on the cake. These issues were brought up by a lot of people during the Beta process and were obviously added into the product near the end of the development cycle, as indicated by the separate Options dialog page for their settings as well as their absence from the Betas. This really shows that MS does heed the feedback they get during the CTP/Beta process.

A few additional WPF features that you get by moving from the 2005 CTP to 2008 RTM:

  • The designer actually works more than 10% of the time! It’s still quite possible to “Whoops” it, and it’s still short of the rendering ability of Blend, but it’s a big step up from the CTP. It supports double clicking to create event handlers, but unfortunately I haven’t yet found a way to select anything other than the default event like you can in Blend and in the WinForms/ASP.NET designers.
  • Intellisense doesn’t break when you reference a custom xmlns. It’s also much more geared toward XAML now that it’s not based on the generic XML-schema Intellisense. It also works for adding xmlns definitions and lets you choose from a (more or less) complete set of .NET namespaces in assemblies referenced by your project.
  • Document Outline now works for XAML and even has an element level preview feature that renders an element as you mouseover it in the outline.
    Document Outline

    The preview does require that the design view be rendered but the outline itself is always there even if you’re staying in the text view. The outline breadcrumb familiar from the ASP.NET designer also appears at the bottom of the editor in both modes and shares the same preview functionality.
    Breadcrumb

  • Expanded syntax highlighting provides separate font settings for XAML which consist of all the normal XML categories plus Markup Extension Class, Markup Extension Parameter Name, and Markup Extension Parameter Value. There’s also opening/closing tag highlighting that works just like the bracket highlighting in C#.
  • Formatting options let you set a tag wrapping length so elements with many attributes will get wrapped to the next line or you can have every attribute placed on its own line.
  • The Property dialog works for XAML. Like the Document Outline preview, it requires that the designer render the XAML, but after that it can be used even in the text-only editor, where it magically adds new attributes for any settings you change. This should be especially helpful for beginners who would otherwise waste time digging through Intellisense to find the name of the property they want to set.
    Property Grid
  • The integrated Zoom control lets you zoom in or out on the designer just like in Blend. It also has a fit-to-window button that toggles between 100% and the maximum viewable size.
  • Solution Explorer knows that you’re in a WPF project. Now if you right-click on your project and Add -> User Control it gives you a WPF rather than WinForms User Control!
  • There is real backward compatibility with the 2005 CTP, so you don’t need to be held back by your poor coworkers who can’t upgrade. When moving a project you will need to run through the upgrade wizard the first time, and you will need to plan on using a separate 2008 solution, as 8.0 and 9.0 .sln files are not compatible. The project upgrade process will make a few changes, including adding some new project properties (which will be ignored by 2005), associating some new sub-elements (Generator and SubType) with included XAML files, and removing the explicit reference to the Microsoft.WinFX.targets MSBuild file. This last change is the only one that will mess with 2005, so once the upgrade is completed, open the project as XML and add <Import Project="$(MSBuildBinPath)\Microsoft.WinFX.targets" /> near the end of the file, right below the CSharp/VisualBasic Import statement, as shown below. After that the project should work in both versions.
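
In context, the end of the project file should end up looking roughly like this (C# variant shown; the surrounding elements are from a typical .csproj):

  <Import Project="$(MSBuildBinPath)\Microsoft.CSharp.targets" />
  <!-- re-added by hand so the project still builds in the 2005 CTP -->
  <Import Project="$(MSBuildBinPath)\Microsoft.WinFX.targets" />
</Project>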

One additional tip: 2008 is much more stable on machines that have not had previous Beta installations. I know it’s a standard recommendation, but my personal experience has been that the WPF designer especially is much more prone to crashing on machines that ran Beta 2 and had it uninstalled than on machines that were clean OS installs or only had 2005.

Triggers and States in Blend

After using the Expression Interactive Designer CTPs and getting used to that UI, I’m now trying to learn the completely different Blend UI. After some confusion with timelines and the new UI I finally got a state based property change working in a template in Blend.


The goal is to produce XAML like this from Blend:


<ControlTemplate.Triggers>
    <Trigger Value="True" Property="IsChecked">
        <Setter Value="Blackadder ITC" Property="FontFamily" />
        <Setter Value="24" Property="FontSize" />
    </Trigger>
</ControlTemplate.Triggers>


This should cause the template checkbox to switch to a large script font when it is checked and back to default when unchecked.

The obvious thing that the UI initially seems to push you towards is creating an animation timeline to do this. After entering template editing mode the Triggers window looks like this:

[Screenshot: the Triggers window]


The +Property button then allows you to add a new Property based Trigger, which is what we’re looking for. After adding the new Trigger and picking from the property dropdown we get to:

[Screenshot: the Triggers window with the new property trigger]

At this point the obvious next step seems to be adding to “Actions when activating” in the window, but this will actually create the timeline we’re trying to avoid. What isn’t obvious is that you’re actually done with the Property-State Trigger: since recording is on, any property changes you make now will be attached to the Trigger as Setters, just like we want. Over in the Properties->Text panel I change my font and then exit template editing mode. That’s it. Now when we click the checkbox, the font switches as shown in the before (CheckMe1) and after (CheckMe2) screenshots.

Another Blend oddity after coming from EID: there’s no more code-behind editor (which is okay, because the EID one wasn’t even close to VS), so you can’t directly jump to event handlers. There is now integration with Visual Studio, though, so you can create event handlers in code from the Properties tool window, just like with a WinForm in VS, and it will create and open the code in Visual Studio if you have it installed.

[Screenshot: the Events view of the Properties panel]

WPF Dependency Property CodeRush template

I’m working on a WPF custom control and got tired of writing out the whole DependencyProperty every time, so I added a new CodeRush template to write the DependencyProperty registration, the read/write property using GetValue and SetValue, and an OnPropertyChanged static callback method. Here’s the template text:

«:#PropertyDefaultScope#»«TypeLink("«?Get(Type)»")» «Caret»«FieldStart»«Link(MyProperty)»«FieldEnd»«BlockAnchor»
{
    get { return («TypeLink("«?Get(Type)»")»)GetValue(«Link(MyProperty)»Property); }
    set { SetValue(«Link(MyProperty)»Property, value); }
}

public static DependencyProperty «Link(MyProperty)»Property = DependencyProperty.Register(
    "«Link(MyProperty)»", typeof(«TypeLink("«?Get(Type)»")»), typeof(«TypeName»),
    new PropertyMetadata(«FieldStart»null«FieldEnd», new PropertyChangedCallback(On«Link(MyProperty)»«FieldStart»«Link(Changed)»«FieldEnd»)));

private static void On«Link(MyProperty)»«Link(Changed)»(DependencyObject dObj, DependencyPropertyChangedEventArgs e)
{
    «Target»
}

It links the property type and name and the callback method name, and fills in the parent class type. I set the generic template name to dp?Type?, which isn’t otherwise used. This will save you a whole lot of typing if you’re working with WPF. Note: Workflow also uses Dependency Properties, but they live in a different namespace and are slightly different syntactically.
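
For illustration, here’s roughly what the template expands to for a hypothetical double property named Spacing on a custom control class called MyControl, with the default-value placeholder filled in as 0.0:

public double Spacing
{
    get { return (double)GetValue(SpacingProperty); }
    set { SetValue(SpacingProperty, value); }
}

public static DependencyProperty SpacingProperty = DependencyProperty.Register(
    "Spacing", typeof(double), typeof(MyControl),
    new PropertyMetadata(0.0, new PropertyChangedCallback(OnSpacingChanged)));

private static void OnSpacingChanged(DependencyObject dObj, DependencyPropertyChangedEventArgs e)
{
    // React to the new value here, e.g. via e.NewValue.
}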