About the Author

Dan Hanan is a lead software engineer at InterKnowlogy, where he works on a wide variety of software projects related to the Microsoft tech stack. At the center of his universe are .NET, C#, and XAML-based technologies such as WPF, Silverlight, Win Phone 7, and Surface, as well as non-UI related bits involving WCF, LINQ, and SQL. In his spare tech-time, he is venturing outside the MS world into the depths of the Android OS. Come back once in a while to check out Dan's random collection of technology thoughts – better yet, subscribe to the RSS feed!

ASP.NET Membership Schema Created Automatically

UPDATE: A few weeks after posting this, I learned more about what’s going on with the membership providers. See the update here.

It’s been a few years since I’ve done any ASP.NET work. Recently, I fired up Visual Studio 2010 and created an ASP.NET MVC project that uses membership.  I remembered that I needed the table schema to support the membership functionality, which you can create by running aspnet_regsql.exe.  This tool shows a wizard that allows you to add or remove the database objects for membership, profiles, role management, and personalization.
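
For reference, the same schema can be added from the command line instead of through the wizard. A hedged example, assuming a local server, Windows authentication, and the MembershipTest database used below (adjust the names to taste):

    aspnet_regsql.exe -S localhost -E -d MembershipTest -A all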

aspnet_regsql

Here are the contents of the MembershipTest database after running the tool.  Notice the tables are prefixed with “aspnet_” and there are views and stored procedures to support the functionality.

dbschema-tool

Now we just edit web.config to set where the membership database is located (MembershipTest).

  <connectionStrings>
    <add name="ApplicationServices" connectionString="data source=localhost;Initial Catalog=MembershipTest;
        Integrated Security=SSPI;MultipleActiveResultSets=True"
      providerName="System.Data.SqlClient" />
    <add name="DefaultConnection" connectionString="data source=localhost;Initial Catalog=MembershipTest;
        Integrated Security=SSPI;MultipleActiveResultSets=True"
      providerName="System.Data.SqlClient" />
  </connectionStrings>

We haven’t written any code – we just have the MVC project that was created from the VS project template.  Run it, and register a new user, thus exercising the membership logic to create a new user in the database.  Check out the database schema. There are a handful of new tables listed, those WITHOUT the “aspnet_” prefix.

dbschema-run

When we look in the aspnet_users table (which was created by the aspnet_regsql tool), our user is not there.  Look in the Users table, and it IS there.  What’s going on here?  From what I can tell, the objects created by the aspnet_regsql tool ARE NOT USED by the latest SqlMembershipProvider and DefaultMembershipProvider.   So who is creating the objects required (those without the “aspnet_” prefix)?

So far, I haven’t found any documentation on this, but Reflector is our friend.  Looking through the provider assemblies, I find the code that is creating the database and schema at run time!

In the MVC project AccountController, we call Membership.CreateUser in the Register method.  Let’s find that method using Reflector.  Look up DefaultMembershipProvider.CreateUser, which calls the private Membership_CreateUser.  At the very top of that method, it calls ModelHelper.CreateMembershipEntities( ).
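
For context, the Register action boils down to a call like the following (a rough sketch – the actual template code differs slightly and does more validation):

    MembershipCreateStatus createStatus;
    Membership.CreateUser( model.UserName, model.Password, model.Email,
                           null, null, true, null, out createStatus );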

Method1

Follow that call chain down, and eventually you get to see the first part of the schema creation – the actual database.  After that, it goes through a whole bunch of code to generate the objects themselves.

Method2

So just to prove it to myself, I deleted the MembershipTest database and ran the project again (same connection string in config, pointing to the now non-existent MembershipTest database), registering a new user.  Sure enough, the database is created and the required objects are there (and we never ran the aspnet_regsql tool).  It seems that the newest providers don't need any of the old views or stored procedures either – none are created.

dbschema-clean


The only thing I can come up with is that the ASPNET_REGSQL tool is obsolete.  Maybe with the advent of the Entity Framework Code-First technology, Microsoft took the attitude that they’ll create the DB objects in code if they’re not already there.

Note: it turns out that you don’t even have to run the web site project and register a new user.  You can use the ASP.NET Configuration tool (Project menu in VS) and create a user there.  It uses the same provider configuration, so that will also create the required database schema.

Even better – I ran this same test against SQL Azure, and it works fine as well.  Creates the database and objects, and membership works just fine (UPDATE: …but running the site IN Azure blows up, since the provider is not installed there yet. See follow-up post.)

 
 

Using Kinect in a Windows 8 / Metro App

We have been working with the Kinect for a while now, writing various apps that let you manipulate the UI of a Windows app while standing a few feet away from the computer – the “10 foot interface” as they call it.  Very cool stuff.  These apps make use of the Microsoft Kinect for Windows SDK to capture the data coming from the Kinect and translate it into types we can use in our apps:  depth data, RGB image data, and skeleton points.  Almost all of these apps are written in C# / WPF and run on Windows 7.

Last month a few of us went to the Microsoft //BUILD/ conference, and came back to start writing apps for the new Windows 8 Metro world.  Then naturally, we wanted to combine the two and have an app that uses Kinect in Windows 8 Metro.  At InterKnowlogy we have a “Kiosk Framework” that fetches content (images, audio, video) from a backend (SQL Server, SharePoint) and has a client for various form factors (Win7, Surface, Win Phone 7) that displays the content in an easy-to-navigate UI.  Let’s use the Kinect to hover a hand around the UI and push navigation buttons!  Here’s where the story begins.

One of the First Metro Apps to Use Kinect

Applications that are written to run in Windows 8 Metro are built against the new Windows Runtime (WinRT) API, which is the replacement for the old Win32 API we've been using since the early days of Windows NT.  The problem when it comes to existing code is that assemblies written in .NET are not runtime compatible with WinRT (which is native code).  There is a lot of equivalent functionality in WinRT, but you have to port existing source code over, make changes where necessary, and compile specifically against WinRT.  Since the Kinect SDK is a set of .NET assemblies, you can't just reference it in your WinRT / Metro app and start partying with the Kinect API.  So we had to come up with some other way…

You CAN write a .NET 4.5 application in Windows 8 using Visual Studio 11, and it will run on the “desktop” side of the fence (the alternate environment from the Metro UI, used for running legacy apps).  So we decided to take advantage of this and write a “service” UI that runs in the classic desktop environment, connects to the Kinect and receives all the data from it, and then furnishes that data out to a client running on the Metro side.  The next issue was – how to get the data over to our Kiosk app running in Metro?  Enter web sockets.  There is a native implementation of web sockets in the WinRT framework, and we can use that to communicate on a socket channel over to the .NET 4.5 desktop app, which can reply to the client (Metro) socket with the Kinect data.
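
For the curious, the Metro-side plumbing is roughly this (a minimal sketch, not our production code – the port number and the OnMessageReceived / request-string names are made up for illustration):

    // Metro (WinRT) client side
    MessageWebSocket socket = new MessageWebSocket();
    socket.Control.MessageType = SocketMessageType.Binary;
    socket.MessageReceived += OnMessageReceived;   // the handler uses e.GetDataReader(), as shown below
    await socket.ConnectAsync( new Uri( "ws://localhost:4530" ) );

    // ask the desktop service for the latest right-hand position
    DataWriter writer = new DataWriter( socket.OutputStream );
    writer.WriteString( "GETPOSITION" );           // our own simple request format
    await writer.StoreAsync();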

Some Bumps in the Road

Writing the socket implementation was not conceptually difficult.  We just want the client to poll at a given frame rate, asking for data, and the service will return simple Kinect skeleton right-hand position data.  We want to open the socket, push a “request” message across to the service, and the service will write binary data (a few doubles) back to the caller.  When pushing bytes across a raw socket, obviously the way you write and read the data on each side must match.  The first problem we ran into was that the BinaryWriter in the .NET 4.5 framework was writing data differently than the DataReader in WinRT was receiving the data.

As with any pre-release software from MS, there is hardly any documentation on any of these APIs.  Through a ton of trial and error, I found that I had to set the Unicode encoding and byte order settings on each side to something that would match. Note the highlighted lines in the following code snippets.

    // Send data from the service side

    using ( MemoryStream ms = new MemoryStream() )
    {
        using ( BinaryWriter sw = new BinaryWriter( ms, new UnicodeEncoding() ) )
        {
            lock ( _avatarPositionLock )
            {
                sw.Write( _lastRightHandPosition.TrackingState );
                sw.Write( _lastRightHandPosition.X );
                sw.Write( _lastRightHandPosition.Y );
            }

        }

        Send( ms.GetBuffer() );
    }
    // Receive data in the client 

    DataReader rdr = e.GetDataReader();

    // bytes-based response: a short (tracking state) followed by two doubles (X, Y)
    rdr.UnicodeEncoding = UnicodeEncoding.Utf16LE;
    rdr.ByteOrder = ByteOrder.LittleEndian;
    byte[] bytes = new byte[rdr.UnconsumedBufferLength];

    var data = new JointPositionData();
    var state = rdr.ReadInt16();
    Enum.TryParse<JointTrackingState>( state.ToString(), out data.TrackingState );
    data.X = rdr.ReadDouble();
    data.Y = rdr.ReadDouble();

    UpdatePositionData( data );

Once I got the socket channel communicating simple data successfully, we were off and running.  We built a control called HoverButton that just checks whether the Kinect position data is within its bounds, and if so, starts an animation to show the user they’re over the button.  If they hover long enough, we fire the Command on the button.
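
The hit test inside HoverButton is nothing fancy – conceptually it's just a bounds check like this (a simplified sketch; the member names are illustrative, not the real control code):

    // is the hand cursor currently over this button?
    private bool IsHandOverButton( Point handPosition )
    {
        // handPosition is assumed to already be translated into this control's coordinate space
        return handPosition.X >= 0 && handPosition.X <= ActualWidth &&
               handPosition.Y >= 0 && handPosition.Y <= ActualHeight;
    }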

The next problem was connectivity from the client to “localhost”, which is where the service is running (just over in the desktop environment).  Localhost is a valid address, but I kept getting refused connections.  Finally I re-read the setup instructions for the “Dot Hunter” Win8 SDK sample, which describe a special permission that's required for a Win8 app to connect to localhost.

PackageFamilyName

Open a command prompt as administrator and enter the following command (substitute your package name for the last param):

    CheckNetIsolation LoopbackExempt -a -n=interknowlogy.kiosk.win8_r825ekt7h4z5c

There is no indication that it worked – I assume silence is golden here.  (I still can’t find a way to list all the packages that have been given this right, in case you ever wanted to revoke it.)

CheckNetIsolation

Finally, a couple other minor gotchas:  the service UI has to be running as administrator (to open a socket on the machine), and the Windows Firewall must be turned OFF.  Now we have connectivity!

What’s Next?

Beyond those two problems, the rest was pretty straightforward.  We're now fiddling with various performance settings to achieve the best experience possible.  The skeleton data is available from the Kinect on the desktop side at only about 10 frames per second. We think this lower rate is mostly due to the slower hardware in the Samsung Developer Preview device we got from BUILD.  Given that speed, the Metro client is currently asking for Kinect data from the service at 15 frames per second.  We are also working on better smoothing algorithms to prevent a choppy experience when moving the hand cursor around the Metro UI.
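
The polling itself is just a timer on the client; something along these lines (a sketch – RequestKinectData is a stand-in for the web socket request shown earlier):

    // ask the desktop service for new Kinect data ~15 times per second
    DispatcherTimer pollTimer = new DispatcherTimer();
    pollTimer.Interval = TimeSpan.FromMilliseconds( 1000.0 / 15 );   // ~66 ms
    pollTimer.Tick += ( s, e ) => RequestKinectData();
    pollTimer.Start();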

Crazy WinRT XAML Bug in Windows 8 Developer Preview

I’ve been playing with the developer preview release of Windows 8 that we got from //BUILD/.  Every day, we find more and more “issues” that are either just straight-up bugs, things that are not implemented yet, or maybe even issues that won’t be resolved because WinRT is “just different”.  This one was especially “fun” to track down – and when we did, we couldn’t believe the issue.

Start with a simple WinRT application in Visual Studio 11.  Add a class library project that will hold a couple UserControls that we’ll use in the main application.  Pretty standard stuff.

Create 2 UserControls in the library; the first one will use the other.  UserControl1 can just have some container (a Grid) that then uses UserControl2 (which just has an ellipse).  In the main application, add a reference to the ClassLibrary and use UserControl1 in the MainPage.  Your solution should look something like this:

Solution

  <!-- MainPage.xaml -->
  ...
  xmlns:controls="using:ClassLibrary1.Controls"
  ...

<Grid x:Name="LayoutRoot"
    Background="#FF0C0C0C">
  
  <controls:UserControl1 />
  
</Grid>
  <!-- UserControl1.xaml -->
  ...
  xmlns:controls="using:ClassLibrary1.Controls"
  ...

  <Grid x:Name="LayoutRoot"
      Background="#FF0C0C0C">
    
    <controls:UserControl2 />

  </Grid>
  <!-- UserControl2.xaml -->

  <Ellipse Height="40" Width="40" Fill="Red" />

Now here’s the key step.  Add a new class to the ClassLibrary, say Class1, and have that class IMPLEMENT INotifyPropertyChanged (be sure to choose the right using statement – the one from Windows.UI.Xaml.Data).

    public class Class1 : INotifyPropertyChanged
    {
        #region INotifyPropertyChanged Members

        public event PropertyChangedEventHandler PropertyChanged;

        #endregion
    }

That’s it.  Run it.

You will get a runtime exception:

Exception

Crazy, huh?  The Microsoft guys have confirmed this to be a known bug.  I just can’t imagine what that bug could be.  How is a simple UNREFERENCED class that implements INPC getting in the way of a couple UserControls?

Ahh … the joys of coding against pre-release software…

Windows 8 Development–First Impressions

Along with 5000 other developers, I attended //BUILD/ in September and came home with the Samsung pre-release hardware loaded with Visual Studio 11 and the Win8 bits.  Since then, I’ve been writing a few apps, getting my feet wet writing code for Metro / WinRT apps in C#.  Here are some early impressions on the Windows 8 operating system in general, and then some thoughts on what it takes to develop Metro apps.

Don’t Make Me Hop That Fence Again

The grand new look of Windows 8 is the “Metro” style interface that you see right away on the Start Screen.  The traditional start menu is gone, you don’t have a hierarchy of start menu icons that you wade through, and you don’t have a bunch of icons scattered on your desktop.  Instead you have the clean-looking, larger-sized “tiles”, some of which are “live”, showing you changes to application data in real-time.  You can find lots of info on the Win8 Metro interface here, so I won’t talk about that.  Apps written for Metro, built against the Windows Runtime (WinRT), run over here on this “Metro” side of the fence.

What is so interesting to me though is that there exists an alternate world in Windows 8 – the classic desktop.  This is where legacy apps run, where the familiar desktop can be littered with icons, the taskbar shows you what’s running, etc.  Everything from old C and C++ code to .NET 4 apps runs over here on this “Desktop” side of the fence.

So here then is the problem. Throughout the day, I run a few apps to check Facebook and Twitter (Metro), then I start up Visual Studio 11 (desktop), then I start Task Manager (Metro), then maybe a command prompt (desktop).  Each time I’m in one environment and run an app that lives in the other, the OS switches context to the other side of the fence.  This becomes SUPER ANNOYING over the course of a day.  I’m in the desktop, and all I want to do is fire up a command prompt.  Click the start menu (which takes me to the Metro start screen), then choose command prompt, which switches me back to where I just came from and fires up the command prompt.  I’ve become accustomed to pinning all my necessary apps to the desktop-side taskbar so I don’t have to go to the Metro screen to run them.

Enough of the Rant – What About Development?

The Windows 8 Runtime (WinRT) is a completely re-written layer, akin to the old Win32 API, that provides lots of new services to the application developer.  When sitting down to write a new app, you have to decide right away: are you writing a .NET 4.5 app, or are you writing a Metro app?  (This dictates which side of the fence you’ll be hanging out in.)  Some of the coolest features of the Win8 runtime are:

Contracts

Contracts are a set of capabilities that you declare your application to have, and the OS will communicate with your app at runtime based on those contracts.  Technically they’re like interfaces, where you promise to implement a set of functionality so the OS can call you when it needs to.  The most popular contracts are those that are supported by the “Charms” – Search, Share, and Settings.  These are super cool.  Implement a couple methods and your app participates as a search target, so when the user enters some text in the search charm, your app is listed and you furnish the results.

 

 

protected override void OnLaunched( LaunchActivatedEventArgs args )
{
    var pane = Windows.ApplicationModel.Search.SearchPane.GetForCurrentView();
    pane.QuerySubmitted += QuerySubmittedHandler;

    pane.SuggestionsRequested += SuggestionsRequestedHandler;
    pane.ResultSuggestionChosen += ResultSuggestionChosenHandler;

    pane.PlaceholderText = "Search for something";

	// ...
}

private void QuerySubmittedHandler( SearchPane sender, SearchPaneQuerySubmittedEventArgs args )
{
    var searchResultsPage = new SearchTest.SearchResultsPage1();
    searchResultsPage.Activate( args.QueryText );
}

private void SuggestionsRequestedHandler( SearchPane sender, SearchPaneSuggestionsRequestedEventArgs args )
{
    var searchTerm = args.QueryText;

    var sugg = args.Request.SearchSuggestionCollection;
    for ( int i = 0; i < 5; i++ )
    {
        // just faking some results
        // here you would query your DB or service
        sugg.AppendResultSuggestion( 
            searchTerm + i, 
            "optional description " + i, 
            i.ToString(), 
            Windows.Storage.Streams.StreamReference
              .CreateFromUri( new Uri( "someurl.jpg" ) ),
            "alternate text " + i );
    }

    //defer.Complete();
}

protected override void OnSearchActivated( SearchActivatedEventArgs args )
{
    var searchResultsPage = new SearchTest.SearchResultsPage1();
    searchResultsPage.Activate( args.QueryText );
}

Sharing is just about as easy. Declare the contract in your manifest and your app is a share target.  When the user chooses to share to your app, the OS sends you the data the user is sharing and you party on it.

protected override void OnSharingTargetActivated( ShareTargetActivatedEventArgs args )
{
	var shareTargetPage = new ShareTestTarget.SharingPage1();
	shareTargetPage.Activate( args );
}

Settings leave a little bit to be desired, in my opinion. You declare the contract and add a “command” that the OS puts in the Settings charm panel.  But when the user chooses that command (button), your app has to show its own settings content (usually in a slide-out panel from the right).  This makes it a 4-step process to change any settings in your app (swipe from the right, touch the button, change settings, dismiss the panel). I really hope in future versions they figure out a way to embed our application settings content right there in the settings panel, not in our own app.

    Windows.UI.ViewManagement.ApplicationLayout.GetForCurrentView().LayoutChanged += LayoutChangedHandler;
    DisplayProperties.OrientationChanged += OrientationChangedHandler;
    Application.Current.Exiting += ExitingHandler;

    // register a command that shows up in the Settings charm panel
    SettingsPane pane = SettingsPane.GetForCurrentView();
    SettingsCommand cmd = new SettingsCommand("1", "Pit Boss Settings", (a) =>
        {
            ShowSettingsPanel();
        });
    pane.ApplicationCommands.Add(cmd);

Async Everywhere

The WinRT team has built the APIs with asynchrony in mind from day one. At //BUILD/, we heard over and over that your UI should NEVER freeze because it’s busy doing something in the background.  To that end, any API that could potentially take longer than 50 milliseconds is implemented as an asynchronous method.  Using the async and await keywords (we’ve seen these in a CTP for .NET 4) that are built into the WinRT-based languages, we can write our code in a flowing, linear fashion that makes sense semantically.  No longer do we have to use BackgroundWorkers and worry about marshalling results back to the calling thread.

    HttpClient http = new HttpClient();
    // await frees the calling (UI) thread while the request is in flight
    var response = await http.GetAsync( url );

    string responseText = response.Content.ReadAsString();

“Free” Animations

XAML support is implemented in WinRT as native code, and is available to use from C#, VB, and C++.  One of the coolest features they added to what we’re already used to in WPF & Silverlight is the “built-in” animation library.  Simply add any number of transitions on a container element, and any of the associated operations in ANY OF ITS CHILDREN will use that transition.  The list of built-in transitions includes: Entrance (content appearing for the first time), Content (change content from old to new), Reposition (fluidly move elements when they get repositioned), Add/Delete (fade out/in and move existing items), and Reorder.

  <Grid>
    <Grid.ChildTransitions>
      <TransitionCollection>
        <EntranceThemeTransition
            HorizontalOffset="500" />
      </TransitionCollection>
    </Grid.ChildTransitions>
  </Grid>

Well, I’ve jumped all around in this post, and it’s long enough already.  Look for future posts with more specifics about features and code that we’re working into our apps.

Some Development Gotchas (aka Bugs)

Here’s a quick list of some issues we’ve run into these first few weeks.

  • You can only set DataContext in code-behind – future releases will allow setting in XAML
  • There are a handful of bugs in FlipView.SelectedItem.  If you’re binding to the SelectedItem property, you have to manually set the SelectedItem to null and then back to the correct property (get it from your ViewModel) in code-behind.  This will obviously be fixed in a future release.
  • You must use ObservableVector<object> based properties in your VMs if you are binding to ItemsControls.ItemsSource and want the UI to update when there are changes to the collection.  ObservableVector<T> does not exist in the class library – you can find source code for it in many of the SDK samples.
  • Use the correct INotifyPropertyChanged implementation (from Windows.UI.Xaml.Data, not System.ComponentModel)
  • There is no VS item template for custom controls in C#, so VS won’t create the Themes\Generic.xaml structure for you.  We’re still tracking this one down. We’ve used a Style resource to describe the template for a Button-derived custom control and it works, but you don’t get access to the Template property in OnApplyTemplate, so we’ve resorted to using VisualStateManager instead of referring to template parts and changing UI that way.
  • I’m sure there are more to come … in fact, check out my co-worker Danny’s post (should be up in the next couple days) with more details on what we’ve been encountering while writing our first few Win8 apps.

Check out the Win8 Metro developer forums for ongoing discussions of many other issues.

Disable SurfaceKeyboard for a Particular Application

We are working on a Surface 2 based application that runs on Windows 7, but is going to be running on very large touch-based wall mounted displays.   These displays will allow multiple users to be interacting with the UI at the same time, making heavy use of multi-touch.

The built-in keyboard in Windows 7 is plugged into the Surface SDK, such that it shows whenever the focus goes to a SurfaceTextBox.   This is great for most uses … but we want to allow multiple users to be typing simultaneously.  For this reason, we wrote our own software keyboard that we can attach to any (Surface)TextBox using an attached property.  The keyboard shows up below or above (depending on available application real estate) the referenced control and supports any number of keyboards at a time.
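
The attached property itself is standard WPF fare – roughly like the following (a trimmed sketch; the class and property names are just for illustration, and the actual keyboard show/position logic is elided):

    public static class SoftwareKeyboard
    {
        // usage in XAML:  <s:SurfaceTextBox local:SoftwareKeyboard.IsEnabled="True" />
        public static readonly DependencyProperty IsEnabledProperty =
            DependencyProperty.RegisterAttached( "IsEnabled", typeof( bool ), typeof( SoftwareKeyboard ),
                                                 new PropertyMetadata( false, OnIsEnabledChanged ) );

        public static bool GetIsEnabled( DependencyObject obj ) { return (bool)obj.GetValue( IsEnabledProperty ); }
        public static void SetIsEnabled( DependencyObject obj, bool value ) { obj.SetValue( IsEnabledProperty, value ); }

        private static void OnIsEnabledChanged( DependencyObject d, DependencyPropertyChangedEventArgs e )
        {
            var textBox = d as TextBox;   // SurfaceTextBox derives from TextBox
            if ( textBox == null || !(bool)e.NewValue )
                return;

            // show our keyboard near the control when it gains focus, hide it when focus leaves
            textBox.GotKeyboardFocus += ( s, args ) => { /* position and show the custom keyboard */ };
            textBox.LostKeyboardFocus += ( s, args ) => { /* hide the keyboard */ };
        }
    }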

The problem is – since a SurfaceTextBox is wired up to automatically show the built-in Windows 7 software keyboard, it ALSO shows automatically (or at least the small thumbnail button to allow the user to bring up the full keyboard) when the SurfaceTextBox gets focus.

I have been hunting all over the web for a couple days on how to disable the built-in Windows 7 keyboard.  I know you can turn it off at the OS level, but our users want to be able to flip to other applications and HAVE the keyboard available to them.  I found all kinds of posts about watching for SurfaceKeyboard events, hiding it immediately upon show, a couple static methods to try, etc.  All to no avail.

Finally I found an MSDN article about disabling the keyboard on a Tablet PC on a per-application basis.  Thought for sure this was not “related enough” to Surface, but gave it a whirl.  It works!

Unfortunately, it’s based on a registry key setting, so it’s not a programmatic solution – we’ll have to add this setting into our installer.

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\TabletTIP\DisableInPlace]
"C:\Program Files\My App\MyApp.exe"="1"

Original MSDN article
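
(If you'd rather set the key from a custom installer action instead of shipping a .reg file, Microsoft.Win32.Registry can write it – a sketch, and it has to run elevated since the key lives under HKLM:)

    // sketch for a custom installer action; requires administrator rights (HKLM)
    Microsoft.Win32.Registry.SetValue(
        @"HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\TabletTIP\DisableInPlace",
        @"C:\Program Files\My App\MyApp.exe",   // value name = full path to the exe being exempted
        "1" );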

Update: here’s a screenshot of the registry with the setting in place.
Image: Disable Keyboard - Registry

Happy keyboarding…

Inheritance in Entity Framework Code-First

Every couple weeks I dig into the code first support in Entity Framework and continue to be surprised at all the different options you have to control the generated database.

Quick background:  Entity Framework 4.1 was released earlier this month and includes “code first” support.  This enables you to write plain old C# objects (POCOs) that model your domain, and then EF will generate a database schema for you the first time you run your code.

On my current project, we were noodling around the other day with ideas on the best way to represent an inheritance tree in SQL schema.  I decided to look at how the code-first guys are doing it, which opened my eyes to 3 common patterns.  I won’t go into them in super gory detail – Google can help you find plenty of articles on them.  Here’s just a quick rundown.

Table per Hierarchy (TPH)

In this pattern (the default in EF code first), there is a single table that represents all classes in the inheritance hierarchy.  This makes for very straightforward SQL to access the data, but relies on using NULLs in many columns, since the table ends up having the complete set of all properties for all the classes involved.  EF adds a “discriminator” column to keep track of which class type is represented by a particular row.
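
As a concrete (made-up) example, a Person/Student pair like the one below ends up in a single table that holds both Person and Student rows, with a Discriminator column and NULLs in the Student-only columns for plain Person rows:

    public class Person
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public class Student : Person
    {
        public string SchoolName { get; set; }   // lives in the same shared table, NULL for plain Person rows
    }

    public class SchoolContext : DbContext
    {
        // with no extra configuration, code-first maps this hierarchy as TPH
        public DbSet<Person> People { get; set; }
    }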

Table per Type (TPT)

This pattern uses a different table for each class (including abstract base classes) in the hierarchy.  This seems the most “natural” to me.  The inherited properties are in the “base class” table, and the properties for subclasses are in their own tables.  These “derived” tables have ID primary key columns that are a foreign key to the base class table’s primary key.

The SQL needed to retrieve the requested class is not too crazy – it includes an INNER JOIN between the subclass and the base class.  I see warnings all over the place about being careful not to use this if the hierarchy is very deep.  This approach is capable of handling polymorphism just fine.

To achieve this in EF code first, you just use an attribute on the subclasses:

[Table("Student")]
public class Student : Person { ... }

 

Table per Concrete Type (TPC)

This final pattern supported by code first uses a database table for each non-abstract (concrete) class in the hierarchy.  Each table is self-contained, having all the properties required by a class, including its inherited properties.  There are no relationships between the tables in the database (they just seem related because they have common column names).

The SQL used to get data for a particular class *can* be pretty straightforward – you just hit the table that represents that class.  But there is no good way to achieve polymorphism in this pattern, since there is no relationship between the tables.  You have to know which table to use for a given class reference.

To use this pattern, you have to use the “fluent API” in code first – I couldn’t find an attribute related to TPC.
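
Something like this in OnModelCreating does it (a sketch using the made-up Person/Student classes from the TPH example above – MapInheritedProperties is what copies the base-class columns into the Student table):

    protected override void OnModelCreating( DbModelBuilder modelBuilder )
    {
        modelBuilder.Entity<Student>().Map( m =>
        {
            m.MapInheritedProperties();   // copy Person's properties into the Student table
            m.ToTable( "Student" );
        } );
    }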

 

Bottom Line

As with many questions about patterns – which one you should use in a particular case really “just depends”.  I know I like TPT the best; it seems the most natural to me, and I just have to keep an eye on the SQL and performance.

Let me know which one YOU like best.

NuGet Package Manager

I’ve been hearing and reading about NuGet on various podcasts and blogs, so I decided to give it a test drive today. First impression – super simple to use, and pretty powerful.

I won’t provide a full tutorial here, that has already been done many places (here, and here, and …). Just a few initial thoughts:

NuGet is a package management system for Visual Studio / .NET. It plugs in as a VS extension, and from there you have a (PowerShell based) command line right there as a VS tabbed window, or a UI-based dialog interface.

Within 3 minutes, I installed NuGet from here, restarted VS, listed a TON of packages, and then installed MvvmLight into my simple WPF application. From there, I’m off and running as if I had done all that manually.

I really like that it not only downloads and “installs” the package, it adds any necessary assemblies to your project references, and can even update your web.config or app.config file. SWEET. It looks like the downloaded assemblies go into a “packages” directory and they are referenced in the project from there (i.e. local references, not via the GAC).

To figure out:

  • Is the “packages” directory configurable?  We usually call ours “Referenced Assemblies” (done. see below)
  • Can the “packages” be put under source control in the project?

I’m going to go learn about Caliburn next — I wish it had a NuGet package (they’re apparently working on it)…

Update: Found out how to change the directory where the packages get installed. Issue #215 talks about it. Create a file named nuget.config at the same directory level as your solution. The contents of the file should be the following:

<settings>
	<repositoryPath>Referenced Assemblies</repositoryPath>
</settings>

Code-First Entity Framework

Update (12/08/10): This week the Entity Framework team released CTP5 of the code-first library.  I was hopeful that this new version would fix my issues below with creating a SQL Express database in an arbitrary directory or the ASP.NET App_Data directory.  Updated my code to use this latest CTP5 binary – no difference.

Just a short post this time about the cool new “Code-First development” option that the MS data team made available a few months ago.  There are already some great walkthroughs (ScottGu, Scott Hanselman) about this topic, so I’m not going to repeat it all here.  Below are just some thoughts on where I find it useful, and a couple issues I’ve run into.

I really like the concept:  instead of building the database first, just write the code (more precisely, write the model object classes) and at runtime, the code-first library will interpret properties on the classes, and use some conventions to come up with how those objects should be stored in the database.  You don’t have to add attributes to your classes or properties – the framework inspects everything at runtime. When the code runs the first time and the database doesn’t exist, the code-first library creates the database for you (from then on it tracks a “hash” of the structure of your objects and reacts when there has been a change since the database was created).  There are configuration options (set via attributes or a code-based API) for what to do if the database already exists, ways to fine-tune relationships between objects, etc.
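
In its simplest form, the whole thing is just a few classes (a sketch – the Team class is made up, and the FootballDB name matches the connection strings shown at the bottom of this post):

    public class Team
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public class FootballDB : DbContext
    {
        // no mapping code, no attributes – conventions work out keys, columns, and table names
        public DbSet<Team> Teams { get; set; }
    }

    // somewhere in your app – the first use creates the database (per the configured provider / connection string)
    using ( var db = new FootballDB() )
    {
        db.Teams.Add( new Team { Name = "Chargers" } );
        db.SaveChanges();
    }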

Of course when we’re writing a production quality, large scale application, we will still create the database schema first, probably have a bunch of stored procs to go along with it, and then use Entity Framework to hit that database.  But…there is a class of applications that this code-first mentality is great for: quick-and-dirty demo apps and local structured data storage, to name a couple.  In the past couple years, I’ve gotten into a pattern of writing “xml data providers” for various applications that just read & write to a local XML file (usually for mock/demo data, offline capabilities, etc).  We always try to start with built-in serialization of our model objects, but inevitably the format isn’t good, or we need some customized format, so we end up writing our own XML parsing code.  This code-first addition to Entity Framework seems perfect for these situations – no data access code to write!

Since code-first supports SQL CE, which does not require any SQL install, it ends up being just a .SDF data file local to your application – just like my old XML files that I used to parse!

Update (12/08/10): one of my co-workers pointed out that you must be using .NET 4 to use SQL CE without any SQL installation.  On a recent project, we used SQL CE, but in .NET 3.5 and still needed to install it before we could use it.

I’ve done the walkthroughs above and have it (mostly) working, at least with the SqlCe and Sql clients.

ERRORS / BUGS (?):

The main problem I’m having is creating the database on first run with SQL Express.  It works fine with SQL CE and Full SQL.  With SQL Express I have varying outcomes – very rarely, it WILL successfully create the database the first time, and then most of the time it will fail.  IF the database is already created, the code-first libraries work great.  When failing to create the database, I get an exception:

The underlying provider failed on Open.
Cannot open database "FootballDB" requested by the login. The login failed.
Login failed for user ‘{myAdminUsernameHere}’.

I’ve searched around, even posted questions to the forums, and emailed some MS guys.  No answer yet…

I’ll keep plugging away at it, or maybe one of you can point me in the right direction!  I haven’t tracked down exactly when they plan to release these bits – for now it’s just a CTP.  Hopefully this fires you up to go check it out!

Connection Strings

For reference, here are the connection strings I’m using for various connectivity methods:

SQL CE provider

<add name="FootballDB"        
    connectionString="Data Source=FootballDB.sdf"        
    providerName="System.Data.SqlServerCe.4.0"    
    />

SQL Express Provider (default SQL data directory)

<add name="FootballDB"        
    connectionString="
        Data Source=.\SQLEXPRESS;Initial Catalog=FootballDB;        
        Integrated Security=True;MultipleActiveResultSets=True;        
        User Instance=True;"        
    providerName="System.Data.SqlClient"    />

SQL Express Provider

(in the “App_Data” directory of an ASP.NET application)

<add name="FootballDB"        
  connectionString="Data Source=.\SQLEXPRESS;
      AttachDbFileName=|DataDirectory|FootballDB.mdf;        
      Initial Catalog=FootballDB;Integrated Security=True;        
      MultipleActiveResultSets=True;User Instance=True;"            
  providerName="System.Data.SqlClient"/>

Full SQL Server

<add name="FootballDB"        
  connectionString="Data Source=(local);Initial Catalog=FootballDB;        
    Integrated Security=True;"        
  providerName="System.Data.SqlClient"/>

What is RECESS ?

Here at InterKnowlogy, as a Microsoft Gold Partner, we pride ourselves on being able to keep up with the latest and greatest technologies, and to bring that breadth of knowledge and experience to the table for our customers.  As an example, the latest projects we’re working on are based on WPF, Silverlight, Surface, Windows Phone 7, and even iPad development.  As we all know, the pace at which new technologies come out of Microsoft and other industry leaders these days is crazy, so it becomes difficult to keep up. 

At IK, we have a perk we call RECESS

Research (and)
Experimental
Coding (to)
Enhance
Software
Skills

The company gives us some time each week to work on whatever we want – learn a new technology, write an app for a different platform, investigate the feasibility of some new pattern, catch up on new language features, etc.  It’s a great investment that IK makes in us to spend a few hours away from our current project, doing something completely different, and then share that knowledge amongst the rest of the company.  We got the idea from one of our devs working at Microsoft a couple years ago – that team had a similar program.

We don’t always end up with a finished “product”, maybe we just cut some sample code, read some articles, etc. but once in a while, we end up with some very cool stuff.  As an example, some of the actual software we’ve created during RECESS:

  • Surface Craps
  • Surface JukeBox
  • Wish 43
  • Surface Curling
  • Firebrick
  • Atlas (Virtual Earth on Surface)
  • 3D Boxes (Surface physics engine display)
  • Surface PixMatch (child picture matching game)
  • Surface YouTube viewer
  • Blackjack
  • . . .

Anyway, thought the RECESS concept is worth mentioning – I think it’s a very cool “feature” of working here. 

Well, … this afternoon is RECESS, so I have to get busy learning something new…

Visual Studio Installer Projects and “Previous Versions”

I’ve never really taken the time to learn the exact logic the VS installer projects go through when installing a “newer version” of software.  We just had the question come up here again, so I decided to investigate.  There is a ton of information about this online, Google proves that, so here is just my take on boiling it all down to my “cheat sheet”.

InstallerVersionsProductCode_05165EDA

Setup Project – Properties

Version – you should change this to match the version of the new software you’re building the installer for.  When you change it, you get the prompt that you should change the ProductCode.

  • If you answer YES, then the ProductCode is changed (the UpgradeCode remains the same – this is how it tracks different versions of the SAME software).  Now you have to worry about RemovePreviousVersions and the file versions of each assembly you’re installing.
  • If you answer NO, then the ProductCode is left the same, and users will get the “you must uninstall the existing version” prompt at install time.

The “cheap” way out here is answering NO.  This forces the user to uninstall any previous version and then they run your newest installer.  You are guaranteed (mostly) that all the new files will be laid down since the previous versions have already been removed.

RemovePreviousVersions – If this is True, the installer will first run the “uninstall function” on any previous versions (different ProductCode) of the same software (same UpgradeCode).

  • “Uninstall function” is pretty nebulous.  It sounds like it’s going to physically uninstall all the files, but it really doesn’t.  At least since VS2008, it does a “smart file compare”…
  • First let’s assume you’re installing to a new physical location (maybe a “versioned” directory structure).  In this case, the entire directories for old versions will be removed and the newest version will be copied to its specific directory.  That’s kind of a bummer to have versioned directories though (especially if you have data files lying around in that dir).
  • If you’re installing to the same directory as the previous version, then it gets a bit more hairy.  When the new installer has files of the same name as the previous version, it COMPARES FILE VERSIONS of those files to decide whether to overwrite or not.  So … you have to be super detailed about updating the AssemblyVersion / AssemblyFileVersion attributes in AssemblyInfo.cs on ALL assemblies in your software (see the snippet after this list).  This can get painful if you’ve got many projects/assemblies involved.  If you’re in this boat, you definitely need some kind of build process to auto-increment the versions of all assemblies.
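
For reference, these are the two attributes (in each project's AssemblyInfo.cs) that matter here – the installer's file comparison looks at the Win32 file version, which comes from AssemblyFileVersion (the version numbers below are just examples):

    // AssemblyInfo.cs – bump these for every release so the installer sees the new files as newer
    [assembly: AssemblyVersion( "1.2.0.0" )]
    [assembly: AssemblyFileVersion( "1.2.0.0" )]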

For now, my cop-out answer is to upgrade the installer project Version and answer NO to the prompt (keeps the same ProductCode).  Shift the pain to the user, right?!  :)

Up next: I’m working on an automated way to update all the AssemblyFileInfo attributes in a whole tree of projects, using the Community MS Build Task called “FileUpdateTask”.  Stay tuned…