Most of my development experience has been with desktop applications, so I have to admit my knowledge of web development is pretty light. I’ve had a chance this week to begin digging into ASP.NET MVC 4, along with the many cool features made available by hosting a site in Azure. Although I’ve maintained a cursory knowledge of both, I hadn’t had a chance to really try them out, and certainly didn’t feel comfortable taking on large projects in either (or even scoping such efforts). My exploration this week has left me much more at ease with both.
I was able to hit several topics this week:
- Basic MVC 4 development
- MVC routing
- MVC validation
- Code-first Entity Framework development
- Code-First EF Migration
- Using OAuth providers for login credentials
- Deployment to an Azure Web Site
- Use of Azure SQL Database
I was impressed with how straightforward and sensible the MVC 4 approach is. The use of convention-based coordination of controllers and views makes it extremely simple to quickly set up a functional site. And the default plumbing created automatically by Visual Studio really helps a lot. I’m not sure if I’d yet be comfortable coding everything from scratch at this point, but the scaffolding VS puts in place makes it very simple to get started and customize as appropriate. Some tutorials I found particularly instructive are:
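To illustrate the convention-based coordination mentioned above, here is a minimal controller sketch (the class and repository names are hypothetical, not from any real project):

```csharp
using System.Web.Mvc;

// A request to /Products/Details/5 is routed to this action by convention,
// and View() renders Views/Products/Details.cshtml automatically --
// no explicit wiring between controller and view is required.
public class ProductsController : Controller
{
    public ActionResult Details(int id)
    {
        var product = ProductRepository.Find(id); // hypothetical data access
        if (product == null)
            return HttpNotFound();
        return View(product);
    }
}
```

The scaffolding Visual Studio generates follows exactly this pattern, which is why it is so easy to customize.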
I’ve used Entity Framework quite a bit in the past, but have not had an opportunity to use the Code-First approach extensively. The second link above was also very instructive on this topic. Something I think will be particularly useful is Code-First Migrations, which allows you to update your data model as you develop without having to drop your existing database. Migrations generates methods to step your existing database up to the new model or down to the previous one. In the past I’ve found modifying my data model once there is an actual production database extremely tedious, and Migrations should help greatly. I was disappointed that a command-line interface must be used for migrations (with no Intellisense-like auto completion), since I’m sure it will take me a while to remember the correct steps and syntax. But revisiting the sites I mentioned earlier should make it simple enough.
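For reference, a minimal Code-First sketch looks something like this (the class and property names here are hypothetical):

```csharp
using System.Data.Entity;

// A plain class becomes a table; EF infers the schema by convention
// (e.g. a property named Id becomes the primary key).
public class Recipe
{
    public int Id { get; set; }
    public string Title { get; set; }
}

// The context exposes the model; EF creates the database on first use.
public class RecipeContext : DbContext
{
    public DbSet<Recipe> Recipes { get; set; }
}
```

The command-line steps I mentioned are run from the Package Manager Console: Enable-Migrations scaffolds a Migrations folder, Add-Migration <Name> generates the Up/Down methods for your latest model changes, and Update-Database applies them to the existing database.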
I was also pretty new to the use of OAuth authentication. The following link made it very easy to add OAuth to my test app:
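In the MVC 4 Internet template, enabling a provider amounts to uncommenting a registration call in App_Start/AuthConfig.cs. A sketch (the app ID and secret shown are placeholders you obtain from the provider):

```csharp
using Microsoft.Web.WebPages.OAuth;

public static class AuthConfig
{
    public static void RegisterAuth()
    {
        // Google requires no application registration.
        OAuthWebSecurity.RegisterGoogleClient();

        // Facebook requires an app ID and secret from its developer portal.
        OAuthWebSecurity.RegisterFacebookClient(
            appId: "your-app-id",
            appSecret: "your-app-secret");
    }
}
```

The template's AccountController then handles the round trip to the provider and the association of the external login with a local account.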
Having heard stories of Azure deployment being difficult in the past, I’ve been wary of it until now, but it looks like Microsoft has recently done a lot to make the process as simple as possible. I was impressed with how easy it was to move a locally-hosted web site to Azure; once my Azure account was set up, it took almost no time at all to deploy. Moving a database to Azure SQL Database was equally simple. I’m currently using my free trial on Azure, so I haven’t had to deal with monitoring real usage and being cautious to limit my charges, but I’ve heard this has been improved as well, and developers no longer seem to be incurring unreasonable charges accidentally. The following links were great for getting started:
Overall, I feel much more comfortable tackling the creation of a web application hosted in Azure. Microsoft has done an excellent job easing developers into these technologies, and they look like great solutions for well-architected, highly-scalable web solutions.
While working on a Windows Phone 7 application recently, I discovered some limitations of the platform that were rather frustrating. The application allowed audio to be recorded and played back, and it was expected that recording would pause if an incoming phone call was received on the phone. The app was already designed to pause recording when the app was deactivated, and initially I thought an incoming phone call would cause deactivation. I was surprised to discover that it did not. Instead, the app continued to run uninterrupted while a phone call was taken. If the app was recording at the time, recording would continue throughout the phone call. Interestingly, the XNA Microphone class appeared to be muted during the phone call, leaving that portion of the recording blank, with neither side of the conversation captured.
A little searching confirmed that an incoming phone call does not deactivate a WP7 app. Instead, the app is “obscured”, and the only way to detect the phone call is by subscribing to the Obscured event of the PhoneApplicationFrame class. Unfortunately, the Obscured event simply indicates that something from the OS is obscuring the visual of your app (“the shell chrome is covering the frame”, in the words of the MSDN docs), without providing any detail about the cause. As a result, many things can cause Obscured to fire, including an incoming phone call, locking of the screen, an incoming text message, a calendar event notification, and a MessageBox. There are no events specific to the different types of OS interruptions that can occur, nor any properties in ObscuredEventArgs to indicate the cause. There is an IsLocked property within ObscuredEventArgs that can be used to detect whether locking of the screen caused Obscured to fire, but otherwise no useful information is given.
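A minimal sketch of the subscription, typically placed in App.xaml.cs where the RootFrame (a PhoneApplicationFrame) is available:

```csharp
// IsLocked is the only hint about the cause of the obscuration;
// phone calls, text messages, reminders, and MessageBoxes are
// indistinguishable from one another here.
RootFrame.Obscured += (sender, e) =>
{
    if (e.IsLocked)
    {
        // The lock screen engaged; in our app, recording continues.
    }
    else
    {
        // Could be a call, a text, or a reminder -- no way to tell.
    }
};

RootFrame.Unobscured += (sender, e) =>
{
    // The shell chrome is gone; the app is fully visible again.
};
```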
This presented a problem for our application, and I could imagine it being a problem for others as well. In our app we wanted to handle incoming phone calls differently than other types of OS interruptions, such as incoming text messages or calendar event notifications. The former should pause recording, while the latter should not. (Pausing recording in response to something as innocuous as a calendar notification would be an awful experience within our app, as the user could easily be unaware recording had stopped, causing potentially important audio to be missed.)
Even the ability to identify a screen lock as the source of the Obscured event is of limited value. We’ve set up our app to continue running (and recording) under the lock screen, and at first glance it seems like it would be helpful to ignore the Obscured event when a screen lock was the cause. Unfortunately, once the screen is obscured, no other OS interruption fires the Obscured event until the screen is once again unobscured. Although we did not want to pause recording in response to the lock screen, we did want to pause recording in response to an incoming phone call, including when the screen was locked. The Obscured event is not fired for an incoming phone call while the screen is locked, however, so it would be impossible for us to provide consistent behavior both while the screen is locked and unlocked.
As a result of these limitations, we decided to ignore the Obscured event altogether, and simply allow recording to continue while a phone call is taken. Most users will likely dismiss incoming phone calls while recording, but if they take the call, at least their conversation will not be recorded. (Instead, a blank gap appears in the recorded audio.) Hopefully future versions of WP7 will offer more information about OS interruptions, so apps can respond intelligently based on the source of the interruption.
If you’ve ever created a Windows Phone 7 application, you’re probably aware that before submitting for Marketplace approval you must declare only those “capabilities” that are actually being used. Capabilities required by your app must be listed in the WMAppManifest.xml file, found in the Properties folder of your UI project. For new solutions, all capabilities are included by default, as shown below:
```xml
<Capabilities>
  <Capability Name="ID_CAP_GAMERSERVICES"/>
  <Capability Name="ID_CAP_IDENTITY_DEVICE"/>
  <Capability Name="ID_CAP_IDENTITY_USER"/>
  <Capability Name="ID_CAP_LOCATION"/>
  <Capability Name="ID_CAP_MEDIALIB"/>
  <Capability Name="ID_CAP_MICROPHONE"/>
  <Capability Name="ID_CAP_NETWORKING"/>
  <Capability Name="ID_CAP_PHONEDIALER"/>
  <Capability Name="ID_CAP_PUSH_NOTIFICATION"/>
  <Capability Name="ID_CAP_SENSORS"/>
  <Capability Name="ID_CAP_WEBBROWSERCOMPONENT"/>
  <Capability Name="ID_CAP_ISV_CAMERA"/>
  <Capability Name="ID_CAP_CONTACTS"/>
  <Capability Name="ID_CAP_APPOINTMENTS"/>
</Capabilities>
```
If you include capabilities that are not actually used in your app, your submission to the Marketplace will fail. Fortunately, the Marketplace Test Kit (found via the context menu of the UI project in Visual Studio) can be used to verify which capabilities are required. The “Capability Validation” result of the “Automated Tests” lists all capabilities used within your app. The result below shows two capabilities used by an app.
Unfortunately, if you do not declare a capability that your app does actually use, you can easily run into trouble. The automated tests of the Marketplace Test Kit will list all the capabilities that you make use of, but they will not indicate that you have missed a required capability. The test will also still indicate a “passed” status, even though you have not declared a required capability. Instead, functionality within your app can simply break, with no clear indication of the cause.
It’s usually best to leave all of the capabilities listed in the manifest file until you’re near the completion of your development, and then use the Marketplace Test Kit to determine which ones you can remove. Since this is the typical development process, it’s unusual to encounter a problem. If you’re starting with existing code, however, this can be the cause of a real headache. Perhaps you’re returning to a solution you worked on earlier, in order to add new functionality. Or perhaps you’ve started with a sample solution you came across online, expanding it with additional functionality. In either case, if you add code that requires a capability no longer included in the manifest, you may end up chasing down bugs that don’t make much sense.
Recently I was exploring the possibilities of recording and playing back audio on WP7. A quick search led me to a great starting point, in the form of a demo app for a WP7 voice recorder (http://www.codeproject.com/Articles/175122/Making-a-Voice-Recorder-on-Windows-Phone). That solution uses the Microphone class of the XNA Framework for recording, and the XNA SoundEffect class for playback. Although the SoundEffect class offers the interesting ability to adjust the pitch of the sound, it does not provide a way to specify the temporal position within the audio file. I wanted a scrub bar to allow the user to move the playback position arbitrarily. Switching to a MediaElement for playback would allow me to do this.
After making the appropriate changes, I was disappointed to find playback wasn’t working. Subscribing to the MediaFailed event of the MediaElement revealed that a failure was occurring as soon as its source was set. (The error message was the fairly useless “3100 An error has occurred”.) After much digging around, I finally realized that MediaElement requires an additional WP7 capability (“ID_CAP_MEDIALIB”); since I had introduced MediaElement into the project, I needed to add that capability to the manifest file as well.
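Subscribing to MediaFailed is worth doing in any case, since without it the failure is silent. A sketch (the element name and source URI are placeholders):

```csharp
// Surface playback failures instead of failing silently. With
// ID_CAP_MEDIALIB missing from the manifest, this handler fires
// as soon as Source is set, reporting only the terse 3100 error.
mediaElement.MediaFailed += (sender, e) =>
{
    System.Diagnostics.Debug.WriteLine(
        "Playback failed: " + e.ErrorException.Message);
};

mediaElement.Source = new Uri("recording.wav", UriKind.Relative);
```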
The lesson I learned is that if I’m starting with an existing WP7 solution and plan on making extensive changes, I should first visit the manifest file and make sure all capabilities are included. This could prevent some serious debugging headaches. Also, if I encounter bugs that don’t make much sense, it’s a good idea to run the automated tests of the Marketplace Test Kit, and ensure all capabilities listed in the results are included in the manifest file. Only when I’m close to submitting for approval do I remove unnecessary capabilities from the manifest (and I comment them out, rather than remove them, making it easier to add them all back in if I revisit the code at a later time).
I’m a big fan of NUI applications, which makes working at InterKnowlogy a great fit. Here at IK we’ve done extensive work with touch applications, Microsoft Surface, and Kinect, for example. There is not always a clear definition, however, of what makes a UI a NUI. What exactly do we mean when we say “Natural” User Interface? The term NUI is usually used to indicate the use of more natural interaction with the virtual objects displayed on the screen. Rather than using the abstraction of a mouse to click on objects, we might instead use a touch screen and let a user actually touch the object with a finger, as if it were real. Or we might let Kinect observe a user’s motions, driving interaction and interpreting complex gestures. New methods of physical interaction with the computer are certainly part of making a user’s experience more natural, but just as important are ways of strengthening the analogy of the virtual objects to real world counterparts.
In the early days of computing, the conceptual objects of an application were clearly distinct from any real world counterpart. Applications were text based, and the user was left to use his imagination to picture these concepts as real objects. Applications eventually became more visual, and the conceptual objects within them became increasingly represented visually as real world counterparts. Documents looked like paper, recipes were shown on virtual index cards, and reminders were shown as post-it notes. With the advent of XAML-based UIs, designers have been given incredible power to visualize data in creative ways. Data can now be represented elegantly, sometimes modeling concrete real world objects, and sometimes modeling more abstract concepts. But in either case, a data object is frequently treated as if it has physical presence, even if only on the screen. Users increasingly expect to be able to interact with the objects similarly to how they would interact with real world objects. Give a toddler an iPad, and he’ll quickly figure out the basic modes of interaction, because the objects react as you would expect them to. This is one way that I would define NUI: the objects in the application act and react as you would expect them to, based on your experience in the real world.
A topic that has captured my interest is the importance of animation within effective NUI. Animation used inappropriately is annoying at best. But used effectively, it can bring an application to life, making the objects behave more naturally. The best use of animation, in my opinion, is when it aids in keeping a user oriented within the application. In the real world, objects don’t disappear from one location, reappearing elsewhere instantly. Instead they move smoothly from one location to another, and their progress through time helps us to understand the transition. In applications, however, we routinely expect a user to understand what has happened when objects have jumped instantly from one spot to another. (Unfortunately, this is still typically true of applications that are described as NUI apps.) Instant changes to data layout are even more disorienting when they are driven by sources other than the user (perhaps by a remote user with whom you’re interacting, or by live changes to the data from an external feed). If the user makes a change himself, he at least will understand why the layout of data suddenly changed, even if he is not entirely certain what moved where. At best, instant changes to data layout cause a user’s experience to lose some of the NUI feel, because the illusion of working with real objects is shattered.
WPF offers some nice support for animations, from the use of storyboards to drive dependency properties, to Visual State Managers for organizing and automating the transitions required to morph data from one visual representation to another based on logical states. But WPF is lacking other basic support for NUI animations. It’s still the norm to see data jump instantly from one location to another, whether as a result of moving from one collection to another, or from the re-ordering of data within a collection, or from the introduction or deletion of data within a collection, or as the change in a detail view when selection in a master view is changed. Animating these types of changes is still often difficult, with limited support provided. And yet it is this type of animation that is, in my opinion, most important for a good NUI experience, because these are the most common changes to the data themselves. It is data that the user is most interested in staying oriented to, so we should strive to give visual cues that make changes to data understandable and relatable. Ideally, adding support for smooth (“fluid”) transitions of data layout would be simple, with the appropriate XAML simply indicating this preference declaratively.
There have been a few great starts towards this goal in XAML, but much more work is required. (Specifically, the FluidMoveBehavior and FluidLayout features in Blend are very powerful. See here for a great summary.) In future posts I will describe the attempts that Microsoft has made towards supporting such functionality, summarizing their strengths and limitations. (Examples of limitations are the inability to produce expected results when dragging-and-dropping data, and the lack of support for rotations in the transitions.) I will also discuss my own ongoing attempt to write a Fluid Layout library that provides a much richer solution to this challenge. Check back for future posts, and hopefully I’ll have some useful discussion and code to share soon!
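As a taste of the declarative approach I have in mind, here is a sketch using the Blend SDK's FluidMoveBehavior (the namespace prefixes and bindings are assumed, not taken from a real project): attaching the behavior to an items panel animates children smoothly to their new positions whenever the collection is reordered, instead of letting them jump.

```xml
<!-- Requires the Blend SDK interaction assemblies; i/ei map to the
     standard interactivity and expression/2010/interactions namespaces. -->
<ItemsControl ItemsSource="{Binding Items}">
  <ItemsControl.ItemsPanel>
    <ItemsPanelTemplate>
      <WrapPanel>
        <i:Interaction.Behaviors>
          <!-- Children glide to new layout positions over 0.4s. -->
          <ei:FluidMoveBehavior AppliesTo="Children"
                                Duration="0:0:0.4" />
        </i:Interaction.Behaviors>
      </WrapPanel>
    </ItemsPanelTemplate>
  </ItemsControl.ItemsPanel>
</ItemsControl>
```

This already covers simple reordering well; the limitations show up in drag-and-drop and rotation scenarios, as noted above.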