Unit Testing ASP.NET WebAPI Controllers

Having just written some ASP.NET WebAPI controllers, I then needed to create Unit Tests for them.  The tests would need to exercise model validation via DataAnnotation and Json.NET attributes, authentication for all HTTP methods except GET (did I mention IIS was involved here?) and, in the process, an underlying SQL data layer (see UNIT TESTING USING LOCALDB below).  I also wanted to write as few Unit Tests as possible and limit mocking, but still exercise the whole pipeline.

Some very important things I wanted to accomplish were:

  • Exercise the complete pipeline but without the use of IIS (in any flavor) and without SQLServer (although I guess you could argue that LocalDB is a version of SQLServer)
  • Write the unit tests using code similar to what I would use in Production code.  This requirement meant I could not use some of the alternate methods for testing Controllers, such as using a Controller context or special code to make sure DataAnnotation validation was occurring.

The end result eventually boiled down to a single method which makes use of an in-memory HttpServer, a reference to the static WebApiConfig class from the Web project containing the controllers, and some code to add Basic Auth to the request as needed.  The only configuration necessary in the Unit Test app.config was the <system.web><authentication/></system.web> information needed to enable authentication and the <connectionStrings/> section for access to the LocalDB database used in the Unit Tests.

The main method looks like:

    protected static Tuple<HttpStatusCode, ObjectContent> SendRequest(HttpMethod method, Uri uri, HttpContent content = null, 
	string username = null, string password = null)
    {
	HttpConfiguration config = new HttpConfiguration {IncludeErrorDetailPolicy = IncludeErrorDetailPolicy.Always};
	// The WebApiConfig class is from the Web project being tested.
	WebApiConfig.Register(config);
	HttpServer server = new HttpServer(config);
	using (HttpMessageInvoker client = new HttpMessageInvoker(server))
	{
	    using (HttpRequestMessage request = new HttpRequestMessage(method, uri.ToString()) { Content = content})
	    {
		if (!string.IsNullOrWhiteSpace(username) && !string.IsNullOrWhiteSpace(password))
		{
		    request.Headers.Add("Authorization",
			"Basic " +
			Convert.ToBase64String(
				Encoding.GetEncoding("iso-8859-1")
					.GetBytes(string.Format("{0}:{1}", username, password))));
		}

		using (HttpResponseMessage response = client.SendAsync(request, CancellationToken.None).Result)
		{
		    return new Tuple<HttpStatusCode, ObjectContent>(response.StatusCode, response.Content as ObjectContent);
		}
	    }
	}
    }

This allowed me to write fairly simple code in the Unit Tests themselves while still allowing the Unit Test to exercise model validation, authentication, the data access layer AND the Controller methods themselves, such as:

    [TestMethod]
    [DeploymentItem(@"MyDatabase.mdf")]
    [DeploymentItem(@"MyDatabase_log.ldf")]
    public void GetSingleEntity_That_Does_Not_Exist_Should_Return_NotFound()
    {
	// TestInitialize should have cleared the database.
	Tuple<HttpStatusCode, ObjectContent> result = SendRequest(HttpMethod.Get, new Uri("http://localhost/api/entity/1"));
	Assert.AreEqual(HttpStatusCode.NotFound, result.Item1);
    }
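
For a method that requires authentication, the same helper exercises the Basic Auth path as well.  Here's a sketch (the route, JSON body and expected status are illustrative; they would match whatever the Web project under test defines):

    [TestMethod]
    [DeploymentItem(@"MyDatabase.mdf")]
    [DeploymentItem(@"MyDatabase_log.ldf")]
    public void PostEntity_Without_Credentials_Should_Return_Unauthorized()
    {
	// No username/password supplied, so SendRequest adds no Authorization header.
	StringContent content = new StringContent("{\"Title\":\"test\"}", Encoding.UTF8, "application/json");
	Tuple<HttpStatusCode, ObjectContent> result = SendRequest(HttpMethod.Post, new Uri("http://localhost/api/entity"), content);
	Assert.AreEqual(HttpStatusCode.Unauthorized, result.Item1);
    }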

Unit Testing using LocalDB


I recently wrote some code with the typical data access layer as an interface to SQLServer.  The code doesn’t include a UI and has various operations occurring in its pipeline including authentication, model validation, data aggregation and so forth.  So now that the code is written it’s time for Unit Testing, right?

Anybody who has had to write Unit Tests has at some point encountered issues with testing scenarios that include some kind of server.  A typical server issue in a Unit Test is the need to build and/or reset the state of the server so that the data is “just right” for the Unit Test to perform its job.  That server might be IIS, SQLServer, SharePoint or something else, but as soon as any server is introduced into the mix there is an immediate desire to pull back and start mocking things to remove the server issues.

Unfortunately that means (when dealing with a database server) that stored procedures, table/column definitions, things like unique or check constraints, the SQL data access layer and transaction/nested transaction commit/aborts won’t get tested.

Desiring to test these things along with everything else in the pipeline, I thought LocalDB might be the answer to my issues.  This article is about implementing that.

Generating/Updating the LocalDB Database


To start the process I created everything I needed in a local SQLServer instance.  This makes things easy to implement and test since there are very good tools for doing so.

Once I had all of it working in my local SQLServer I created a Visual Studio 2013 SSDT ‘SQL Server Database Project’.  The great thing about this project type is how easy it makes moving changes from one database to another.  In my case, every time I made a change to the database, I needed to move the changes to the VS database project (source control), the SQL Azure database AND my Unit Test LocalDB.  Doing all of this takes less than 5 minutes for each change.

One quirk I ran into with the ‘SQL Server Database Project’ is that Schema Compare wouldn’t connect to the LocalDB that was in my Unit Test project.  I eventually ended up leaving the LocalDB in the APP_DATA folder of a Web project, where I would make the updates and then file-copy the database into the Unit Test project.

The database files are stored at the root of the Unit Test project.  I would prefer to store them in a folder, but idiosyncrasies with the [DeploymentItemAttribute] and differences in how the Unit Test runs in Release versus Debug led me to leave them in the root of the project and configure the [DeploymentItemAttribute] to copy the database files into the root of the test location rather than a folder.

Configure the LocalDB Database


Mark the database files as ‘Content’ and ‘Copy Always’ and attribute each Unit Test with:

	[DeploymentItem(@"MyDatabase.mdf")]
	[DeploymentItem(@"MyDatabase_log.ldf")]

I left the connection string in the app.config for the Unit Test project, and that presented the next challenge.  When a Unit Test is run, the actual on-disk location of the database file will vary.  Combine this with the need to use AttachDbFilename in the LocalDB connection string and you could end up writing some interesting code to pull out the connection string, figure out directories and use string.Format to doctor the connection string before use.  However, the code that actually pulls the connection string from the configuration was deep within the SQL data layer and I didn’t want to modify it to work with both Unit Test and Production.  Thankfully I found the answer at http://stackoverflow.com/questions/12244495/how-to-set-up-localdb-for-unit-tests-in-visual-studio-2012-and-entity-framework.  Combining ‘|DataDirectory|’ in the connection string in the app.config with the following code in each test class solved the problem.

	[ClassInitialize]
	public static void ClassSetup(TestContext context)
	{
	    AppDomain.CurrentDomain.SetData(
		"DataDirectory",
		Path.Combine(context.TestDeploymentDir, string.Empty));
	}
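
For reference, the matching connection string in the app.config ends up looking something like this (the database and connection string names are illustrative):

	<connectionStrings>
	  <add name="MyDatabase"
	       connectionString="Data Source=(LocalDB)\v11.0;AttachDbFilename=|DataDirectory|\MyDatabase.mdf;Integrated Security=True"
	       providerName="System.Data.SqlClient" />
	</connectionStrings>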

Reset Database State for Each Test


So now the Unit Test can use the LocalDB, but I also need to reset the database state before each Unit Test runs.  I could work out a scenario like detaching the database (if it’s attached), recopying the database files and reattaching before each test, but I thought it would be easier to keep the same database and just truncate all of the tables in [TestInitialize].

Unfortunately all of the tables have identity columns, foreign keys, check constraints and all of the usual things you find in a database.  This meant I couldn’t just run a SQL script in [TestInitialize] to truncate all of the tables.

I then decided I’d delete all of the rows from each table and use DBCC CHECKIDENT to reset the identity columns, so I could guarantee the row IDs of objects inserted into the SQL tables.  This led me down an interesting path.

Look at the documentation for DBCC CHECKIDENT and you’ll find the following:

[Screenshot: DBCC CHECKIDENT documentation describing the RESEED behavior for the current identity value]

The highlighted text is inconsistent with the behavior of SQLServer 2014 and LocalDB v11.0.  I didn’t test with any other versions of SQLServer or LocalDB so I don’t know if they have this issue as well, but the actual behavior (you can decide for yourself which SQL is obeying the documentation, as it’s still not clear to me) when using “DBCC CHECKIDENT('MyTable', RESEED, 0)” after “DELETE FROM MyTable” is:

  • SQLServer 2014 – The next row inserted has a row id of 1
  • LocalDB v11.0 – The next row inserted has a row id of 0!!!!

What?  I didn’t even know it was possible to have a Row ID of 0.  After many trials and tribulations, and wondering if I needed to rethink using LocalDB, I came up with the following SQL script that is run in [TestInitialize] (it runs before every test):

    BEGIN TRY
	BEGIN TRAN T1

	DECLARE @ID_TO_CHECK BIGINT

	DELETE FROM MyTable
	DBCC CHECKIDENT('MyTable', RESEED, 0)

	SAVE TRANSACTION T2
	INSERT INTO MyTable(Title) VALUES('test')
	SELECT @ID_TO_CHECK = MAX(MyTableId) FROM MyTable
	IF (@ID_TO_CHECK > 0)
	BEGIN
		ROLLBACK TRANSACTION T2
		DBCC CHECKIDENT('MyTable', RESEED, 0)
	END
	ELSE
		DELETE FROM MyTable

	-- Do more tables

	COMMIT TRAN T1
    END TRY
    BEGIN CATCH
	DECLARE @Error INT
	DECLARE @ErrorMessage NVARCHAR(4000)
	DECLARE @ErrorSeverity INT
	DECLARE @ErrorState INT

	SELECT @ErrorMessage = ERROR_MESSAGE(), 
		   @ErrorSeverity = ERROR_SEVERITY(), 
		   @ErrorState = ERROR_STATE()

	ROLLBACK TRAN T1
	RAISERROR(@ErrorMessage, @ErrorSeverity, @ErrorState)
    END CATCH

This makes sure that when the test runs, the next row inserted into the table will have a Row ID of 1 (NOT 0!!).
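
For completeness, here is a minimal sketch of the [TestInitialize] plumbing that runs the script.  It assumes the script is saved as ResetDatabase.sql, marked ‘Content’/‘Copy Always’ so it is deployed next to the test binaries, and that the connection string is named MyDatabase (all of these names are illustrative):

    [TestInitialize]
    public void TestSetup()
    {
	// The reset script contains no GO separators, so it can run as a single batch.
	string script = File.ReadAllText("ResetDatabase.sql");
	using (SqlConnection connection = new SqlConnection(
	    ConfigurationManager.ConnectionStrings["MyDatabase"].ConnectionString))
	using (SqlCommand command = new SqlCommand(script, connection))
	{
	    connection.Open();
	    command.ExecuteNonQuery();
	}
    }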

Async or Asink?

More and more nowadays there is a push to run application code asynchronously to prevent blocking the UI thread and making the application unresponsive.  As the need for this type of programming becomes more prevalent, the APIs for it have thankfully become easier to use.  That last part (easier APIs) has led to more and more async code sprinkled through applications like the little candies on top of donuts.  Unfortunately, easier APIs also make it easier to abuse the functionality by not properly implementing exception handling.

All of these async sprinkles can sink an application fast, as code may be exploding left and right with the application user none the wiser.  From the user’s perspective everything seems like it’s OK (i.e. the app doesn’t abort or show error dialogs) but nothing seems to be working.

I’d like to take a moment to look at various ways to async a task and highlight the exception handling issues related to each.  To do this I am going to use the following program framework.  My goal each time is to first see whether an unhandled exception occurs and, if so, to handle it appropriately.

	class Program
	{
		static void DoSomething(object state)
		{
			try
			{
				throw new InvalidOperationException();
			}
			catch
			{
				Console.WriteLine("Exception thrown in method DoSomething.");
				throw;
			}
		}

		static void Main(string[] args)
		{
			AppDomain.CurrentDomain.UnhandledException += CurrentDomain_UnhandledException;

			//
			// TODO: Insert async code that calls the method DoSomething
			//

			Console.WriteLine("Done.");
			Console.ReadLine();
		}

		static void CurrentDomain_UnhandledException(object sender, UnhandledExceptionEventArgs e)
		{
			Console.WriteLine("Entered UnhandledException handler.");
		}
	}

As I replace the “TODO:” with each type of async code I’ll “Start Without Debugging” in Visual Studio. 

If the program runs WITHOUT encountering an unhandled exception the window will look like the following.  This can happen in two different scenarios: either the program encountered an unhandled exception but the exception was gobbled up and nobody will ever know, or we have implemented proper exception handling code to deal with the exception when it occurs.

[Screenshot: console window showing only “Done.” — no unhandled exception]

If the program encounters an unhandled exception then the window will look something like the following, in which case the task becomes implementing proper exception handling code.

[Screenshot: console window showing the unhandled exception output and crash dialog]

Background Thread Exception Behavior

Before we get into the meat of this article it’s important to note that both the .NET Framework version and the existence of a specific configuration setting can change the behavior of an application when an exception occurs on a background thread.  For more information on this topic see http://msdn.microsoft.com/en-us/library/ms228965.aspx.

Briefly, .NET 1.0 and .NET 1.1 allowed background threads to throw unhandled exceptions that WOULD NOT terminate the application.  The .NET Framework would terminate the thread itself, but the application would be unaffected and more than likely the unhandled exception would go unnoticed.

For all other .NET Framework versions this behavior can be reinstated (BAD IDEA!) using the application configuration setting:

	<configuration>
		<runtime>
			<!-- the following setting prevents the host from closing when an unhandled exception is thrown -->
			<legacyUnhandledExceptionPolicy enabled="1" />
		</runtime>
	</configuration>

All of the code that follows WILL NOT have the configuration setting enabled and will be using .NET 4.5.  That being said, it is very easy to get the same behavior without the configuration setting on a newer .NET Framework version.  In fact, it sometimes seems like the old style is still in effect.

System.Threading.Thread

Our first attempt will be to use the System.Threading.Thread object.  We’ll replace the // TODO: line with the code:

	// 1. Thread.Start
	Thread thread = new Thread(new ParameterizedThreadStart(DoSomething)) { IsBackground = false };
	thread.Start();

When this code is executed we see that an unhandled exception occurs.  Note that I set the IsBackground property to false. 

If this is set to true then no unhandled exception occurs.  What’s up with this?  I thought .NET Framework 2.0 and above didn’t swallow exceptions anymore?  Even though “technically” the .NET Framework does not swallow exceptions anymore, “effectively” it does UNLESS you go back at some point and “sync up” with the thread.  When using System.Threading.Thread this is done via a call to the Join method on the thread.  So if we modify the code to:

	// 1. Thread.Start
	Thread thread = new Thread(new ParameterizedThreadStart(DoSomething)) { IsBackground = false };
	thread.Start();
	thread.Join();

we see the proper behavior (an unhandled exception occurs) whether the thread is a background thread or not.  When using this method to async something, the only way to properly handle the exception is in the try/catch block in the DoSomething method.  Putting a try/catch block around the thread.Join() call does nothing, even though that pattern works with other async APIs as we’ll see later.
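
A sketch of what that looks like, handling instead of rethrowing in the DoSomething method from the program framework:

	static void DoSomething(object state)
	{
		try
		{
			throw new InvalidOperationException();
		}
		catch (InvalidOperationException)
		{
			// Handle the exception here (log it, compensate, etc.) and do NOT
			// rethrow; the rethrow is what turned it into an unhandled exception.
			Console.WriteLine("Exception handled in method DoSomething.");
		}
	}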

delegate BeginInvoke/EndInvoke

The next attempt to async the DoSomething method will use the delegate async architecture.  A delegate provides the methods Invoke(), BeginInvoke() and EndInvoke().  Using the Invoke() method would be the equivalent of just calling the DoSomething method without any async wrapping.  A call to BeginInvoke() is what will cause the DoSomething method to run asynchronously.  To run this code replace the // TODO: line with the code:

	// 2. BeginInvoke/EndInvoke
	Action<object> action = DoSomething;
	action.BeginInvoke(null, null, null);

We’re back to not seeing an unhandled exception.  We receive the message “Exception thrown in method DoSomething.” but that’s it.  No unhandled exception.  Just like with System.Threading.Thread we have to “sync up” with the thread at some point in order for the unhandled exception to be properly generated:

	// 2. BeginInvoke/EndInvoke
	Action<object> action = DoSomething;
	action.BeginInvoke(null, action.EndInvoke, null);

and to properly handle the exception we can either modify the catch block in the DoSomething method, as with System.Threading.Thread, or put a try/catch block around the EndInvoke() like so:

	// 2. BeginInvoke/EndInvoke
	Action<object> action = DoSomething;
	action.BeginInvoke(
		null,
		result =>
			{
				try
				{
					action.EndInvoke(result);
				}
				catch (Exception ex)
				{
					Console.WriteLine("Exception handled.");
				}
			},
		null);

Note that adding a Console.WriteLine() to the catch block isn’t REALLY properly handling the exception.  It’s just simulating proper handling code, which would more than likely log the exception, possibly correct things and maybe even let it continue to percolate up the stack via a throw; statement.

System.Threading.ThreadPool.QueueUserWorkItem

The next attempt to async the DoSomething method will specifically use the ThreadPool.  Some of the later methods use the ThreadPool implicitly but here we will do so explicitly.  To run this code replace the // TODO: line with the code:

	// 3. ThreadPool.QueueUserWorkItem
	ThreadPool.QueueUserWorkItem(DoSomething, null);

It’s refreshing to see that we don’t actually have to do any “sync up” with the thread.  And realistically you couldn’t even do it.  Regardless we find that in this specific instance the unhandled exception is properly generated.

To properly handle the exception we can use a method similar to that used with BeginInvoke/EndInvoke:

	// 3. ThreadPool.QueueUserWorkItem
	ThreadPool.QueueUserWorkItem(
		state =>
			{
				try
				{
					DoSomething(state);
				}
				catch (Exception ex)
				{
					Console.WriteLine("Exception handled.");
				}
			},
		null);

System.ComponentModel.BackgroundWorker

The next attempt to async the DoSomething method will use the BackgroundWorker class.  To run this code replace the // TODO: line with the code:

	// 4. BackgroundWorker
	BackgroundWorker worker =
		new BackgroundWorker();
	worker.DoWork += (sender, e) => DoSomething(e);
	worker.RunWorkerCompleted += (sender, e) => worker.Dispose();
	worker.RunWorkerAsync();

We find once again that in this initial case the unhandled exception is not properly generated.  With the BackgroundWorker there is no way to “sync up” without resorting to things like status-checking loops or a ManualResetEvent.  We can, however, add code to properly handle the exception.

	// 4. BackgroundWorker
	BackgroundWorker worker =
		new BackgroundWorker();
	worker.DoWork += (sender, e) => DoSomething(e);
	worker.RunWorkerCompleted +=
		(sender, e) =>
			{
				if (e.Error != null)
				{
					Console.WriteLine("Exception handled.");
				}

				worker.Dispose();
			};
	worker.RunWorkerAsync();

System.Threading.Tasks.Task

The next attempt to async the DoSomething method will use the .NET 4 Task.  To run this code replace the // TODO: line with the code:

	// 5. Task.Factory.StartNew
	Task.Factory.StartNew(DoSomething, null);

Lately I’ve been finding that this is the most common pattern somebody will use to async a method, most likely because it’s cool and it’s easy.  This particular pattern is why I wrote this article.  The problem is that this pattern, like the others, will allow the called method to throw an exception with the application code none the wiser.  Imagine 10, 20, 50, 100s of these spread throughout the code base, all randomly aborting, while the user wonders why things don’t seem to work even though they see no obvious signs of problems, i.e. exceptions or error dialogs.

If we modify this code to “sync up” with the thread like so:

	// 5. Task.Factory.StartNew
	Task task = Task.Factory.StartNew(DoSomething, null);
	task.Wait();

We find that the unhandled exception is properly generated; however, task.Wait(), like thread.Join(), kind of defeats the purpose of attempting to run the DoSomething method asynchronously.  To solve this problem use task chaining:

	Task.Factory
	    .StartNew(DoSomething, null)
	    .ContinueWith(
		    task =>
			    {
				    try
				    {
					    task.Wait();

					    // If the DoSomething method returned a result
					    // we could reference task.Result instead to
					    // trigger the exception if one occurred otherwise
					    // process the result.
				    }
				    catch (AggregateException ae)
				    {
					    ae.Handle(
							(ex) =>
								{
									Console.WriteLine("Exception Handled.");
									return true;
								});
				    }
			    });

In our case the DoSomething method is of type Action<object> and not Func<object, TResult>, i.e. it doesn’t return a result.  If the DoSomething method returned a result we could reference task.Result instead of using task.Wait() to “sync up” with the task and trigger the exception.  In this code we use the AggregateException.Handle method to handle the exceptions.  .NET 4 tasks throw exceptions of type AggregateException.  These differ from System.Exception in that they have an InnerExceptions property (note that it’s plural) as well as the inherited InnerException property.  An AggregateException can contain multiple exceptions (one from each task that was chained) and the AggregateException.Handle method will call the specified delegate for each of those exceptions.

Since the DoSomething method doesn’t return a result there is nothing to process after it’s done, so we can tell the .ContinueWith task to only execute if a fault occurred, like so:

	// 5. Task.Factory.StartNew
	Task.Factory
	    .StartNew(DoSomething, null)
	    .ContinueWith(
		    task =>
			    {
					try
					{
						task.Wait();
					}
					catch (AggregateException ae)
					{
						ae.Handle(
							(ex) =>
							{
								Console.WriteLine("Exception Handled.");
								return true;
							});
					}
			    },
		    TaskContinuationOptions.OnlyOnFaulted);

How to prove you are a novice .NET programmer (String Concatenations)

I freely admit that I still have a lot to learn with respect to .NET programming.  I figure in any endeavor the day you think you have finally figured it all out is the day when you have well and truly, totally deluded yourself.  There are many ways to determine that somebody (yourself or another) is a novice .NET programmer, but there are some things that truly seem to stand out for me.

String Concatenation

How string concatenations are coded has implications for raw code performance, but far more importantly it has a HUGE impact on memory performance.  When the string concatenations are occurring within a web application they can even cause recycling of the application pool.  Basically, with reference to memory performance, the more temporary objects code creates that need to be garbage collected, the more often garbage collections have to occur and the more likely it is that the code will receive OutOfMemoryExceptions and/or that application performance will suffer during the garbage collections.

There are basically two methods for concatenating strings in C#: use of the overloaded string concatenation operator ‘+’, and use of the StringBuilder class.  There are a variety of BCL methods that concatenate strings, like string.Join, but underneath they are usually using the StringBuilder class so I’ll ignore that permutation of this issue.

There is a good article at http://support.microsoft.com/kb/306822 documenting the raw code performance statistics of some sample code comparing the two methods of string concatenation.  The code sample uses a loop to concatenate a string 5,000 times.  You might think this is an extreme example used to illustrate the issue.  It would be nice to think so; however, I recently saw some code that used 40+ concatenations per record to generate a string for a data structure that consisted of approximately 20,000 records.  That comes to over 800,000 string concatenations.  Ok, forget the extreme examples, let’s look at the following code:

    data = "\"" + data + "\",";

Each time this line of code executes it will create two new strings and leave two old strings around to be garbage collected at some point in the future.  Now do this in a loop 10 times or 20 times or 100 times.

  • String 1 = "\"" + data
  • String 2 = (String 1) + "\","
  • Throw away String 1
  • Throw away the original value of data
  • Keep String 2 (at least until the next concatenation occurs)

On the other hand the code:

    data = string.Format("\"{0}\",", data);

creates one new string and leaves the old value of data to be garbage collected at some point in the future.  Using the StringBuilder class does even better with the code:

    StringBuilder stringBuilder = new StringBuilder();
    stringBuilder.AppendFormat("\"{0}\",", data);

which doesn’t create any new strings (until all concatenations are complete and a call is made to stringBuilder.ToString()) and leaves the old value of data to be garbage collected at some point in the future.  One consideration when using the StringBuilder class is to use the StringBuilder(int) constructor overload to give StringBuilder a larger-than-default estimate of how big the string will get, reducing the number of internal array allocations used to hold the parts of the once and future string.
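
For example (the initial capacity of 4096 and the records collection are illustrative stand-ins for a real size estimate and data source):

    // Preallocating the internal buffer avoids repeated array growth as the string builds up.
    StringBuilder stringBuilder = new StringBuilder(4096);
    foreach (string data in records)
    {
        stringBuilder.AppendFormat("\"{0}\",", data);
    }
    string result = stringBuilder.ToString();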

Let’s look a little more at some code that appears fairly regularly:

    string result = String.Empty;
    foreach (var item in items)
    {
        if (!String.IsNullOrEmpty(result))
        {
            result += ",";
        }
        result += item.Name;
    }

    return result;

If this code executes on a collection with just four values it will have created seven new strings, returning one and leaving six to be garbage collected.  This code could be rewritten in a couple of different ways:

Better:

    StringBuilder result = new StringBuilder();
    foreach(var item in items)
    {
        if (result.Length > 0)
        {
            result.Append(",");
        }
        result.Append(item.Name);
    }

    return result.ToString();

But why even have all that code? I love LINQ! And string.Join uses the StringBuilder class for us.

Best:

    return string.Join(",", items.Select(item => item.Name));

Learn to be Lazy<T>

Updated: 3/24/2013


It sounds funny but it’s a matter of fact that many people work very hard to be lazy.  There are a number of common patterns to this pursuit of…

A common one looks like:

    public static SchemaProvider Instance
    {
        get
        {
            if (_instance == null)
            {
                _instance = new SchemaProvider();
            }

            return _instance;
        }
    }

Ok, I admit it. I wrote that one in the .NET 3.5 days. Another one I’ve been seeing a lot of lately looks like:

    private DelegateCommand<EditCommandArgs> _editCommand;
    public DelegateCommand<EditCommandArgs> EditCommand
    {
        get { return _editCommand ?? (_editCommand = new DelegateCommand<EditCommandArgs>(EditCommandExecuted)); }
    }

Both of these have issues. 

One is that they are not thread-safe (although depending on how they are used they may never have a problem).

I think an even more rudimentary issue is why these particular pieces of code are even using lazy instantiation.  As far as I understand it, the main reason for lazy instantiation is that code needs to instantiate an object that is expensive, either in the time it takes to construct and return the object or in the amount of memory it takes.  The hope is that maybe the code will never have to actually instantiate the object, but if it does, you don’t want application startup (for instance) to be delayed while the code constructs these objects.

The first example could simply have used:

    private static SchemaProvider _instance = new SchemaProvider();

The second code block could have just set the variable _editCommand in the constructor and been done with it.
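
That is, something along these lines (MyViewModel standing in for whatever class owns the command):

    public MyViewModel()
    {
        _editCommand = new DelegateCommand<EditCommandArgs>(EditCommandExecuted);
    }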

Just for the sake of argument, however, let’s say that there was a valid need to lazily construct some object, hopefully in a thread-safe way.

Good news!  As of .NET 4 the Lazy<T> class is available for all your lazy resource conservation needs.

An example is:

    private static readonly Lazy<IDataProvider> _dataProvider = new Lazy<IDataProvider>(() => new DataProvider(), LazyThreadSafetyMode.ExecutionAndPublication);
    public IDataProvider DataProvider { get { return _dataProvider.Value; } }

If you need even more control over the object creation, just use a method as the Func<TResult> delegate instead of a lambda:

    private static readonly Lazy<ILogger> _logger = new Lazy<ILogger>(GetLogger, LazyThreadSafetyMode.ExecutionAndPublication); 
    private static ILogger GetLogger()
    {
      ...
    }
    public ILogger Logger { get { return _logger.Value; } }

Jeffrey Richter covers this very thoroughly in his book ‘CLR via C#’, Chapter 30, ‘The Famous Double-Check Locking Technique’ (http://www.amazon.com/CLR-via-Microsoft-Developer-Reference/dp/0735667454/ref=pd_rhf_gw_s_cp_8), showing that the simple patterns you see most often don’t really work well, if at all, anyway.  In addition he gives an excellent explanation of the LazyThreadSafetyMode enumeration values, with examples.

Azure Table Storage Exceptions with Multiple Table Entity Schemas


I’ve been messing with Azure Table Storage recently and needed to create a somewhat nontrivial data model to try some things. 

This data model includes patients, patient addresses (email, IM, postal, phone, etc.) and patient events.  I also wanted to store all of this data in the same table so I had table entities with differing schemas in the same table.

I then wrote some code that created an array of patients, and for each patient I added the patient record and a random number of different patient addresses to the table.

The more I work with Windows Azure the more I come to the conclusion that debugging Windows Azure code is like a doctor treating a patient, i.e. make random changes and see if that fixes the problem.  Trying to figure out what the problem is from the exception information is like looking into a crystal ball: it just doesn’t show anything other than what you can imagine.

Executing the code results in what appears to be one of the most common exceptions there is when working with Windows Azure:

{Microsoft.WindowsAzure.StorageClient.StorageExtendedErrorInformation}
    AdditionalDetails: null
    ErrorCode: "InvalidInput"
    ErrorMessage: "0:One of the request inputs is not valid."


So my next issue was that I had added multiple records, so which one was causing the problem?  To figure that out you need to enumerate the DataServiceRequestException.Response property, which gives you an IEnumerable<ChangeOperationResponse> collection.  From my experience so far there only ever appears to be one entry in this collection, no matter how many of the records you created would have had problems.  What you look for is a header by the name of “Content-ID” on the OperationResponse.  Its value is the 1-based index of the record among those added before calling TableContext.SaveChangesWithRetries(SaveChangesOptions.Batch).
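
A sketch of digging that out (the tableContext naming is mine; the Headers lookup is the interesting part):

	try
	{
	    tableContext.SaveChangesWithRetries(SaveChangesOptions.Batch);
	}
	catch (DataServiceRequestException ex)
	{
	    foreach (ChangeOperationResponse operationResponse in ex.Response)
	    {
		string contentId;
		if (operationResponse.Headers.TryGetValue("Content-ID", out contentId))
		{
		    // Content-ID is the 1-based index of the offending entity,
		    // in the order the entities were added to the context.
		    Console.WriteLine("Failing entity index: {0}", contentId);
		}
	    }

	    throw;
	}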

You can see this a lot better if you use Fiddler to view the input/output of the batch request.  See http://learningbyfailing.com/2009/12/using-fiddler-with-azure-devstorage/ for how to get Fiddler to display the traffic.

In my case it kept pointing to the second record I added (the first was the patient record and the second was an address record).  If I saved the patient record by itself it worked, and if I saved the address records in a separate batch that worked too.  So in spite of the exception pretty much not telling me what the problem was, I came to the conclusion that I can’t mix table entity schemas in a single batch.  Apparently this is a restriction of development storage; it will work when using cloud storage.

It’s too bad the exception didn’t tell me this.

Remember that Azure Tables have limited property datatype support

Recently I threw some code together to add objects to an Azure table.  I used the class:

	[DataServiceKey("PartitionKey", "RowKey")]
	public class OrderMessage : TableServiceEntity
	{
		public DateTime OrderDate { get; set; }
		public string CustomerName { get; set; }
		public string CreditCard { get; set; }
		public int Quantity { get; set; }
		public decimal CostEach { get; set; }
	}

Upon adding the data to the table using:

	TableServiceContext tableContext = connection.TableClient.GetDataServiceContext();
	tableContext.AddObject(connection.OrderTableName, message);
	DataServiceResponse response = tableContext.SaveChangesWithRetries();

I received the error:

	<?xml version="1.0" encoding="utf-8" standalone="yes"?>
	<error xmlns="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata">
	  <code>InvalidInput</code>
	  <message xml:lang="en-US">One of the request inputs is not valid.</message>
	</error>


After wasting some time looking at help and Googling, I was skimming across some documentation on tables when I happened upon the list of supported property types for Azure tables (byte[], bool, DateTime, double, Guid, int, long and string).  I knew the supported types were limited, but not until I looked at that list did it occur to me that I was using the unsupported datatype ‘decimal’.  Modifying the class so that CostEach was of type ‘double’ resolved my problem.

It sure would be nice if the error were a little more explicit.  I’m sure that somewhere in the Azure code it knows what happened.  I also find it interesting that rather than returning information in the DataServiceResponse it throws an exception.  I don’t see this ability to throw exceptions in the documentation; in fact the documentation says that the return value is:

A DataServiceResponse that contains status, headers, and errors that result from the call to SaveChanges.

Oh well, I guess somebody kinda forgot to update their XML comments on the method with:

/// <exception cref="System.Data.Services.Client.DataServiceClientException">A stealth exception that we won't tell anybody about</exception>

More than once I’ve seen a reminder on blogs to make sure you only use the supported data types on your table entities.   Here’s another reminder for you and *bonk* me!

Windows Azure error “There was an error attaching the debugger to the IIS worker process for URL ‘http://127.255.0.0:82/’…”

Since I last rebuilt my development machine I haven’t had a need to even look at web development, let alone Windows Azure.  The last time I had “opportunity” to develop anything using Windows Azure was with version 1.3.  At the time version 1.4 was still in beta and I couldn’t seem to install it successfully.

Lucky me, I was added to a project using Windows Azure, so I installed version 1.6 along with the Windows Azure Platform Training Kit – November Update and decided to make a quick run through some of the training kit to see how things worked.

Much to my chagrin, attempting to run the training projects only ever resulted in the dialog:

[Screenshot: “There was an error attaching the debugger to the IIS worker process for URL ‘http://127.255.0.0:82/’…” dialog]

The first thing that popped out at me was the IP address ‘127.255.0.0’.  I immediately proceeded to look at the project properties to figure out where this came from, to no avail.  I then unloaded the projects and looked through the raw project and solution files, again to no avail.  Pinging the address did succeed, so I looked through my hosts file in ‘%windir%\System32\drivers\etc’.  Nope, it wasn’t there either.

Searching the Internet (hmmm, I wonder if the term ‘Binging’ will be added to the dictionary?) on the error message gave me a whole lot of nothing.  Refining my search to just the IP address sent me off on a tangent of Windows Azure v1.5 blogs about the need to add entries to the hosts file, although they were helpful in educating me about what in the world the IP address was.  For more information on that see http://blogs.msdn.com/b/avkashchauhan/archive/2011/09/16/whats-new-in-windows-azure-sdk-1-5-each-instance-in-any-role-gets-its-own-ip-address-to-match-compute-emulator-close-the-cloud-environment.aspx.

Finally *bonk* I got the idea to look in the Windows Event Log.  It should have been the first place I looked, but I guess I hadn’t drunk enough coffee yet to think straight.

I found two errors:

ISAPI Filter ‘C:\Windows\Microsoft.NET\Framework\v4.0.30319\\aspnet_filter.dll’ could not be loaded due to a configuration problem. The current configuration only supports loading images built for a AMD64 processor architecture. The data field contains the error number. To learn more about this issue, including how to troubleshooting this kind of processor architecture mismatch error, see http://go.microsoft.com/fwlink/?LinkId=29349.

and

Could not load all ISAPI filters for site ‘DEPLOYMENT16(11).WINDOWSAZUREPROJECT1.GUESTBOOK_WEBROLE_IN_0_WEB’.  Therefore site startup aborted.

That certainly gave me a good clue.  Why I need ‘Enable 32-bit applications’ on the application pool I have no idea, since I’m compiling as ‘Any CPU’.  Compiling as x64 results in the same errors, and compiling as x86 fails because I’m running on an x64 box, resulting in the dialog:

[Screenshot: error dialog from the x86 build attempt]

Every time the compute emulator starts it creates a new application pool with ‘Enable 32-bit applications’ set to false.  When the compute emulator shuts down it removes the application pool, so manually resetting this value doesn’t help.  Searching around found http://blogs.msdn.com/b/zxue/archive/2011/10/31/enabling-support-for-32-bit-iis-applications-in-windows-azure.aspx.  Adding a startup task to set the IIS default to allow 32-bit applications solved my problems.  It really only needs to be run once, but I just leave it in the project just in case.
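
The startup task boils down to a one-line .cmd file that flips the IIS default, registered in the service definition (this mirrors the approach in the linked post; the file name is mine):

	REM EnableIIS32BitApps.cmd
	%windir%\System32\inetsrv\appcmd set config -section:system.applicationHost/applicationPools -applicationPoolDefaults.enable32BitAppOnWin64:true /commit:apphost

	<!-- ServiceDefinition.csdef, inside the <WebRole> element -->
	<Startup>
	  <Task commandLine="EnableIIS32BitApps.cmd" executionContext="elevated" taskType="simple" />
	</Startup>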

Using Compiled Resources and Generating Debug Info with MASM32

I had been using the MASM32 SDK to get familiar with Microsoft Assembly.  I thought I’d share a couple of basic things that helped me in the process.

Using Compiled Resources

The MASM editor that is installed from the SDK is the small-memory-footprint ‘Quick Editor’.  Creating, compiling, linking and running MASM source code through this editor is a simple process.  It definitely makes you miss some of the tools that Visual Studio provides though, one of which is the resource editor.  The good news is that you can create resource files (.rc) in a Visual Studio C++ project, munge them a bit (remove the C++-ishness like included .h files, embed the #defines into the .rc rather than have a separate .h file, and such things as that) and use them in the compilation process to produce your final binary.

To start the process you need to use a menu option in the editor to generate a ‘makeit.bat’ file.  This file compiles the assembler files, compiles the .rc file and links everything for you.  To generate this file use the editor menu option:

Script –> Create EXE makeit.bat

This assumes, of course, that you are creating an .exe type binary.  If you’re creating something else then use the appropriate menu option.  The generated file assumes that your .rc file is named rsrc.rc, so after you create your .rc file in Visual Studio just copy it into your MASM project directory with the file name rsrc.rc.

The ‘makeit.bat’ file that is created looks like:

[Screenshot: the generated makeit.bat contents]

Generating Debug Info and Other Stuff

To cause MASM32 to also generate debug information when it compiles and links, I simply edited this file and added command-line parameters to both the compile step (‘\masm32\bin\ml’ command line: /Zi /Zd /Zf) and the link step (‘\masm32\bin\Link’ command line: /DEBUG /DEBUGTYPE:CV).  Note that I also added command-line parameters to the compile command to generate browse information (/FR), a map file (/Fm), an assembled code listing (/Fl) and warnings (/W3).

[Screenshot: makeit.bat after adding the debug, browse-info, map, listing and warning switches]
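
Reconstructed as text, the edited commands end up roughly like this (a sketch assuming a main.asm source file; the generated batch file also contains existence checks and cleanup that I’ve omitted):

	\masm32\bin\ml /c /coff /Zi /Zd /Zf /FR /Fm /Fl /W3 main.asm
	\masm32\bin\rc rsrc.rc
	\masm32\bin\Link /SUBSYSTEM:WINDOWS /DEBUG /DEBUGTYPE:CV main.obj rsrc.res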

Now when I compile I get my assembled code listing (.lst), browse information (.sbr) and debug information (.pdb) files.

[Screenshot: build output directory showing the .lst, .sbr and .pdb files]

Configuring Visual Studio 2010 for Assembly Development

In my last blog entry, “Assembling Old-School Skills”, I bemoaned the fact (or what I thought was the fact) that Visual Studio 2010 ended up just being a glorified text editor when doing MASM development.

Buzzzz! Wrong!

You can in fact configure Visual Studio 2010 for use in MASM Assembly development.  The following steps are what’s needed to accomplish this:

  • Create a Visual C++ “Win32 Console Application” project

[Screenshot: New Project dialog with Visual C++ ‘Win32 Console Application’ selected]

  • Press ‘Next’, uncheck ‘Precompiled header’ and press ‘Finish’
  • Choose ‘Build Customizations…’ from the project’s context menu, check ‘masm’ and press ‘OK’

[Screenshot: the ‘Build Customizations’ dialog]

  • Delete all .cpp and .h files from the project
  • Add the file ‘Main.asm’ under ‘Source Files’
  • Add the following text from the tutorial at http://win32assembly.online.fr/tut2.html to ‘Main.asm’
        .386
        .model flat,stdcall
        option casemap:none

        include windows.inc
        include kernel32.inc
        includelib kernel32.lib
        include user32.inc
        includelib user32.lib

        .data
        MsgBoxCaption db "Iczelion Tutorial No.2",0
        MsgBoxText db "Win32 Assembly is Great!",0

        .code
        start:
        invoke MessageBox, NULL, addr MsgBoxText, addr MsgBoxCaption, MB_OK
        invoke ExitProcess, NULL
        end start
  • Now when you look at the project properties you will find ‘Microsoft Macro Assembler’ under the ‘Configuration Properties’

[Screenshot: project properties showing the ‘Microsoft Macro Assembler’ node]

  • In the project properties edit the following:
    • Linker –> General
      • Additional Library Directories
    • Microsoft Macro Assembler
      • General
        • Include Paths – Add the path to your .inc files.  I’m using the ones from the MASM32 SDK.
      • Listing File
        • Assembled Code Listing File – Set to something similar to: $(IntDir)%(FileName).lst
  • Ignore the IntelliSense squiggly at the beginning (the ‘PCH warning’).  It seems to be something to do with Visual Studio 2010 SP1 (see ‘PCH warning after installing SP1’)
  • Compile and run your code