About the Author

Paul is currently a software developer at InterKnowlogy, a leading-edge developer of custom software solutions on the Microsoft platform, and an alumnus of Neumont University with a Bachelor of Computer Science. Professionally at InterKnowlogy, Paul is responsible for developing and maintaining cutting-edge applications for various clients. This primarily means working with XAML (spanning WPF, Silverlight, and Windows Phone 7), C#, and SQL, in addition to a large number of related or supporting technologies and tools. In his spare time Paul writes code, blogs, speaks at various events, takes pictures, attempts to surf, goes hiking, plays Starcraft II, explores southern California, helps out with his church, and hangs out with friends and family. Life rocks.

CSS 3D Clouds Retake

This post covers my experience following the tutorial on making CSS 3D clouds posted here: http://www.clicktorelease.com/tutorials/css3dclouds/. I didn’t write the original code, but I ran into several issues as I went through, and I wanted to share my experience, work-arounds, and the pieces of code that were missing from the original tutorial.

All the issues that came up and the fixes described here were worked out on Chrome Beta (v20.0.1132.41 beta-m, to be exact).

1. Creating the world and a camera

CSS 3D Clouds Step 1

In this step, you create two <div> elements in the body of your HTML5 page: the outer <div> gets an id of viewport and the inner <div> gets an id of world. From there you set up some structural styling (viewport spans the entire screen via absolute positioning; world is centered in the middle).

Initial Page Setup

Some initial styling needs to be done to make it look like the demo; here’s what I have:

* {
	box-sizing: border-box;
	margin: 0;
	padding: 0;
}

body {
	overflow: hidden;
}

#viewport {
	background-image: linear-gradient(to top, rgb(69, 132, 180) 28%, rgb(31, 71, 120) 64%);
	background-image: -o-linear-gradient(bottom, rgb(69, 132, 180) 28%, rgb(31, 71, 120) 64%);
	background-image: -moz-linear-gradient(bottom, rgb(69, 132, 180) 28%, rgb(31, 71, 120) 64%);
	background-image: -webkit-linear-gradient(bottom, rgb(69, 132, 180) 28%, rgb(31, 71, 120) 64%);
	background-image: -ms-linear-gradient(bottom, rgb(69, 132, 180) 28%, rgb(31, 71, 120) 64%);
}

#world {                    
	background-color: rgba(255, 0, 0, .2);
}

Vendor prefixes

I’m so used to Chrome being the “latest and greatest” that I honestly expected to be able to use non-prefixed CSS properties and have the code “just work”. That’s NOT the case. As of this writing you will need prefixed properties, so add the following prefixed declarations to the appropriate rules:

#viewport {
	perspective: 400;
	-webkit-perspective: 400;
	-moz-perspective: 400;
	-o-perspective: 400;
}

#world {                    
	transform-style: preserve-3d;
	-webkit-transform-style: preserve-3d;
	-moz-transform-style: preserve-3d;
	-o-transform-style: preserve-3d;
}

Help, my JavaScript code doesn’t work!

You probably put your JavaScript in the <head> tag, which means that document.getElementById( 'world' ) will not work because the elements don’t exist yet. Put the script at the end, right before the closing </body> tag, and it should work if everything else is correct.

Besides, it’s just good practice to put your JavaScript last.
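For reference, here’s a minimal sketch of the overall page shape I ended up with (the ids come from the tutorial; the structure around them is my own):

<body>
	<div id="viewport">
		<div id="world"></div>
	</div>

	<script>
		// These lookups succeed because the elements above already exist.
		var world = document.getElementById( 'world' );
		var viewport = document.getElementById( 'viewport' );
		// ... the rest of the tutorial code goes here ...
	</script>
</body>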

Help, my JavaScript code doesn’t work! (pt 2)

This just shows my ignorance of JavaScript, but if something still isn’t working, you might have this problem:

JavaScript uses the \ character at the end of a line in your strings to tell the parser to treat the next line as if the string continued:

'translateZ( ' + d + 'px ) \
rotateX( ' + worldXAngle + 'deg) \
rotateY( ' + worldYAngle + 'deg)';

Is the same as:

'translateZ( ' + d + 'px ) rotateX( ' + worldXAngle + 'deg) rotateY( ' + worldYAngle + 'deg)';

Zooming JavaScript

The code samples in the original tutorial omit the code to zoom in and out with the mouse wheel. Here it is in all its JavaScripty wonderfulness:

window.addEventListener( 'mousewheel', onContainerMouseWheel );
window.addEventListener( 'DOMMouseScroll', onContainerMouseWheel );

function onContainerMouseWheel( event ) {
	event = event ? event : window.event;
	d = d - (event.detail ? event.detail * -5 : event.wheelDelta / 8);
	updateView();
}
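One optional tweak of my own, not from the tutorial: without limits, d can zoom far past the cloud field, so you may want to clamp it right before the updateView() call (the limits here are a guess):

	// Keep the camera distance within a sensible range.
	d = Math.max( -1000, Math.min( 500, d ) );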

2. Adding objects to our world

CSS 3D Clouds Step 2
In this step you add the container <div> elements (e.g. .cloudBase) that the cloud layers will hang off of.

The cloud base creation code is incorrect

Instead of:

for( var j = 0; j <<; 5; j++ ) {

the correct line is:

for( var j = 0; j < 5; j++ ) {

Actual random numbers and prefixed transforms

The random numbers for createCloud():

// Each coordinate lands in the range (-256, 256].
var random_x = 256 - ( Math.random() * 512 );
var random_y = 256 - ( Math.random() * 512 );
var random_z = 256 - ( Math.random() * 512 );

The transform styles for createCloud():

div.style.transform = t;
div.style.webkitTransform = t;
div.style.MozTransform = t;
div.style.OTransform = t;

3. Adding layers to our objects

CSS 3D Clouds Step 3
There were a couple of things in this section that caused me to scratch my head and go whyyyyy?

Code for random generation and transforms.

Random variables:

var random_x = 256 - ( Math.random() * 512 );
var random_y = 256 - ( Math.random() * 512 );
var random_z = 100 - ( Math.random() * 200 );
var random_a = Math.random() * 360;
var random_s = .25 + Math.random();
random_x *= .2; random_y *= .2;

Vendor transforms:

cloud.style.transform = t;
cloud.style.webkitTransform = t;
cloud.style.MozTransform = t;
cloud.style.OTransform = t;

Why don’t I see the new squares?

You have to add the style for .cloudLayer to your CSS:

.cloudLayer {
	position: absolute;
	left: 50%;
	top: 50%;
	width: 256px;
	height: 256px;
	margin-left: -128px;
	margin-top: -128px;
	background-color: rgba( 0, 255, 255, .1 );
	transition: opacity .5s ease-out;
	-webkit-transition: opacity .5s ease-out;
	-moz-transition: opacity .5s ease-out;
	-o-transition: opacity .5s ease-out;
}

I see the cloud layers, but why are they all flat?

Yeah, this got me too. The parent divs need to have preserve-3d, so add this to your CSS:

#world div {
	transform-style: preserve-3d;
	-webkit-transform-style: preserve-3d;
	-moz-transform-style: preserve-3d;    
	-o-transform-style: preserve-3d;    
}

4. Making the 3D effect work

CSS 3D Clouds Step 4

This section is essentially “make the layers point at the camera”. You still want them projected into the same locations, but you want them to always face the camera, giving you that sense of volume.

Vendor Transforms and Update()

First, here’s all the vendor transforms:

layer.style.transform = t;
layer.style.webkitTransform = t;
layer.style.MozTransform = t;
layer.style.OTransform = t;

Now, you also need to call this update manually once right before the end of your script. So right before the closing script tag, make sure you call this:

update();

Render Loop

Finally, even if you do this, you’ll notice that your layers still don’t point at the camera. You need to add in a function that loops and updates the layers in the #viewport at regular intervals. You could add a call to the update function inside your mouse move event, but we’ll need the loop to get the rotation to work in the next step, so it’s better if you just do this now.

(function() {
	var lastTime = 0;
	var vendors = ['ms', 'moz', 'webkit', 'o'];
	for(var x = 0; x < vendors.length && !window.requestAnimationFrame; ++x) {
		window.requestAnimationFrame = window[vendors[x]+'RequestAnimationFrame'];
		window.cancelRequestAnimationFrame = window[vendors[x]+
			'CancelRequestAnimationFrame'];
	}
	
	if (!window.requestAnimationFrame)
		window.requestAnimationFrame = function(callback, element) {
			var currTime = new Date().getTime();
			var timeToCall = Math.max(0, 16 - (currTime - lastTime));
			var id = window.setTimeout(function() { callback(currTime + timeToCall); }, 
				timeToCall);
			lastTime = currTime + timeToCall;
			return id;
		};
	
	if (!window.cancelAnimationFrame)
		window.cancelAnimationFrame = function(id) {
			clearTimeout(id);
		};
}());

Let’s break this down. I discovered that this function is a polyfill for the browser animation timing spec, via Paul Irish’s awesome blog: http://paulirish.com/2011/requestanimationframe-for-smart-animating/. Previously, when you wanted to animate something, you would set a timer and, every few milliseconds, move something from point A to point B on the screen by a small increment. To get smoother animations, browsers are starting to support this requestAnimationFrame function, which allows several changes to be made and everything to be updated with a single reflow / redraw of the screen. This becomes especially handy when you are updating multiple elements on the screen and you want a clean, responsive display. It also means that the browser can stop animating when you switch tabs, so you don’t eat up battery when someone isn’t looking at your page 🙂

All you really need to know is that this ensures a function exists on the JavaScript window object that tells the browser to “please render an animation frame, and call back to this function when you are done and ready to render the next frame”.
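With the polyfill in place, the render loop itself is just the tutorial’s update() function scheduling itself again at the end. A rough sketch of the shape, assuming the layer elements are kept in a layers array as in the original tutorial (the per-layer transform code is the same as shown above):

function update() {

	for( var j = 0; j < layers.length; j++ ) {
		var layer = layers[ j ];
		// ... compute t and apply the vendor-prefixed transforms shown above ...
	}

	requestAnimationFrame( update );
}

The single manual update() call from earlier is what kicks this loop off.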

5. Final words

CSS 3D Clouds Step 5

Subtle Rotating

Not mentioned in the tutorial, but if you want the clouds to slowly rotate like the demo, you need to add the rotateZ component into the transform update of your layers, like so:

'rotateZ( ' + layer.data.a + 'deg ) \

And you need to add a speed property to your cloud layers when you create them:

speed: .2 * Math.random() - .1
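Then, for the rotation to actually animate, the angle has to advance every frame inside the per-layer update; something like this (a sketch, assuming the tutorial’s layer.data object holds both a and speed):

layer.data.a += layer.data.speed;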

Adding Cloud Images

Instead of this:

var cloud = document.createElement( 'div' );

Use this (or find your own cloud .png; it needs a transparent background to work properly):

var cloud = document.createElement( 'img' );
cloud.style.opacity = 0;
var src = 'http://www.clicktorelease.com/tutorials/css3dclouds/cloud.png';
// Attach the load handler before setting src so a cached image can't
// fire 'load' before we're listening.
( function( img ) { 
	img.addEventListener( 'load', function() {
		img.style.opacity = .8;
	} );
 } )( cloud );
cloud.setAttribute( 'src', src );

And finally, to remove all the debug styles, delete the following lines from your CSS:

From #world remove:

background-color: rgba(255, 0, 0, .2);

From .cloudBase remove:

background-color: rgba( 255, 0, 255, .5 );

From .cloudLayer remove:

background-color: rgba( 0, 255, 255, .1 );

That should cover it! Now go make some happy clouds.

Credits

Everything here is based entirely on the Click To Release CSS 3D Clouds tutorial, expanded to include the missing parts.

Releasing: Comatose WordPress Theme

This has been in the pipe a long time. It started simply as a set of experiments playing with HTML5, layouts, responsive design, page structure, typography, displaying images on mobile screens, the mobile web, minimalism, and my own dislike of the previous design(s) of my site. Finally, I decided to turn it into a full-fledged WordPress theme. This is the result. It powers my site, and the InterKnowlogy development blogs where I work.

So here is the low down:

  • HTML5 layout with CSS3 styling
  • Responsive layout
  • Clean, minimalist design
  • Print styles
  • Clean, readable, and consistent text styling
  • Clean and readable typography
  • Clean display of code
  • Beautiful images
  • Images display at native resolution on regular and high-density screens
  • Widget-enabled sidebar support
  • Nested comment support
  • Gravatar image integration for comments (high-resolution images for high-density screens)
  • Custom menu support
  • Nested sub-sub-sub menu support
  • Graceful degradation in older browsers

But don’t take my word for it. Download links, versions and release notes are all on the project page:
http://www.paulrohde.com/development/comatose/

And all of the source code is hosted on GitHub.

Desktop Screenshots

Comatose Windows with Chrome showing Sidebar

iPad Retina Screenshots

Menu

Comatose iPad Retina Menu


Providing integrity for Encrypted data with HMACs in .NET

Once again I’m about to dip into and scratch the surface of cryptography. So here’s the disclaimer: This is not my job. I don’t do this for a living. Don’t ever make up your own encryption algorithms. Try not to write your own cryptographic code. Don’t take anything I say as the advice of a security expert or legal advice, or assume that any code in this post or any linked posts is correct in any way, shape, or form. If you have a project that requires cryptographic security, I suggest you find someone who has been doing this far longer than I have to write the code for you. Then have several 3rd-party security firms review or write your security code. In short, I’m not responsible for your mistakes, or for the correctness of the code or writing presented here.

As even more of a taste of just how hard this is: I thought my last article covered most of the basics of modern encryption in .NET pretty well. The result? Nope. Missed something big. See, cryptography is hard not because it’s hard to say “take a password plus this data and make it so someone can’t see the data without the password”, but because of all the ways someone could get access to your password, or access your data when it’s not encrypted, or access your computer when it’s unlocked, or crack your password if you pick a bad password to begin with. Take a look at this reddit thread that nicely rips apart my previous article. In all seriousness, I really appreciate the feedback, especially when it comes to security, because it’s hard, and the devil is in the details.

I forgot to cover data integrity in my last article. Encryption is not enough. Redditors referred to it as authentication, which is one of the uses of HMACs, but authenticity is not what’s important in the example case I presented in my original article. We are interested in “how to detect that someone or something accidentally or maliciously changed the encrypted data”; essentially, we’re interested in cryptographic data integrity. In addition, there were a few other things wrong with the article that the redditors pointed out, such as my salt values being overly large, just picking CBC without covering other chaining modes, not discussing how to remove keys from memory, and referencing Jeff Atwood. The long and the short of it: they’re completely right. Getting crypto correct is hard, and good crypto systems are worth millions of dollars. You’re probably not getting paid that much to write a little encryption. So use a library that’s already been written and provides high-level abstractions, and don’t write crypto code yourself.

Alright. HMAC, what is it? The MAC stands for Message Authentication Code; the H stands for Hash. Put it all together and you have a Hash-based Message Authentication Code. It’s a hashing function that’s deliberately designed to resist malicious tampering; the key word is malicious. A normal hash function (e.g. MD5 or SHA1) would detect accidental byte tweaks, but somebody maliciously tampering with your data could tinker with it, create a new hash code for the data, replace the old hash code, and you would never be the wiser. For this reason, Message Authentication Codes generally fall into the category of keyed hash algorithms, since they use a key or derived key and mix it with a hashing or encryption function to produce a value that an attacker can’t reproduce, providing both data integrity and (if you happen to be in a sender and receiver role where both parties share a key) authentication.

There are, however, a couple of things to take into account. First, we shouldn’t use the same key to both encrypt and hash the data. The more times the same key is used, especially against a known piece of data, the more likely an attack can be developed and used to figure out our key. Arguably, the final key used for encryption and the final key used for message authentication should be as different as possible. The best way to do this is to vary the inputs such that the encryption key and the authentication key run through the KDF (Key Derivation Function) from different starting points. As an example, consider the following:

// Derive each key from a hash of the password plus its own salt, with the number of hashing rounds.
var deriveKey = new Rfc2898DeriveBytes(password, passwordSalt, 10000);
var deriveHMAC = new Rfc2898DeriveBytes(password, hmacSalt, 10000);
// This gives us two separate derived byte keys from our password.
var aes256Key = deriveKey.GetBytes(32);
var hmacKey = deriveHMAC.GetBytes(32);

Because the hash function mixes the password with the salt, and because we have different salts, the derived keys will already differ after a single round on account of the salt values. So we have two derived keys: one for the actual encryption, one to prevent tampering and provide data integrity.

On a side note, it’s arguably better to authenticate the encrypted output (encrypt-then-authenticate) rather than authenticate the plaintext then encrypt, or authenticate and encrypt. Again, I know it’s beating a dead horse, but cryptography is hard, and ultimately the security of a system depends on the security of the entire system, not just the individual parts. So: we have an HMAC key, we have an encryption key, and we know that we want to encrypt, then authenticate the encrypted output. In addition, we want to make sure anything else that could easily be tampered with is also authenticated, such as our initialization vector, since any change to it can easily affect our decrypted output. Another small advantage of authenticating the encrypted output instead of the plain text is that we can detect whether anything has changed before we even start decrypting. So here’s an example. In this case I chose HMACSHA1; there are others on MSDN, but I picked this one since it uses the same hash algorithm used internally by the KDF from my previous post, Rfc2898DeriveBytes, aka PBKDF2.

var hmac = new HMACSHA1(hmacKey);
var ivPlusEncryptedText = iv.Concat(cipherTextBytes).ToArray();
var hmacHash = hmac.ComputeHash(ivPlusEncryptedText);

In this case, we’re using our derived hmacKey, and we’re computing the hash of the initialization vector concatenated with our encrypted ciphertext. That gives us everything we need for a self-validating “package” of data that can’t be tampered with without us knowing, unless the attacker knows our key or can break AES-256 encryption, at which point this whole discussion is pointless.
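The article doesn’t show how to store all these pieces together. Since everything except the ciphertext has a fixed length (the salts, the IV, and HMACSHA1’s 20-byte hash), one simple layout, my own choice rather than anything from the original post, is to concatenate them so the package can be sliced apart unambiguously before validating and decrypting:

// Hypothetical vault layout: [passwordSalt][hmacSalt][iv][hmac][ciphertext].
// Everything but the ciphertext is fixed-length, so it splits apart cleanly.
var vaultBytes = passwordSalt
   .Concat(hmacSalt)
   .Concat(iv)
   .Concat(hmacHash)
   .Concat(cipherTextBytes)
   .ToArray();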

With decryption, remember how I said we compute the encryption and HMAC keys separately? Because of that, and because we computed the HMAC over the encrypted data, we can perform the validation step before we compute our decryption key. The only reason to do this is so that if the data is invalid or has been tampered with, we don’t waste time also computing the decryption key. Small things, but I wanted to explain why the key computation for the encryption and the HMAC is kept separate:

var deriveHmac = new Rfc2898DeriveBytes(password, hmacSalt, 10000);
var hmacKey = deriveHmac.GetBytes(32);
var hmacsha1 = new HMACSHA1(hmacKey);
var ivPlusEncryptedText = ivBytes.Concat(encryptedBytes).ToArray();
var hash = hmacsha1.ComputeHash(ivPlusEncryptedText);

if (!BytesAreEqual(hash, hmac))
   throw new CryptographicException( "Your encrypted data was tampered with!" );

var deriveKey = new Rfc2898DeriveBytes(password, passwordSalt, 10000);
var aes256Key = deriveKey.GetBytes(32);

using (var transform = new AesManaged())
{
   using (var ms = new MemoryStream(encryptedBytes))
   {
      using (var cryptoStream = new CryptoStream(ms, transform.CreateDecryptor(aes256Key, ivBytes), CryptoStreamMode.Read))
      {
         var decryptedBytes = new byte[encryptedBytes.Length];
         var length = cryptoStream.Read(decryptedBytes, 0, decryptedBytes.Length);

         var decryptedData = decryptedBytes.Take(length).ToArray();
      }
   }
}
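One helper the snippet above uses but never defines is BytesAreEqual. A naive comparison that returns at the first mismatched byte can leak timing information to an attacker, so the usual approach is a constant-time compare. Here’s a minimal sketch (the name matches the snippet; the body is my own):

private static bool BytesAreEqual(byte[] a, byte[] b)
{
   if (a == null || b == null || a.Length != b.Length)
      return false;

   // OR together the XOR of every byte pair so the loop always runs to the
   // end, taking the same time whether or not the arrays match.
   var difference = 0;
   for (var i = 0; i < a.Length; i++)
      difference |= a[i] ^ b[i];

   return difference == 0;
}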

So, there you go. A basic explanation of why HMACs are important, what I missed, some code, and the disclaimer to write security code at your own risk. Full demo code, demo code output, and a bunch of random links after the break.


More than you ever wanted to know about AES Crypto in .NET

First, before I begin anything, I want to point out that cryptography is hard. REALLY hard. There are so many possible points of failure, and those points of failure and methods of attack change depending on the purpose of encrypting the data. If you do it wrong, you get lawsuits, and/or you end up on the front pages of major news organizations for data security breaches. Or a rival nation gets your top-secret plans to rule the world. You get the idea. I’ll touch on some of the technical points of failure briefly; mainly, I’m going to explore this topic through a fairly simple scenario: a password vault file, encrypted with a password, and stored locally on my computer (potentially synced to other computers through some syncing service).

As with any good crypto, the first line of defense is always preventing an attacker from gaining access to the encrypted data in the first place. If data is going over a network, encryption should be part of that as well, but that brings in a whole new host of issues that are outside the scope of this article. Since I’m planning on storing the data locally on my hard drive, the first line of defense is me and my computer: lock my computer when I walk away, make sure there’s no spyware or malware on it, no keyloggers or other devices that could steal my password, etc… Going back to my original point about crypto being hard: it’s hard not only because writing the code is hard, but because of all the ancillary ways someone could potentially get access to my data that I have to account for.

If I leave my computer unlocked, someone could steal data off of it. Install a keylogger. If I leave this password app open someone could just go in and look at my passwords after I’ve decrypted them. If they have physical access someone could install a HARDWARE keylogger between my keyboard and computer. Pull out my hard drive and clone it. Hack into the company Wi-Fi network and remote access my computer. Or even STEAL my computer. Or TSA could confiscate it. And/or force / legally compel me to reveal my password. You get the idea. There is no such thing as absolute security or secrecy.

However, let’s assume for a moment that an attacker DOES somehow manage to get access to my vault file. (Maybe by getting access to my computer when the vault program is closed and the data encrypted, or because a folder syncing program has a hiccup and spams a public link to it on the Internet.) My password file is in the hands of someone actively trying to get at my data, and the only things protecting the data are my password and good cryptography. So: how do I use good cryptographic practices to secure my data?

Since we want to use the latest and greatest, we pick AES (the Advanced Encryption Standard, successor to the Data Encryption Standard and Triple DES), Rijndael 256 to be specific (it’s good enough for government work…). In the System.Security.Cryptography namespace, Microsoft has kindly supplied us with the RijndaelManaged class, a pre-provided implementation of the AES standard. I ALWAYS recommend using a pre-provided class instead of attempting to roll your own. The ones from Microsoft have gone through EXTENSIVE testing, code review, and security review to verify the correctness of the algorithms and code. Frankly, if you write your own, yours will probably have bugs. Bad ones. Bugs that might get you put on the front page of major news organizations for data security breaches. Warning given, point made, moving on.

Now, there are a few things you’ll need in order to do symmetric encryption when we crack open the RijndaelManaged class and try to create an encryption transform:

  • A Key
  • An Initialization Vector or IV
  • Your Data

The Key

First, the key, which is represented as a byte array of some fixed size. The size of your key is important: the number of bits has to match what the algorithm supports, and in the case of Rijndael, it supports key sizes of 128, 192, and 256 bits. In our case, we’ll be using a key size of 256. The number of bytes we need is 256 divided by 8 (remember, the key size for the algorithm is specified in bits, but our code mainly works with bytes). So if I’m declaring a byte array to hold the key for AES-256, it would be new byte[32].

Derive your Key

Now wait, how the heck do I remember a 32-number combination? I can’t. You probably can’t, and unless you’re a brainiac with an eidetic memory I wouldn’t suggest trying with any reasonable hope of success. So, since I have now come to this sad conclusion about my memory, I need some way to take a password or key that I can memorize and turn it into a 32-byte key. Fortunately, there’s a way to do just that, by way of a password-based key derivation function called PBKDF2. It’s also a standard (RFC 2898, which is only important because the code uses the name of the spec for the class name and not PBKDF2 like I would expect… go figure). Essentially, PBKDF2 is a piece of code that hashes an arbitrary length of text into a pseudo-random key. Now, we could use something like MD5 to do something similar, but PBKDF2 is designed with cryptography in mind, and to be slow on purpose. Why does it need to be slow? Simply this: cryptography is a big huge lever that makes it easy to go one way and hard to go the other. MD5 is FAST to compute, and someone trying to break into your encrypted data wants to be able to test as many passwords as possible as fast as possible. PBKDF2 is slow and more difficult to compute, and the harder it is, the longer it takes to generate a hash, which dramatically reduces the number of passwords per second a hacker can try against your vault. (There’s a really good article by Jeff Atwood here: http://www.codinghorror.com/blog/2012/04/speed-hashing.html on hashing algorithms in security, rainbow tables, speed hashing, why it’s important to have a slow hash for security, and so on, if you’re interested in reading more about it.)

How do we create this hashed key with PBKDF2? The constructor takes 3 things: a string password, an iteration count, and a byte array for something called a salt (not in that order). The password is easy; it’s a string or byte array and it’s whatever you choose to use as your “uB3rAw3$omeP@assw0rd!“. The iteration count is pretty self-explanatory too: it defaults to 1000, and the bigger the number, the more calculation rounds it does and the longer it takes to calculate the hash. But what about this salt thing? What is it, and why is it important? Do I even care? So here we go.

Salt your Vault

If you’ve heard about salt or salting passwords before, it was probably in reference to a website or service, usually some piece of code that you controlled that sat between your users (or a supposed attacker) and the hashed passwords in your database, so that an attacker couldn’t just *make up* a password like zn3lzl that would hash to the same value as a user’s password like mycoolpassword2 and allow them to log in. If we assume that the attacker has access to your password vault, we can probably assume he has access to your program as well, and it won’t be very hard to figure out some static salt value you’ve stored in your code. So it’s useless, right? In some ways yes, in some ways no. IF it’s JUST you, or the attacker is targeting JUST YOUR vault, then yes, it doesn’t matter.

However, if for some reason your password vault app becomes popular, and an attacker gains access to, let’s say, 5000 different people’s vault files, it’s much, much easier to check the same passwords across ALL the vaults at once if the salt values are the same (again assuming he is not targeting one particular vault). If they’re all different, the time required to check a password across all the vaults goes up by a factor of 5000, making it less viable to quickly crack multiple vaults. My thought is to simply store the salt along with the vault, since if an attacker has access to your vault, he probably has access to wherever I’d store the salt anyway, and it at least requires him to compute a separate hash for each vault. So, I’ll generate and store a separate, cryptographically random salt with my vault file.

Something like this:

var salt = new byte[256];

using (var rnd = RandomNumberGenerator.Create())
{
     rnd.GetBytes(salt);
}

As an aside, realize that no amount of encryption can save your data from a bad password. Good hackers are GOING to run huge word lists through fantastic substitution / transposition / combination rules before they even attempt a brute force attack. ‘MyL1ttl3P0ny‘ is not a good password. ‘abc123‘ is arguably much worse; but then, you’re probably not reading this if abc123 is something you’d use for a password.

Now that we have our password, the salt (loaded from our vault file) and the number of iterations, we can derive a key with the correct size:

var deriveBytes = new Rfc2898DeriveBytes(myPassword, mySalt, 10000);
var aes256Key = deriveBytes.GetBytes(32);

Ta-da! Magic. We now have a byte key derived from the password we supplied earlier. Now that we have our key, we can see that when we try to create our encryption or decryption transform, it requires something called an initialization vector (IV). I know when I was going through all this I was thinking “what the heck is an initialization vector?” Since I had to teach myself what it was and why it was important, I’m going to assume somebody else doesn’t know either, and brain dump what I’ve learned on you all as well.

Block Ciphers, Chaining, and Initialization Vectors

First, we have to understand a few things: how a block cipher works, how cipher block chaining works, how the IV plays in, and why it’s all important. To begin with, block ciphers. Most computer ciphers these days operate on cipher blocks of a given bit size, usually a power of two, for example 128 bits or 16-byte chunks (this is actually the block size AES uses). Our clear text gets chunked up into these small, manageable blocks of data. The encryption transform is then run on each block, and the resulting blocks are all concatenated together to form the final encrypted output.

Because encrypted blocks are computed independently, changes somewhere in the original unencrypted data might only change the encrypted data for a few blocks of the resulting output. Furthermore, if you happened to have the same 16-byte chunk repeated such that it aligned with block boundaries, you would get the exact same block of encrypted output. On one hand, this could be useful if you want to send delta updates to an encrypted file; on the other, it means that attacks against individual blocks are feasibly easier, that useful information could be obtained by comparing one version of the vault to another, and that patterns might still exist within the encrypted data. A really awesome example of why it’s important to apply some additional steps during the encryption process with block ciphers is the set of three images below (courtesy of Wikipedia):

Original | No Block Chaining | Proper Cipher Block Chaining

This is where CBC, or cipher block chaining, comes in. CBC fixes this problem by taking the previously encrypted block and performing a bitwise XOR between it and the raw plain-text bytes of the next block to be encrypted. Effectively, this means that a one-byte change in the first block will change EVERY following block in the resulting ciphertext. However, there’s one last piece. If the first couple of blocks are NOT changed in any way, you can still leak some information about changes that were made, by way of the first changed block. So if I have blocks A B C D E and block C gets changed, CBC will cause the resulting ciphertext for blocks D and E to change; it will NOT affect blocks A and B. So, at last, we finally get to the purpose of the initialization vector. The IV is essentially a completely random block to start off the cipher chain. Makes sense now, doesn’t it? If I have a completely random block that I create and pass along, it guarantees that even if the plain text doesn’t change at all, the ciphertext will be completely different each time, assuming I generate a new IV each time I change the cipher text. In practice, encrypted data should be statistically indistinguishable from random noise.
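In symbols (the standard CBC definition, not something specific to this article), with E_K the block cipher under key K, P_i the i-th plaintext block, and C_i the i-th ciphertext block:

C_1 = E_K(P_1 ⊕ IV)
C_i = E_K(P_i ⊕ C_(i-1))   for i > 1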

Illustration of the process of a block cipher with CBC for Encryption:

Illustration of the process of a block cipher with CBC for Decryption:

Now that we know why we need an initialization vector, we also know, or can guess, what size it should be without having to look at the documentation (16 bytes, because that’s the block size for AES, and our IV is essentially a known random starting block). Since we should generate a new IV and also store it with our vault, I’m going to declare our IV and initialize it the same way we initialized our salt:

var iv = new byte[16];

using (var rnd = RandomNumberGenerator.Create())
{
   rnd.GetBytes(iv);
}

The CryptoStream

Now that we have all the pieces, we can put it all together and encrypt our data to a MemoryStream (the target stream could be anything, including a FileStream, but this is easier for a demo with console output; transform here is the RijndaelManaged instance from the full demo below):

using(var ms = new MemoryStream())
{
   using (var cryptoStream = new CryptoStream(ms, transform.CreateEncryptor(aes256Key, iv), CryptoStreamMode.Write))
   {
      cryptoStream.Write(plainTextBytes, 0, plainTextBytes.Count());
      cryptoStream.FlushFinalBlock();
   }

   cipherTextBytes = ms.ToArray();
}

And now the code snippet for decrypting the cipherTextBytes:

using (var ms = new MemoryStream(cipherTextBytes))
{
   using (var cryptoStream = new CryptoStream(ms, transform.CreateDecryptor(aes256Key, iv), CryptoStreamMode.Read))
   {
      var decryptedBytes = new byte[cipherTextBytes.Length];
      var length = cryptoStream.Read(decryptedBytes, 0, decryptedBytes.Length);

      var decryptedText = Encoding.UTF8.GetString(decryptedBytes.Take(length).ToArray());
   }
}

Padding under the covers

Some final notes on block cipher padding. You’ll notice that when I’m reading the stream I have the following line of code: decryptedBytes.Take(length).ToArray(). When you’re using a block cipher like AES, just as the key has to be exactly a certain size, each block of input data ALSO has to be a certain size. This means you ARE going to get some extra data tacked onto the end of your stream that you weren’t anticipating. There are a couple of ways this can be handled: RijndaelManaged has several padding modes, ranging from filling all the additional slots with zeros to filling them with completely random data. By default it uses PaddingMode.PKCS7, which fills each extra byte needed to make the data an even multiple of 16 bytes with the number of padding bytes added to fill the empty space. If your data exactly fills the last block, the padding algorithm adds an entire extra block, so that the last byte read always represents the amount of padding; otherwise, depending on the data you’re storing, you could accidentally lose some of it by having it misinterpreted as padding. The crypto stream is aware of the padding and will return the correct length of the original data on the last read call. I simplified the stream reading to keep the demonstration of the crypto stream and its padding handling clear: it just reads everything and trims the result with decryptedBytes.Take(length).ToArray(). In real life, you should use or write a ‘real’ stream reader that reads data out of the stream in chunks and aggregates them, or just serialize / deserialize your objects directly to and from the crypto stream.
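To make the PKCS7 arithmetic concrete using the demo below: the plain text is 355 bytes, and

355 = 22 × 16 + 3, so the final block gets 16 − 3 = 13 padding bytes, each with the value 0x0D
(22 + 1) × 16 = 368 bytes of ciphertext

which is exactly the encrypted length printed in the sample output.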

Finally:

The Full Demo

using System;
using System.IO;
using System.Linq;
using System.Security.Cryptography;
using System.Text;

public class CryptoDemo
{
    public static void Main(string[] args)
    {
        TestEncryptionAndDecryption();
        
        Console.ReadLine();
    }

    public static void TestEncryptionAndDecryption()
    {
        const string myPassword = "uB3rAw3$omeP@assw0rd!";
        const string myData = "Lorem ipsum dolor sit amet, consectetur adipiscing elit. " +
                              "Morbi rutrum pulvinar purus, nec ornare neque cursus id. " +
                              "Nunc non tortor est. Morbi laoreet commodo tellus, et suscipit neque elementum eu. " +
                              "Sed velit lorem, ultricies id varius vitae, eleifend eget massa. " +
                              "Curabitur dignissim eleifend quam, sit amet interdum velit rutrum vel. " +
                              "Nulla nec enim tortor.";
        var plainTextBytes = Encoding.UTF8.GetBytes(myData);

        Console.WriteLine("Password ({0} bytes): ", Encoding.UTF8.GetBytes(myPassword).Length);
        Console.WriteLine(myPassword);
        Console.WriteLine();

        Console.WriteLine("Plain Text ({0} bytes): ", plainTextBytes.Length);
        Console.WriteLine(myData);
        Console.WriteLine();

        var mySalt = new byte[256];
        var iv = new byte[16];
        
        using (var rnd = RandomNumberGenerator.Create())
        {
            rnd.GetBytes(mySalt);
            rnd.GetBytes(iv);
        }

        var transform = new RijndaelManaged();

        Console.WriteLine("Salt ({0} bytes): ", mySalt.Length);
        Console.WriteLine(Convert.ToBase64String(mySalt));
        Console.WriteLine();
        Console.WriteLine("Initilization Vector ({0} bytes): ", iv.Length);
        Console.WriteLine(Convert.ToBase64String(iv));
        Console.WriteLine();

        // Derive the passkey from a hash of the password plus salt with the number of hashing rounds.
        var deriveBytes = new Rfc2898DeriveBytes(myPassword, mySalt, 10000);

        // This gives us a derived byte key from our password.
        var aes256Key = deriveBytes.GetBytes(32);

        Console.WriteLine("Derived Key ({0} bytes): ", aes256Key.Length);
        Console.WriteLine(Convert.ToBase64String(aes256Key));
        Console.WriteLine();
        
        byte[] cipherTextBytes;

        using (var ms = new MemoryStream())
        {
            using (var cryptoStream = new CryptoStream(ms, transform.CreateEncryptor(aes256Key, iv), CryptoStreamMode.Write))
            {
                cryptoStream.Write(plainTextBytes, 0, plainTextBytes.Count());
                cryptoStream.FlushFinalBlock();
            }

            cipherTextBytes = ms.ToArray();

            Console.WriteLine("Encrypted Text ({0} bytes): ", cipherTextBytes.Length);
            Console.WriteLine(Convert.ToBase64String(cipherTextBytes));
            Console.WriteLine();
        }

        // This resets the algorithm. Normally, with separate encrypt / decrypt
        // methods, we would create a new RijndaelManaged instance for each operation.
        transform.Clear();

        using (var ms = new MemoryStream(cipherTextBytes))
        {
            using (var cryptoStream = new CryptoStream(ms, transform.CreateDecryptor(aes256Key, iv), CryptoStreamMode.Read))
            {
                var decryptedBytes = new byte[cipherTextBytes.Length];
                var length = cryptoStream.Read(decryptedBytes, 0, decryptedBytes.Length);

                var decryptedText = Encoding.UTF8.GetString(decryptedBytes.Take(length).ToArray());

                Console.WriteLine("Decrypted Text ({0} bytes): ", decryptedText.Length);
                Console.Write(decryptedText);
                Console.WriteLine();
            }
        }
    }
}

Sample Output

Password (21 bytes):
uB3rAw3$omeP@assw0rd!

Plain Text (355 bytes):
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Morbi rutrum pulvinar p
urus, nec ornare neque cursus id. Nunc non tortor est. Morbi laoreet commodo tel
lus, et suscipit neque elementum eu. Sed velit lorem, ultricies id varius vitae,
 eleifend eget massa. Curabitur dignissim eleifend quam, sit amet interdum velit
 rutrum vel. Nulla nec enim tortor.

Salt (256 bytes):
EA0wFLH3/VvAjh72dGmzd9taoFkPFymo5jKjONW5sozvPaHSuNFJdRE46pG7oc8JrvFIxfsyoNl+FSD9
iSM+jkVLLCpUB5Dr+JczPh9ZQIeLcFdk9mSP1hlekQhZOJu+Hs4+AGvkQNtxWTjlHZsQBU/uiSoDBAaB
4UyVKpv2ziHUePvWA+A9+pQLQ9YPYNQYYo+thfPOVAZMPejkPkIaBjEZBIeKVfLTN2339FJwWfVCz28H
edurTTVmuhysreMnSGWSuhijYkdNloOY4dam7pAg2sJhPAJonZ8UiBQ0VmPjNJsWclidaR/JB9reLPXz
AVoZgpVbcIJ3ntXzmYyIpg==

Initialization Vector (16 bytes):
JDkX+NZLCjW1IXaV8qHmFA==

Derived Key (32 bytes):
JJN8Ms4s4On7sSHpDGm+TZkEbJPUhV2WJ2RxhWUpjus=

Encrypted Text (368 bytes):
7invqU61t5YP6R5ZzLo80jhEUBDAnu3MT/PoRgPc0plM+XyTH/vVNy9Vs6wwaamBPRhW0i6mWHrs2UxV
M65DuBFB3WLGyUfPEuJO2q37NWhWshkozMnY/fRM6reKQbVv8r5fLNPaDpf/JrJnRQfbK573yIBLOAN6
1xNUkXRH0xamUBMV1M4orQ1aa/6Z00ziHKTNKilsDJ9S5AwP5qMpYk9clnQd6UgSPEC+w1vv58Ca7Zkf
6KTXTnUGWIDK0mmJIU0/vJeeijPrcvgo1IJj2CCJfMLXhrmCCX4VZw5ahMZr+3d2YzYO6qfpoaMFIoJn
h52Qs0KzgYaNR1tUIHrqGJPAEiBabtW8NnmsnzTRdZtrGpTe+aZddztpXyqNhsqxsxmvxHNPKjWIxAAx
SWRyFQsWVs9uvCadS8dus3og5pU10HxfGL8WNGVtzy+hJ30bROJ73DyukxWtlX1kzeUc7lOUCh8kZKkG
KOArExGY2JY=

Decrypted Text (355 bytes):
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Morbi rutrum pulvinar p
urus, nec ornare neque cursus id. Nunc non tortor est. Morbi laoreet commodo tel
lus, et suscipit neque elementum eu. Sed velit lorem, ultricies id varius vitae,
 eleifend eget massa. Curabitur dignissim eleifend quam, sit amet interdum velit
 rutrum vel. Nulla nec enim tortor.

That concludes my epic tour de AES in .NET. It’s like writing papers in college all over again. Hope somebody finds it useful.

Font Hybridization in HTML 5

Over this Christmas break, I was over at a friend’s house and got to sit down and tinker with my friend’s Mac. I’d forgotten how good fonts can look, and I suddenly realized why so many sites are now using custom web fonts. Since I primarily use Windows at work and at home, I get annoyed by fonts that are difficult to read or that don’t render well on screen. Somehow, I’ve never been 100% happy with font rendering on Windows. It’s been getting better, but it’s still not as good as on a Mac. Maybe it’s the screen, maybe it’s the OS, maybe it’s the app, maybe it’s a combination of all of the above. Because I’ve been on this HTML kick, and I’m on Windows, I’ve tended toward Cufon as my font replacement tool of choice when building sites, since it’s the only one that produces a more reasonable output on my machine.

But Cufon, by its nature, has several drawbacks. First, you can’t copy and paste text rendered with Cufon. With most of the newer browsers you can select it, but there’s not much more you can do beyond that. So I usually limit Cufon usage to titles, headers, and the more “design-ery” aspects of a page. Second, because Cufon is image based, if someone zooms in beyond 100%, Cufon-rendered text gets all fuzzy the same way it would if you zoomed in on an image. And finally, because Cufon renders with JavaScript on the client, there’s no way to cache the rendered text. JavaScript is fast, but when you’re rendering a large amount of text on a phone, there’s usually no good way around a flash of unstyled content. And it happens each time you go to a new page, because the result can’t be cached.

Web fonts, on the other hand, allow you to use fonts in much the same manner as if you had them installed on your device, with native rendering by the OS. You can select, copy, paste, and use the text as you would any other. Web fonts are cacheable, so although you could get a flash of unstyled content when you first visit a page, subsequent visits should render immediately. The disadvantage, however, is the OS. On Windows, the rendering of web fonts just… sucks. So people don’t use them.

But what if there was a way to do a hybrid between Cufon and Web Fonts?

Besides the rendering issue, web fonts are the best option. They’re the most flexible and the most future proof. But they still suffer on Windows, some phones, and older browsers that don’t support the new @font-face CSS syntax. So what if we did a hybrid? Use @font-face, then fall back on Cufon for older browsers and for Windows. The advantage is that on a new browser the @font-face fonts will be cached and used for the initial rendering of the page, before being cleaned up with Cufon later where needed.

Using Modernizr and a little custom JavaScript to do user-agent testing, I put together a page to test out the hybrid font idea:

http://www.paulrohde.com/demo/cufon-hybrid/demo.html

It’s a rough draft. I have some screenshots below from both Macs and PCs (click for full-size versions).

Mac

PC

For simplicity, the JavaScript code to test for and load Cufon, plus my custom Cufon polyfill, looks like this:

// Note: despite the name, this returns true on platforms whose native
// font rendering is poor (Windows, Android); the result is negated below.
function rendersNiceFontface() {
	var result = navigator.appVersion.indexOf("Win") != -1 
		|| navigator.appVersion.indexOf("Android") != -1;
	return result;
}

var supportsNiceFontface = !rendersNiceFontface();

Modernizr.load([
	{
		test : Modernizr.fontface && Modernizr.canvas && supportsNiceFontface,
		
		nope : [ 'cufon-yui.js', 'BebasNeueRegular_400.font.js', 'cufon-polyfill.js' ]
	}
])

Using Modernizr’s yepnope.js loader, I’m able to completely skip loading Cufon if the browser supports good @font-face rendering. There’s more I’d have to do to clean it up before I’d use it in a real setting, but it demonstrates the concept, and it’s something I could definitely use later as a @font-face polyfill. It does have some drawbacks, though: you have to maintain both your CSS rules and your Cufon replacement calls, and Cufon doesn’t work well with a large amount of body text, so if the browser doesn’t support @font-face, I’d fall back to a good secondary font and forgo Cufon in those cases.

I hope this got some gears turning, I’m looking forward to some comments.

CSS Is A Lie.

According to Wikipedia:

Cascading Style Sheets (CSS) is a style sheet language used to describe the presentation semantics (the look and formatting) of a document written in a markup language.

Furthermore:

CSS specifies a priority scheme to determine which style rules apply if more than one rule matches against a particular element. In this so-called cascade, priorities or weights are calculated and assigned to rules, so that the results are predictable.

From my informal office survey, the general consensus is that the most important thing™ in CSS is that everything ‘cascades’ correctly. That is, that rules and styles defined further down in a CSS document override styles specified further up in the document (excluding the !important operator of course, which you should not be using anyway). It makes sense if you think about it; after all, it’s not called a Cascading Style Sheet for nothing.

Now, the test. Given the following HTML and CSS snippet, what color will the two paragraphs be?

HTML

<p id="myid" class="myclass">
   Hello World.
</p>

<p class="myclass">
   This is another line of text
</p>

CSS

#myid {
   color: red;
}

.myclass {
   color: blue;
}

p {
   color: green;
}

The Answer:
Hello World is red, the second line of text is blue.

Don’t believe me? I put up a demo page with just the HTML and CSS here.

After a decent amount of Googling, reading blogs, and the W3C spec on CSS selectors, I realize now how styles are calculated and applied, but it seems to go completely against the ‘cascading’ nature of style sheets, since the cascade only applies when the ‘specificity’ values are the same.

For example, the following snippet behaves as you would expect:

CSS

p {
   color: gray;
   font-size: 20px;
}

/* ... later ... */

p {
   color: green;
}

As you might expect, the paragraph will be green and have a font size of 20px. A rule with more specific selectors takes precedence over rules that are less specific, regardless of where the rules are declared in the style sheet. You can mentally calculate how specific a rule is by the ‘level’ and number of the selectors used. From low to high, you have:

  1. Element and pseudo elements such as p, a, span, div, :first-line
  2. Class, attributes, pseudo-classes .myclass
  3. Id attributes #myelement #theonething
  4. From inline style attributes
  5. !important

If one rule has anything of a higher level, it overrides the lower-level style. If the rules are at the same level, the higher count wins. And if they’re at the same level with the same count (.myclass and .myotherclass, say), then the one further down takes precedence; this is the “only” time cascading actually happens.
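To make the counting concrete, here’s a quick sketch (the compound selectors are my own examples): tally (ids, classes, elements) for each selector and compare level by level, highest level first.

p         { color: green; }  /* (0,0,1): one element */
div p     { color: gray;  }  /* (0,0,2): two elements, beats p */
p.myclass { color: blue;  }  /* (0,1,1): one class outranks any number of elements */
#myid     { color: red;   }  /* (1,0,0): one id outranks any number of classes */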

It’s something that is unfortunately very subtle because of the way we write CSS: you’re taught from the beginning to start with the most basic, generic styles and work your way toward the more specific ones. While this is correct, it’s very easy to go a long time without running into a situation where an earlier, more specific rule trumps a later, more general one.

I had no idea up until about a week ago that CSS worked like this. I always assumed that specificity was only used to select the subset of elements the rule applied to, and that if you applied a more general rule after a more specific rule, the more general rule would overwrite anything earlier. This is obviously not the case. If you want to read more, here are some links to a few more comprehensive articles on the subject:

CSS Specificity and Inheritance: http://coding.smashingmagazine.com/2010/04/07/css-specificity-and-inheritance/

Star Wars and CSS Specificity: http://www.stuffandnonsense.co.uk/archives/css_specificity_wars.html

Specifics on CSS Specificity: http://css-tricks.com/855-specifics-on-css-specificity/

Art and Zen of CSS: http://donttrustthisguy.com/2010/03/07/the-art-and-zen-of-writing-css/

EventHandler<T> or Action<T>

If you’ve used C# for any length of time, you’ve used events. Most likely, you wrote something like this:

public class MyCoolCSharpClass {
     public event EventHandler MyCoolEvent;
}

public class MyOtherClass {
     public void MyOtherMethod(MyCoolCSharpClass obj)
     {
          obj.MyCoolEvent += WhenTheEventFires;
     }

     private void WhenTheEventFires(object sender, EventArgs args)
     {
          Console.WriteLine("Hello World!");
     }
}

Later, you need parameters to be passed in along with the event, so you changed it to something like this:

public event EventHandler<MyEventArgs> MyCoolEvent;

public class MyEventArgs : EventArgs 
{
     public string Name { get; set; }
     public DateTime WhenSomethingHappened { get; set; }
}
...
     private void WhenTheEventFires(object sender, MyEventArgs args)
     {
          var theCoolCSharpSendingClass = (MyCoolCSharpClass)sender;
          Console.WriteLine("Hello World! Good to meet you " + args.Name);
     }

You add two or three more events, some property changed and changing events, and suddenly a class with about 4 properties, 3 events, and a little bit of code now has 3 supporting EventArgs classes, plus a cast every time you need the sender class instance (in this example, I’m assuming the event is always fired by MyCoolCSharpClass, and not through a method on a 3rd class). That’s a lot of code to maintain for a simple class with some very simple functionality.

Let’s look at this for a minute. First, EventHandler and EventHandler<T> are simply delegates, nothing more, nothing less (if you’re not sure what a delegate is, don’t sweat it; it’s not really the point of this discussion). What makes the magic happen for events is that little event keyword prefacing the delegate, which internally turns the delegate type into a subscribe-able field. Essentially, it simplifies adding and removing multiple methods that are all called when the event is invoked. With the introduction of generics in C# 2.0 and of LINQ in 3.5, we have generic forms of most of the delegates we could ever need, in the form of Action<T1, T2, T3...> and Func<T1, T2, ..., TResult>. What this means is that we can change an event declaration to use whatever delegate we want. Something like this is perfectly valid:

public event Action<MyCoolCSharpClass, string, DateTime> MyCoolEvent;

And what about when we subscribe? Well, now we get typed parameters:

...
     private void WhenTheEventFires(MyCoolCSharpClass sender, string name, DateTime theDate)
     {
          Console.WriteLine("Hello World! Good to meet you " + name);
     }
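One piece I didn’t show (my own addition, not from the original snippets): raising an Action-based event from inside MyCoolCSharpClass looks like any other delegate invocation, with a null check for the no-subscribers case:

     public void DoSomethingCool()
     {
          // Copy to a local first so the last subscriber can't unsubscribe
          // between the null check and the invocation.
          var handler = MyCoolEvent;
          if (handler != null)
               handler(this, "Paul", DateTime.Now);
     }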

That’s cool. I’ve now reduced the amount of code I have to maintain from 4 classes to 1, and I don’t have to cast my sender. As a matter of fact, I don’t even have to pass a sender. How often have you written an event like this:

public event EventHandler TheTableWasUpdatedGoCheckIt;

Whoever is subscribed to this event doesn’t care who sent it, or what data specifically was updated; all the subscriber cares about is that it fired, nothing more. Even then, in a “you can only use the EventHandler delegate” world, you’re still stuck creating a method to subscribe to the event that looks like this:

     private void WhenTheTableWasUpdated(object sender, EventArgs args)
     {
          // Go check the database and update stuff...
     }

If we use what we’ve learned and change the event to something like this:

public event Action TheTableWasUpdatedGoCheckIt;

We can write our method like this:

     private void WhenTheTableWasUpdated()
     {
          // Go check the database and update stuff...
     }

Since we never cared about the parameters in the first place.

That’s all awesome, fine, and dandy, but blindly replacing every EventHandler delegate with an Action isn’t always the best idea. There are a few caveats:

First, there are some practical limitations of using Action<T1, T2, T3...> versus a derived class of EventArgs; three main ones come to mind:

  • If you change the number or types of parameters, every method that subscribes to that event will have to be changed to conform to the new signature. If this is a public-facing event that 3rd-party assemblies will be using, and there is any possibility that the number or type of arguments would change, it’s a very good reason to use a custom args class that can later be extended to provide more parameters. Remember, you can still use an Action<MyCustomClass>, but deriving from EventArgs is still the Way Things Are Done.
  • Using Action<T1, T2, T3...> will prevent you from passing feedback BACK to the calling method unless you have some kind of object (with a Handled property, for instance) that is passed along with the Action, and if you’re going to make a class with a Handled property, making it derive from EventArgs is completely reasonable.
  • You don’t get named parameters with Action<T1, T2, etc...>, so if you’re passing 3 bools, an int, two strings, and a DateTime, you won’t immediately know the meaning of those values. Passing a custom args class gives those parameters meaning.

Secondly, consistency implications. If you have a large system you’re already working with, it’s nearly always better to follow the way the rest of the system is designed unless you have a very good reason not to. If you have publicly facing events that need to be maintained, the ability to substitute derived args classes might be important.

Finally, real-life practice. I personally find that I tend to create a lot of one-off events for things like property changes that I need to interact with (particularly when doing MVVM with view models that interact with each other), or where the event has a single parameter. Most of the time these events take the form public event Action<[classtype], bool> [PropertyName]Changed; or public event Action SomethingHappened;. In these cases, there are two benefits, which you might be able to guess from what you’ve already seen.

  • I get a type for the issuing class. If MyClass declares and is the only class firing the event, I get an explicit instance of MyClass to work with in the event handler.
  • For simple events such as property change events, the meaning of the parameters is obvious and stated in the name of the event handler and I don’t have to create a myriad of classes for these kinds of events.

Food for thought. If you have any comments, feel free to leave them in the comment section below.

Line counting in PowerShell

Quick Tip: If you want to do a line count on a project, a really easy way to do it is with a simple PowerShell command:

(dir -include *.cs,*.xaml -recurse | select-string .).Count

Add extension types as necessary. Note that it DOES include comments in the line count.
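If you want a slightly less naive number, you can also filter out whitespace-only lines and single-line // comments; a rough sketch (it won’t catch /* */ block comments):

(dir -include *.cs,*.xaml -recurse | select-string "\S" |
    where { $_.Line -notmatch '^\s*//' }).Count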

Here’s the Stack Overflow question this originated from. I needed something that would run outside of the main solution to take into account all the additional projects and files, so a plugin wasn’t going to work:

http://stackoverflow.com/questions/1244729/how-do-you-count-the-lines-of-code-in-a-visual-studio-solution

Designing For Code: Copy & Paste

As a developer, I’m constantly on the web: learning, reading, finding solutions to problems, or looking up odd bugs. There is a great community of developers constantly publishing solutions / fixes / workarounds / techniques for the benefit of their fellow developers. Being developers, you would expect the sites they run to be easy to use across the board! For the most part they are, but every now and again I run across a site that makes me want to bash my head against the monitor. Why? Because it breaks the simplest feature:

Copy. Paste.

If you EVER post code on the web on a site you control, or if you are in charge of a site where you or others post code: Make. Sure. Copy. Paste. Works. Period. I’m not just talking about ‘I can select it and paste it’ functionality; I’m referring to those ‘code highlighting’ plug-ins that add line numbers to EVERY SINGLE LINE of a code snippet when you paste.

Don’t Do It.

There are lots of good highlighting plug-ins out there, sites that will format your code for you, and so on. Whatever method you choose, test it and make sure visitors can use the code from your site without having to modify it or do anything to it. If they can’t, abandon it. Find something else. Use raw text. Whatever it takes to get the basics working well.

In addition, make sure your code works well when the site is scaled up or down. If you have a site that doesn’t change width as the browser size changes, ignore this and read this post on fluid layouts: http://blogs.interknowlogy.com/2011/10/25/on-modern-layo…mi-fluid-grids/. IF your site changes width, your code should ALWAYS work at whatever size the visitor is viewing the site. Check it on a smart phone. Check it on a tablet. Stretch your site across two wide-screen monitors. If it ‘breaks’ (as in becomes unreadable or inaccessible), fix it. Go back to the basics. Remove / alter / don’t use a plugin. Again, do whatever it takes to make the basics work.

Don’t underestimate the importance of the small, supposedly insignificant details. In all honesty, users aren’t going to come to your site and praise you for what an elegant job you did on making your code easy to copy and paste, or how nice the content looks, or how readable the article is. It’s similar to the way people notice spelling and grammar. If your sentences are well formed and the article is well structured, the person reading should never have to consciously think about it. On the other hand, if an article is not well crafted, that person will have to wade through the mistakes and mentally jump between what was actually written and what the author intended. In the same way, whatever type of content you present to a visitor, it should always be presented in such a way that the person consuming it can do so without ANYTHING getting in the way.

Think about it. Go check your site. Make the net a better place.

Invisible extra line height

I came across this annoying off-by-one-pixel layout issue several days ago when dealing with <code> tags. You may not even guess at first that they’re the cause, but if you ever test your layout with a baseline script, you may notice this.

Take a good look at the following screenshot and tell me, are these two paragraphs the same height?

The correct answer is no, they’re not. If you look at the line height, the font size, almost everything, you’ll find that these two paragraphs are exactly the same, with the exception of the <code> tags in the second paragraph. Digging in deeper, you’ll see the following problem with the paragraph height:


Four. Pixels. Taller. *expletive*. Most people wouldn’t care, or even notice, unless they were using a baselining tool like Typesetter. The solution, after much digging around, is to set the line height of <code> tags to zero:

code {
     line-height: 0;
}

And like magic, all of a sudden your paragraphs behave correctly. This may be classified as a bug; at the very least, it’s an annoyance. But if you find your paragraph line heights not behaving quite right, take a look and give this a shot.