# Wednesday, 09 September 2009

Steve Krug’s Don’t Make Me Think is a great book, not only for what it contains, but for what it does not contain.

The book runs only a couple hundred pages (most of them filled with large graphics), so Krug is forced to be concise in order to deliver his message. There is no room for irrelevant material in so little text. Happily for the reader, he succeeds brilliantly.

“Don’t Make Me Think” is not just the title of this book: it is the single most important point Krug makes about web usability design.

Throughout the book, he emphasizes that a good user interface should be self-evident. A user seeing a web page for the first time should not have to wonder what the page is for or how to use it.

He provides many examples to illustrate his points – most from actual web sites. Krug holds up Amazon.com as an example of a site that is doing many things right, making itself intuitive for the users. It’s tough to argue this point, given Amazon’s success and enormous growth over the years.

According to Krug, most web designers make the mistake of assuming that visitors to their site will read everything on each page presented to them. The reality is that most visitors quickly scan a page, searching for anything that looks relevant to them. When they find something that seems useful and clickable, they click it. When they actually find something useful, they stop looking.

Because of this behavior, web designers should focus on simplifying their page layout and drawing the reader’s eye to the most important parts of the page, the parts that support the most common activities. They should provide clear, self-evident labels for the items on their pages: there should be no confusion about what each item is for and what will happen if a user clicks it or otherwise interacts with it.

But following his design advice is not sufficient. Krug also recommends recruiting testers and observing them as they use your web design. Watch how they interact with the pages; note the pages they struggle to learn; document unexpected behavior. A designer does not always think like an end user, and users often react in unexpected ways. This type of testing is a good way to learn how end users perceive and interact with your site.

Check out this book if you want a quick way to improve the usability of your web sites.

Wednesday, 09 September 2009 22:20:12 (GMT Daylight Time, UTC+01:00)
# Tuesday, 08 September 2009

Episode 48

In this interview, Phil Japikse discusses his involvement with Hopemongers.org, a charity site focused on "micro-giving", allowing donors to give a small amount of money, directly to a charitable project.

Tuesday, 08 September 2009 06:11:52 (GMT Daylight Time, UTC+01:00)
# Saturday, 05 September 2009

Back To Basics

Extension methods are a new feature of C# 3.0, and they are easier to use than they first appear.

An extension method is a method that is external to an existing class but appears as if it were a method on that class.

The rules for creating an extension method are simple.

  1. Create a static method inside a static class.
  2. Make the type of the method's first parameter the class you wish to extend.
  3. Precede the type of this first parameter with the "this" keyword.
  4. Call the method as if it were a method of the extended class, omitting the first parameter.

An example should clarify this. Assume we have a class Customer with properties FirstName and LastName as shown below

    public class Customer
    {
        public string FirstName { get; set; }
        public string LastName { get; set; }
    }

We can create a new static class MyExtensions with a static method GetFullName that returns the formatted first and last name of the customer. We do so with the following code

    public static class MyExtensions
    {
        public static string GetFullName(this Customer cust)
        {
            string custName = cust.FirstName + " " + cust.LastName;
            return custName.Trim();
        }
    }

Notice the parameter with the "this" keyword. That parameter format tells the compiler that this is an extension method and that it should extend the Customer class. As long as MyExtensions is in the same namespace or in a namespace available to our code (via the "using" statement), we can call this new extension method with the following code

Customer cust = new Customer { FirstName = "David", LastName = "Giard" };
string fName = cust.GetFullName();

After this code runs, fName contains:

   David Giard

As you can see in the above code, it looks as if the GetFullName method is part of the Customer class.

We can add parameters to our extension methods as we would to any other method. The first parameter (with the “this” keyword) is always used to specify the class we are extending. All other parameters act just like normal parameters. The following extension method accepts a parameter “salutation”.

public static string GetGreeting(this Customer cust, string salutation)
{
    string custName = cust.FirstName + " " + cust.LastName;
    custName = custName.Trim();
    return salutation + " " + custName + ":";
}

Although the extension method has two parameters, we only need to pass the second parameter when calling it, as shown

Customer cust = new Customer { FirstName = "David", LastName = "Giard" };
string greeting = cust.GetGreeting("Dear");

After this code runs, greeting contains:

   Dear David Giard:

In our examples, we were adding extension methods to a class that we just created. Of course, in this case, it would have been simpler to just modify the original class.  But extension methods are more useful if you are working with someone else’s class and modifying the source code is not an option. Extension methods often offer a simpler solution than inheriting from an existing class.

The real power of extension methods comes from the fact that you can even add methods to sealed classes. It is difficult to add functionality to a sealed class because we cannot inherit from it. Change the Customer class to sealed and re-run the code to prove that it still works.

public sealed class Customer
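As a further illustration (a hypothetical example of my own, not part of the original sample), extension methods work just as well on sealed classes we don't own at all, such as the built-in string class. The StringExtensions class and its Reverse method below are invented names for this sketch:

```csharp
using System;

// A hypothetical extension on the built-in string class, which is
// sealed, so we could not add this method through inheritance.
public static class StringExtensions
{
    // Returns the characters of the string in reverse order.
    public static string Reverse(this string s)
    {
        char[] chars = s.ToCharArray();
        Array.Reverse(chars);
        return new string(chars);
    }
}

public class Program
{
    public static void Main()
    {
        // Called as if Reverse were a member of string itself.
        Console.WriteLine("Giard".Reverse());   // draiG
    }
}
```

Because string is sealed, an extension method is essentially the only way to get this member-style calling syntax.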

Here is all the code in the above sample:

using System;

namespace TestExtensionMethods
{
    class Program
    {
        static void Main(string[] args)
        {
            Customer cust = new Customer { FirstName = "David", LastName = "Giard" };

            string fn = cust.GetFullName();

            string greeting = cust.GetGreeting("Dear");
        }
    }

    public sealed class Customer
    {
        public string FirstName { get; set; }
        public string LastName { get; set; }
    }

    public static class MyExtensions
    {
        public static string GetFullName(this Customer cust)
        {
            string n = cust.FirstName + " " + cust.LastName;
            return n.Trim();
        }

        public static string GetGreeting(this Customer cust, string salutation)
        {
            string custName = cust.FirstName + " " + cust.LastName;
            custName = custName.Trim();
            return salutation + " " + custName + ":";
        }
    }
}

You can download the sample code at TestExtensionMethods.zip (24.26 KB)


Saturday, 05 September 2009 02:52:43 (GMT Daylight Time, UTC+01:00)
# Thursday, 03 September 2009

Recently, I was asked to automate the process of checking a set of known URLs and determining if each URL corresponded to a “live” site. For our purposes, a site is live if I can PING it and get a reply back.

I can open a command prompt and use the PING command and read the response to determine if a site is live. A live site would return a series of messages starting with “Reply from”, while a non-existent site would report an error.

Unfortunately it is difficult to automate this task from the command prompt. Fortunately, the .Net framework provides the tools to allow me to ping a URL with just a few lines of code. The functionality I need is in the System.Net.NetworkInformation namespace.

I have created a public class PingUtils and added the statement

using System.Net.NetworkInformation;

at the top of this class.

Next, I added the following method to attempt to ping a URL and return true if successful.

public bool UrlIsLive(string url, int timeOut)
{
    bool pingSuccess = false;
    Ping ping = new Ping();
    string pingData = "TEST";
    // Encoding lives in System.Text, so that namespace must also be referenced
    byte[] pingDataBytes = Encoding.ASCII.GetBytes(pingData);
    try
    {
        PingReply reply = ping.Send(url, timeOut, pingDataBytes);
        if (reply.Status == IPStatus.Success)
        {
            pingSuccess = true;
        }
    }
    catch (PingException)
    {
        pingSuccess = false;
    }
    return pingSuccess;
}

That’s it. If an error occurs when I try to ping, it is most likely a PingException, which is equivalent to the "Ping request could not find host" error reported at the command prompt.

This function returns true for a URL that exists and is live; and false for one that does not exist.

The following unit tests should demonstrate this:

/// <summary>
/// A positive test for UrlIsLive
/// </summary>
[TestMethod]
public void IsLive_PingGoodUrl_ShouldReturnTrue()
{
    PingUtils pu = new PingUtils();
    string url = @"DavidGiard.com";
    int timeOut = 1000;
    bool siteIsLive = pu.UrlIsLive(url, timeOut);
    Assert.IsTrue(siteIsLive, "PingUtils.UrlIsLive did not return true as expected");
}

/// <summary>
/// A negative test for UrlIsLive
/// </summary>
[TestMethod]
public void IsLive_PingBadUrl_ShouldReturnFalse()
{
    PingUtils pu = new PingUtils();
    string url = @"notDavidGiard.com";
    int timeOut = 1000;
    bool siteIsLive = pu.UrlIsLive(url, timeOut);
    Assert.IsFalse(siteIsLive, "PingUtils.UrlIsLive did not return false as expected");
}

It’s worth pointing out a couple of limitations of this function:

  • Some sites reject all PING requests as a way to protect themselves against denial-of-service attacks. For example, if you PING Microsoft.com, it will not reply, even though the site exists.
  • As with any program that uses networking, the internal firewall rules where the program runs may affect the success of the program.
  • The PING command only checks that a host exists, even if the URL returns an error page. So foo.DavidGiard will reply to a PING request because my hosting provider redirects this to an error page.

Even given those limitations, this can be a very useful function for testing whether all the links stored in your database are still valid.
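As a sketch of that idea (hypothetical code of my own, not part of the original sample), a batch check over a list of stored URLs might look like the following. The liveness probe is passed in as a delegate so the pattern is easy to test; in a real run you would pass a method like UrlIsLive with a suitable timeout:

```csharp
using System;
using System.Collections.Generic;

public static class LinkChecker
{
    // Runs the supplied liveness probe against each URL and returns
    // the URLs that failed the check.
    public static List<string> FindDeadLinks(
        IEnumerable<string> urls, Func<string, bool> isLive)
    {
        List<string> dead = new List<string>();
        foreach (string url in urls)
        {
            if (!isLive(url))
            {
                dead.Add(url);
            }
        }
        return dead;
    }
}

public class Program
{
    public static void Main()
    {
        // Illustrative data only; a real run would read URLs from the
        // database and pass a probe that actually pings each host, e.g.
        // url => new PingUtils().UrlIsLive(url, 1000)
        List<string> urls = new List<string> { "a.example", "b.example" };
        List<string> dead = LinkChecker.FindDeadLinks(urls, u => u == "a.example");
        Console.WriteLine(dead.Count);   // 1
    }
}
```

Keeping the probe as a delegate also means the batch logic can be unit tested without any network access.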

You can download the code here.

Thursday, 03 September 2009 16:23:51 (GMT Daylight Time, UTC+01:00)
# Wednesday, 02 September 2009

Episode 47

Leon Gersing is a tall, heavy concoction of rubber with a surprising lightness of gait, especially in a dance.

He is also a Ruby developer and he loves it. In this interview, he shares why he prefers Ruby over Visual Basic and C# as a language to build his applications.

17 mins, 15 secs

Wednesday, 02 September 2009 13:01:55 (GMT Daylight Time, UTC+01:00)
# Tuesday, 01 September 2009

Although I've resigned myself to the inevitability that I will never know everything about every technology, that does not excuse me from having to know a little about everything. As a consultant, I need to be aware of what is going on in the industry: I need to speak intelligently about different products and I often have to make educated choices among available technologies.

This is why I'm an avid listener of tech podcasts. The recent explosion of available podcasts has helped me stay aware of technology, and to do so in the limited time available to me.

Below is a list of the technology podcasts to which I currently subscribe. The list is presented in no particular order, but I recommend each one. If I didn't like a podcast, I stopped listening to it, and it doesn't appear in this list.

.Net Rocks

This is the first podcast that I started listening to and it remains one of my favorites. Carl Franklin and Richard Campbell have been hosting this show for so long that they have access to almost anyone who does anything related to Microsoft development. The quality of their guests is top notch and I never miss an episode.


Hanselminutes

Scott Hanselman is probably my favorite interviewer among tech podcasters. When I do interviews for my show, my goal is to sound as much like Scott as possible. He draws out the guests because he either understands their topic in advance or he quickly grasps it. His humor is understated, which makes for an entertaining show, and he has years of real-world experience, making his opinions relevant.

Deep Fried Bytes

I started listening to this show because I knew the hosts - Keith and Woody. I continue to listen because I like the content. This is a show that continually improves itself. It's been a couple months since their last episode, so I don't know if they are still committed to a regular schedule.

Polymorphic Podcast

There are two things I really like about Craig Shoemaker's podcast: The guests tend to be those who don't appear on other podcasts; and Craig keeps the show to a reasonable length. The show always remains fresh for me.

Run As Radio

This podcast is a little outside my comfort zone because it focuses on networking and other IT topics, rather than on programming and architecture. But Richard Campbell and Greg Hughes work to keep it accessible, so I always learn something.

Feel the Func

This one is relatively new and I started listening to it right from the start. Mike Neel leads the discussion and does most of the talking. I don’t think they are doing a lot of editing, which means that you always hear the good parts and bad parts of each conversation.  I really enjoyed a recent show in which Mike interviewed Brian Prince and Jennifer Marsman.


Sod This

I had just about given up hope on this one. I loved the first 5 episodes that DevExpress evangelists Oliver Sturm and Gary Short put together. They combined interesting interviews with their own witty conversations. Then, after five episodes, Sod This disappeared into limbo. I assumed they had abandoned the project, but they released Episode 6 yesterday.

Thirsty Developer

I just recently began to listen to The Thirsty Developer. It sounds like many of the episodes were recorded in a busy Starbucks, which degrades the sound quality but creates a more relaxed atmosphere for the guests, so the conversation flows freely and enthusiastically. Larry Clarkin is the main host, but he is sometimes assisted by Dave Bost. They do a good job keeping conversations moving and engaging.


Technically I'm still subscribing to this podcast, but I don't know if there are any plans to resume it. I really liked the first few shows I heard, which focused on advanced .Net topics. But they haven't released an episode since April and the last two consisted largely of arguments about whether or not the Alt.Net community is still relevant. Maybe the silence answered that question.

Herding Code

I've been listening to Herding Code since its first episode. Originally this was very different from most podcasts because they rarely had guests. Instead, the four hosts exchanged ideas with one another on a preselected topic. This format worked because John Galloway, Scott Koon, K. Scott Allen and Kevin Dente are engaging and articulate enough to keep bringing fresh ideas on each topic. Recently, they have switched to inviting more guests to the show.

Stack Overflow

This is another podcast that seldom relies on guests to interview. Instead, Jeff Atwood and Joel Spolsky chat (or spar) about a variety of topics. Again, this is a format that only works if the hosts are very clever and these two guys definitely are. Because Atwood is still in the process of building the popular StackOverflow web site, much of the conversation revolves around the challenges he faces.

So these are the podcasts that I listen to regularly. I can keep up because I have a long commute, and I enjoy my iPod at the gym and while doing housework. Having said that, I’d love to hear about high-quality technical podcasts that you can recommend.

Tuesday, 01 September 2009 13:13:34 (GMT Daylight Time, UTC+01:00)
# Monday, 31 August 2009

Back To Basics

This series of articles provides an introduction to relational databases in general and Microsoft SQL Server in particular.

Part 1: What is a Database?

Part 2: Relationships

Part 3: Selecting data from a table

Part 4: Aggregating data

Part 5: Joins

Part 6: Inserts, Updates and Deletes

Monday, 31 August 2009 13:04:07 (GMT Daylight Time, UTC+01:00)

Episode 46

In this interview, Craig Berntson describes continuous integration and how he uses it to increase productivity on his projects.

16 mins, 10 secs

Monday, 31 August 2009 12:45:00 (GMT Daylight Time, UTC+01:00)

Live Mesh offers a folder synchronization feature that allows you to designate a folder on your computer and replicate all changes to files in that folder to a matching folder on one or more other computers. These other computers could be:

  • computers that you own (so that you can synchronize documents or project files between a home and a work PC);
  • your friend's computer (allowing multiple users to update the same documents and making sure everyone has the latest changes);
  • a virtual computer in the cloud (so that your documents are backed up and accessible anywhere you have an Internet connection).

Here's how it works. To get started, go to http://mesh.com and sign in with your Windows Live ID. (If you don't yet have a Live ID, you can get one at https://signup.live.com). Read the license agreement and click "I Agree" to proceed.

The first page displayed is the Mesh Devices page, which shows all the devices you wish to manage through Live Mesh. The Devices page allows you to add computers and other devices to your mesh so that you can view and manage them.
You will see an icon representing each computer currently included in your mesh.  One of these icons is the "Live Desktop". This is a virtual device that exists "in the cloud", meaning it is available from anywhere as long as you have Internet access. If the bottom arrow is not pointing to the Live Desktop, click the Live Desktop icon once to rotate it to the bottom. Then click Connect to view this device. You will see a screen that looks similar to the desktop of a computer running Windows XP, Vista or 7. 

Live Mesh

Although this Live Desktop computer has no Start menu, it does have a couple of empty folders by default: "Home Documents" and "Work Documents". These folders exist to hold documents that you can synchronize from your own physical computer.

Live Desktop

To be able to synchronize documents on your physical computer, you must add your computer to your mesh.

To do so, you must be sitting at your computer. From the Devices page, click the "Add Device" icon. The icon will rotate to the bottom. From the dropdown at the bottom of the screen, select whether your computer is running a 32-bit or 64-bit version of Windows. Even though Windows 7 is not listed in this dropdown, I am running it on 64-bit Windows 7 and it seems to work fine. Click "Install" to begin installing the Mesh tools to your local computer and to add your computer to the mesh.

You will be prompted to give your computer a descriptive name and it will appear hereafter on the Devices page with that name. Step through the Installation wizard to install the Mesh client software on your computer.

Once the Mesh tools are installed on your computer, you can begin synchronizing folders.

You should see two shortcuts on your desktop for "Home Documents" and "Work Documents". Click these shortcuts to create the folders and have them synchronize with the folders of the same name on the Live Desktop. Any files that you drag into either of these folders will be automatically synchronized to the folder of the same name on your Live Desktop. Any subsequent updates to files in those folders will show up in the matching files on your Live Desktop.

But you are not limited to only synchronizing these two folders. You can synchronize any folder on your computer.

To synchronize a folder, you need to add it to your mesh. Right-click the folder in Windows Explorer and select "Add folder to Live Mesh...".

Add Folder right-click menu

The "Add Folder" dialog displays.

Add Folder dialog

Here you can give the folder a more descriptive name if you like. Click "Show Synchronization Options" in the dialog. The dialog will expand to list every device in your mesh.

Add Folder Synchronization options

You can specify the devices that should synchronize with this folder and how synchronization occurs. By default, the folder is synchronized between your local computer and the Live Desktop and that synchronization occurs whenever a file is added, deleted or modified on either device. You can click the arrow next to a device to change the options. For example, you may wish to skip large files because those take a long time to synchronize and may use up your disk space quota.

It may also be useful to synchronize a folder with another physical computer. To do so, you must first add that other computer to your Mesh account (see above); then it will appear in the Synchronization Options dialog and you can specify exactly how you want to share the selected folder with the selected device.

When you are finished, the folder will turn blue to indicate it is synchronized using Live Mesh.

Live Mesh is a simple way to back up files, make data accessible from multiple locations, and collaborate with other users.

Monday, 31 August 2009 02:44:24 (GMT Daylight Time, UTC+01:00)
# Saturday, 29 August 2009

In Preview 6 of Microsoft's Managed Extensibility Framework (MEF), the framework changed the rules on matching multiple exports to a single import.

In previous versions of MEF, the attribute syntax was identical whether we were matching a single item or multiple items to an Import. Both scenarios used the [Import] attribute to tell MEF to find exports with a matching contract.

For example, if your application is using MEF to match a string variable, based on a string contract, you would use code similar to the following

[Import]
string SomeString { get; set; }

This works if MEF finds exactly one matching export string, as in the following code.

[Export]
string ThatExportedMefString
{
    get
    {
        return "This string was provided by an MEF contract. It is from an external assembly.";
    }
}

If there is a chance MEF might find multiple Exports that satisfy the above contract, you would need (in previous versions) to modify the imported type, so that it implements IEnumerable, as in the following example

[Import]
IEnumerable<string> SomeStringList { get; set; }

Beginning with MEF Preview 6, the rule for the attribute becomes more strict. If you are matching a number of items into an IEnumerable set of items on your import, you must replace the Import attribute with the ImportMany attribute. In the above example, the Import declaration becomes

[ImportMany]
IEnumerable<string> SomeStringList { get; set; }

The main advantage of this change is that ImportMany will not blow up if MEF finds no matching export for the contract. Import throws an exception if it cannot find a matching export.

Of course, your code will need to handle cases in which there are 0 matches, 1 match, or many matches when MEF seeks exports to match this contract. In the above example, that code might look like

foreach (string s in SomeStringList)
{
    // Process each matched export; the loop simply does nothing if there were no matches
}

In my opinion, when you are writing an Import and you don't have control over the Export (for example, if you are allowing third-party vendors to supply the matching Exports), you should always use the ImportMany attribute. The only time you should use the Import attribute is if you are only looking for contract matches in assemblies that you have written and you can guarantee that there will always be exactly one match.

Saturday, 29 August 2009 20:41:51 (GMT Daylight Time, UTC+01:00)