Another Microsoft Certification acquired!


Ever since gaining my MCTS (Microsoft Certified Technology Specialist) and MCPD (Microsoft Certified Professional Developer) certificates at the end of 2011 and the early part of 2012, I've had an appetite to acquire more.  Life seemed to get in the way of this during 2012, so that was, unfortunately, a quiet year on the certification front.


Well, it's now 2013, and Microsoft have recently revamped a lot of their certification offerings.  A new type of certification that they've introduced is the Microsoft Specialist.  The Microsoft Specialist certification seems to be a replacement for the old MCTS (Microsoft Certified Technology Specialist) and is effectively a certificate awarded for showing competence in a specific piece of Microsoft technology, of which there are quite a number.


During the latter part of 2012 and the early part of 2013, Microsoft were running a promotion offering a free exam.  This was exam 70-480 – Programming in HTML5 with JavaScript and CSS3.  Successfully passing this exam would award the exam taker the certification of Microsoft Specialist – Programming in HTML5 with JavaScript and CSS3.


Well, towards the end of February this year, I sat and passed the exam, acquiring the certification of Microsoft Specialist – Programming in HTML5 with JavaScript and CSS3.


This is one of three exams that, when all three are successfully passed, will gain the new-style Microsoft Certified Solutions Developer – Web Applications certificate.  I guess the rest of this year's certification journey has just been mapped out!

Razor’s Conditional Attributes Bit Me!

When ASP.NET MVC 4 was released, Microsoft upgraded the Razor view engine that ships with ASP.NET MVC to version 2, and with it came a number of improvements. One of these improvements was a feature called "conditional attributes".


Conditional attributes are a new feature that allows you to shortcut the "boilerplate" null-check code needed when rendering an attribute on an HTML element. If a model property or local variable that is used to output the value of an HTML element's attribute evaluates to null, the Razor engine will now automatically discard the entire (empty) attribute rather than render it.


Thus, whereas we’d previously have to do something like this:

<div @{ if (Model.ClassName != null) { <text>class="@Model.ClassName"</text> } }>Content</div>

to ensure that, if @Model.ClassName was null, we wouldn’t render the entire class attribute, the new Conditional Attributes feature allows us to do this:

<div class="@Model.ClassName">Content</div>

and the Razor parser is smart enough not to render the class="" literal attribute text if @Model.ClassName evaluates to null.  So we don't get this:

<div class="">Content</div>

But instead we get much cleaner markup like this:

<div>Content</div>


Razor's conditional attributes also work similarly with boolean values, so you can, for example, output a checked attribute on a checkbox input element like so:

<input type="checkbox" checked="@IsChecked">

If @IsChecked evaluates to true, the checked attribute is rendered with a value that is the same as the attribute's name:

<input type="checkbox" checked="checked">

However, if @IsChecked evaluates to false, the entire attribute is not rendered.  Andrew Nurse, a developer on Microsoft’s Razor team, has a great blog post about this and the other new features in Razor v2.


So, this is all well and good; however, there is a huge gotcha that you need to be aware of surrounding conditional attributes!  I discovered this when upgrading a project originally built in ASP.NET MVC 3 (which had Razor v1 and thus didn't have the conditional attributes feature) to ASP.NET MVC 4.  This previously working project suddenly developed bugs that weren't there before.  Upon inspection, it turned out that Razor's new conditional attributes feature had introduced a breaking change into my code.


Basically, I had an ASP.NET MVC strongly-typed view that displayed a grid of data.  Part of the model for this view was an object used to hold some basic data relating to how the user had configured the grid.  This was simply non-sensitive data such as the number of records per page, the column name upon which the grid was sorted, and the sort order (ascending or descending).  This was output to the view as a number of hidden input fields within the view's form, so that it could be posted back to the server upon each page request:

<input type="hidden" name="pagesize" value="@Model.PagingInfo.ItemsPerPage" />
<input type="hidden" name="sortname" value="@Model.PagingInfo.SortName" />
<input type="hidden" name="sortasc" value="@Model.PagingInfo.SortAscending" />

The problem lay within that last line.  @Model.PagingInfo.SortAscending is a boolean property that evaluates to true if the user wants to sort ascending, or false if the user wants to sort descending.  In ASP.NET MVC 3, this worked just fine, with the resulting output looking something like:

<input type="hidden" name="sortasc" value="false" />

when the user had elected to sort descending.  However, upon upgrading the project to ASP.NET MVC 4, Razor v2's conditional attributes feature saw that the @Model.PagingInfo.SortAscending model property was a boolean that evaluated to false, and decided not to render the value attribute at all, so my output became:

<input type="hidden" name="sortasc" />

When the user had selected to sort in an ascending manner, and the value of the @Model.PagingInfo.SortAscending property evaluated to true, the output was even stranger:

<input type="hidden" name="sortasc" value="value" />

This was the "cleverness" of the Razor parser kicking in and outputting the attribute's name as its value when my boolean property evaluated to true.  This makes lots of sense when we're outputting a series of checkboxes and we want one of them to be checked, which requires the checked="checked" attribute to be added to the checked element, but not so much sense when we actually want to output the string "true" or "false" as a value attribute's value in a hidden input form field!


So whilst the output each time was valid HTML markup with no errors being displayed, this clearly affected the functionality of my page.  POSTs of this page, which would cause the page to redisplay (for example, when the user selected a different sort column or a different sort order) would always result in the grid sorting in a descending manner, irrespective of the user’s choice.


This was due to the ASP.NET MVC model binder finding no suitable value with which to bind the sortasc parameter of the controller method that was invoked when the page was posted back to the server:

public ViewResult List(string searchterm, string sortname, bool sortasc = false, int id = Page, int pagesize = PageSize)

Of course, the sortasc parameter’s default value was then always used, resulting in the “grid is always descending” behaviour!


This was an interesting bug to hunt down within my code, and a particularly annoying one too, as the code had worked perfectly in ASP.NET MVC 3.  However, once it was discovered how and why this bug reared its head, it was also simple enough to fix.


The fix is simply to append .ToString() to any boolean variable or model property that is used purely to render a true or false value into an attribute on an HTML element.


Thus, my above code was fixed quite simply like so:

<input type="hidden" name="pagesize" value="@Model.PagingInfo.ItemsPerPage" />
<input type="hidden" name="sortname" value="@Model.PagingInfo.SortName" />
<input type="hidden" name="sortasc" value="@Model.PagingInfo.SortAscending.ToString()" />

The addition of the .ToString() forces the evaluation of the boolean, and its resulting conversion to a string, before the Razor engine's parser is able to work its "conditional attribute" cleverness.  It simply results in the boolean's string value being output as the attribute's value every time, like this:

<input type="hidden" name="sortasc" value="True" />

So, whilst this issue didn't manifest itself for me until I upgraded an older ASP.NET MVC 3 project to ASP.NET MVC 4, it's quite feasible that a developer could write code like this from scratch in MVC 4 and expect the Razor parser to simply output the value of the boolean as the attribute value.  There's an open case for this in the ASP.NET Web Stack issue tracker on CodePlex, and whilst there is a simple enough workaround for the problem, it's the "breaking change" nature of the issue that is most concerning.


Let’s be careful out there and remember to .ToString() our booleans!

DDD North 2012 Conference Write-up

This past weekend, on Saturday 13th October 2012, the 2nd Developer Developer Developer North conference was held at the University of Bradford.  I attended the conference, which was my first Developer Developer Developer (DDD) event ever, and it was a cracker!

DeveloperDeveloperDeveloper events are a series of conferences held around the UK, and in some locations abroad, aimed primarily at Microsoft/.NET developers.  The conferences are free to attend and are made possible by the support of a wonderful set of sponsors.

The University of Bradford was a great venue for the conference.  It had plenty of space and rooms available to accommodate the DDDNorth event, which had 5 parallel tracks of talks and 5 sessions throughout the day.  Each session was an hour in length, with 15-minute breaks in between the 2 morning sessions and the 3 afternoon sessions.  A catered lunch was provided free of charge to attendees during the generous 1.5 hour lunch break.

As there were 5 parallel tracks of sessions, it was often a difficult choice to pick just one session to attend.  This was especially true for me in the 2nd session time-slot, where I really wanted to attend all 5 parallel talks!  Unfortunately, I could only pick one.

The first talk I attended was Garry Shutler's "10 Practices that make me the developer I am today".  This was a talk aimed at more "entry level" developers, but I thought I'd attend to see if there were a few nuggets of wisdom that I perhaps didn't know.

Garry told us that standards matter, although what they actually are doesn't.  Using StyleCop to help enforce standards across your team can help to keep consistency, and there's even ReSharper integration to help with this.  Garry also told us of the importance of code reviews, although much of their value comes when they're done at a "story" (as in, an Agile user-story) level rather than at a more granular level.  We should learn constantly, as no-one else cares about our own personal learning (for both our current and future jobs), and we shouldn't wait for our employers to do this for us.  We should learn new languages, especially ones that are significantly different from those that we use every day.  It's a big investment, but worth it, as concepts and paradigms in one language can help us understand similar concepts in other languages.  To help with our learning, we should leverage the experience of other developers around us.  It's only obvious once you know!  Testing and automation are a huge help in getting a faster development feedback loop, allowing us to effectively "go faster" in our development by correcting our course more frequently.  Within our code, we should trust no-one, by ensuring we always implement preconditions to prevent such annoyances as pesky "Null Reference Exceptions" when we were expecting an object to be passed to us, and we should also log excessively: not just errors and exceptions, but everything.  This is a big help when trying to debug issues in production environments where a real debugger can't be used.
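
As a rough illustration of the "trust no-one" precondition advice, here's a minimal C# sketch of my own (the Order and OrderProcessor types are purely hypothetical, and not from Garry's talk):

using System;
using System.Collections.Generic;

public class Order
{
    public List<string> Lines { get; set; }
}

public class OrderProcessor
{
    // Guard clauses assert our preconditions up front, so a bad caller fails
    // fast with a meaningful exception rather than causing a pesky
    // NullReferenceException somewhere deep inside the method.
    public void Process(Order order)
    {
        if (order == null)
            throw new ArgumentNullException("order");

        if (order.Lines == null || order.Lines.Count == 0)
            throw new ArgumentException("Order must contain at least one line.", "order");

        // ...from here on, we can safely assume a valid order...
    }
}

The idea is simply that the method states its assumptions explicitly at the top, so a violation is reported at the boundary where it occurred rather than several stack frames deeper.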

The next talk of the day was Liam Westley's "Async C# 5.0 - Patterns for Real World Use", which was a great talk about the asynchronous programming features introduced into C# 5.0 with the async and await keywords.  Liam's talk specifically focused on the WhenAny and WhenAll methods that operate on sets of tasks, and used the concept of copying or downloading music files of varying formats to demonstrate the versatility of the various async methods.

Liam tells us that we should return a Task instead of void where we would previously have done so.  This gives the caller information about what's happening (or has happened) during our method.  He then goes on to tell us about the use of the WhenAll method for dealing with lists (i.e. List<Task<string>>) of tasks.  Further processing can happen when all tasks have completed, as all tasks are important here.  Next up, Liam tells us about the very useful and versatile WhenAny method, which can be used in numerous ways.  One of these was maintaining a limited batch of a specific number of files when copying/downloading large numbers of files.  The WhenAny method detects completions, removes them from the batch and replaces them with a new file copy task, thereby effectively throttling the downloads to a certain batch size.  WhenAny was also used to show its usefulness in redundancy.  This can be used for competing services where the first to return wins; for example, downloading multiple versions/formats of the same music file and automatically playing the one that downloads first.  Other files will continue to download in the background, but can be cancelled if required, needing only a single cancellation token for all outstanding tasks.  This technique can also be used for early bail-out, when we want to cancel outstanding tasks based upon notifications from one or more completed tasks.  The final interesting usage of WhenAny was for interleaving.  Liam's demo here showed music files and their associated MD5 hashes being downloaded.  Once both the music file and the associated .md5 hash file had downloaded, a hash check could be computed on the file pair.  Very clever stuff indeed!  Liam ended his talk with a mention of a Microsoft white-paper that he recommended we all download for further reading: "Task-Based Asynchronous Pattern" by Stephen Toub.  Oh, and a rickroll.
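
To give a flavour of that batching/throttling technique, here's a simplified sketch of my own (not Liam's actual demo code; DownloadFileAsync is a hypothetical stand-in for whatever really copies or downloads a file):

using System.Collections.Generic;
using System.Threading.Tasks;

public static class ThrottledDownloader
{
    // Downloads every file in 'urls', but never runs more than
    // 'maxConcurrent' downloads at the same time (assumes maxConcurrent >= 1).
    public static async Task DownloadAllAsync(IEnumerable<string> urls, int maxConcurrent)
    {
        var pending = new Queue<string>(urls);
        var inFlight = new List<Task>();

        while (pending.Count > 0 || inFlight.Count > 0)
        {
            // Top the batch up to its maximum size.
            while (inFlight.Count < maxConcurrent && pending.Count > 0)
            {
                inFlight.Add(DownloadFileAsync(pending.Dequeue()));
            }

            // Wait for any single download to complete, then remove it,
            // freeing a slot in the batch for the next file.
            var completed = await Task.WhenAny(inFlight);
            inFlight.Remove(completed);
        }
    }

    // Hypothetical stand-in for the real file copy/download work.
    private static Task DownloadFileAsync(string url)
    {
        return Task.Delay(100);
    }
}

Task.WhenAny returns as soon as any one of the in-flight tasks completes, which is what lets us top the batch back up one file at a time rather than waiting for the whole batch to finish.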

After a short break, the following talk was Gemma Cameron's "BDD - Look Ma! No frameworks".  Gemma promised an interactive session with this one, all geared around Behaviour-Driven Development (BDD) and design, and we weren't disappointed.  A very interesting talk that caused all the attendees of the talk to have a real good think about how we would document the behaviour of buying an apple!

Gemma started by asking us to think about why we test.  Testing creates good code because it creates a good design for our code.  When we test first, we're forced into making our production code testable.  We should beware of retro-fitting tests to existing code that perhaps was not written using a test-first approach; this has the potential to "bake in" bad code by fitting a (passing) unit test around it!  Gemma goes on to say that BDD has often been called "TDD done right", and in many ways this is true.  BDD is about us as developers asking "Why?" rather than asking "How?".  As developers, we're good at solving problems and thinking about how we might implement a solution, but it's very easy for us to lose sight of why a feature is implemented when viewed from a business requirement perspective.  BDD, done correctly, forces us to consider the why of a feature's business requirements, as that is baked right into our BDD tests, which in turn document those business requirements.  Adding such expressiveness to our test code, and specifically expressiveness written in a language understandable not just by developers but also by business analysts, project managers and product owners, brings developers closer to product owners and to all the other roles in between.

BDD isn't really about unit tests, it's about working from the top-down instead of from the bottom-up. This means we start with the abstract feature requirements (the why?) and gradually drill down into the specifics (the how?) of how we'll implement that within our code.  A popular starting point for organising our BDD tests is to use the GWT (Given When Then) syntax, for example: Given [initial context], when [event occurs], then [ensure some outcomes]. Gemma does point out that although GWT can be a useful starting point, it's often restrictive as it doesn't always lend itself to best expressing our requirements.

Gemma's session continued with all attendees attempting to put this into practice by writing a test that correctly and sufficiently expresses our requirements around purchasing an apple!  Our first collective attempt, which followed the GWT syntax, was something along the lines of:

public void BuyAnApple()
{
   GivenAppleCosts50();
   AndThatIHaveOneAppleInMyBasket();
   WhenICheckout();
   ThenTotalIs50();
}

Note our use of the GWT syntax for expressing our requirements.  After some further discussion and reflection, we eventually arrived at a much better test:

public void ShopKeeperSellsAnApple()
{
   AppleCosts50();
   CustomerHasAppleInTheirBasket();
   WhenICheckout();
   ThenTotalIs50();
}

Note that here we’ve dispensed with the formal GWT syntax, instead preferring a more natural language to express our requirements and actual real-world behaviour.

In reference to the title of her talk, Gemma was quite down on the use of frameworks to help with the process of writing BDD tests, preferring instead to "hand-code" all of the test syntax.  Frameworks can be helpful, but there's the potential to focus (or lean) too much on the tool or framework rather than on getting the behaviour and requirements documented in the common language.  It's this language that we need to work on improving, as it will act as our product documentation, both for ourselves as developers and for the business people.  It's this language that's invaluable in letting us know what we did, and why, when we return to that code in 6 months' time!

After this we had a very nice lunch with sandwiches, fruit and chocolate, all provided by the generous sponsors of the event.  There were a number of Grok Talks during lunch.  These are short 10-15 minute ad-hoc talks given by various attendees of the event.  Unfortunately, I didn't get to attend any of the Grok Talks as I was far too busy stuffing my face!  :)

The first session after lunch was Rob Ashton's "Javascript Sucks And It Doesn't Matter!".  Rob's talk was advertised as being controversial, and he didn't disappoint!  Pretty much straight out of the blocks, Rob tells us that he doesn't use semi-colons in his JavaScript code, something that he's sure would really annoy Douglas Crockford, the man who's trying to get us all to write more "correct" JavaScript.  Rob goes on to say it's specifically because it'd get up Douglas Crockford's nose that he doesn't use semi-colons!

Rob tells us that JavaScript is a "broken" language in that it's dynamic, allows much flexibility with its syntax (witness Rob eschewing semi-colons), and will happily perform the strangest type coercion in order to allow you to add apples to oranges.  Rob goes on to say that, despite JavaScript's fast-and-loose approach, this doesn't really matter, as we've got some very nice tools at our disposal to help "keep JavaScript sane".  JSLint and JSHint are both very useful code-quality tools that allow us to have our JavaScript inspected for potential problems, including unsafe comparisons, accidentally declaring global variables (rather than locally-scoped ones), un-strict code and stylistic correctness, amongst other things.  We can even run these tools from the command line as part of a continuous integration process with the use of node.js, which is effectively JavaScript for the server.

Rob says that our JavaScript code should be tested just as much as we would test our C# code, and to this end a tool called Zombie comes in very handy.  Zombie is a "headless browser", itself written in JavaScript, which allows us to put our JavaScript code through its paces from a command-line-driven, continuous testing process.  Rob says that we should try to avoid automating a real browser, as we would do with Selenium, as this is a much slower process.  Testing with Zombie is fast and provides the quick feedback loop that we get with a speedy continuous testing process, allowing a more rapid develop/debug cycle.

Of course, it's not all about leaning on the tools to prevent bad JavaScript code from being written.  Rob highlights the fact that we need good discipline to write good code.  At this point, Rob also wanted to address the thorny issue of TypeScript.  TypeScript is a new language from Microsoft that is a superset of JavaScript.  It's advertised as "application-scale" JavaScript, compiles down to plain old JavaScript, and attempts to add some static typing and better object-oriented support to the JavaScript language.  Rob suggests that TypeScript is largely "application-scale" marketing rather than anything else, and points out that much of what TypeScript gives us can be achieved without the need for a new "language".

We all know that JavaScript files and functions can be included within other JavaScript files simply by referencing them; however, this quickly gets unwieldy and unmaintainable.  Moreover, the order in which JavaScript files are loaded is often very important.  This reminds me of my own struggles with developing in "Classic" ASP/VBScript many years ago and just how easy it can be to tie yourself up in knots when including one file within another (which usually, in turn, includes further files) and managing that complex chain of inclusion.  Tools such as RequireJS can help to bring order to this inclusion madness and attempt to make JavaScript more modular by making it easier to ensure the relevant functionality has been loaded and brought into scope before attempting to execute code that relies upon it.  Other approaches, such as CommonJS, offer similar functionality but go further, adding capabilities in the areas of modules, packages and promises (which greatly simplify asynchronous programming).

A simple example of how RequireJS helps to achieve its goal can be seen here:

var otherFile = require("./otherFile.js");
otherFile.doSomething();

A similar concept to this is available within TypeScript, too:

import otherFile = module("otherFile");

This simplification and improvement of managing required/included files is slated to be included within ECMAScript Version 6, currently code-named "Harmony".  ECMAScript is the standardized version of the JavaScript language, ratified by ECMA (European Computer Manufacturers Association); however, when Version 6 will finally be released is anyone's guess!

The last session of the day was Ian Cooper’s “Event-Driven Architecture”.  This was a fairly heavyweight talk from Ian that gave us a deep-dive into the concepts and best-practices around architecting an application in an event-driven (or service oriented) approach.

Ian started by reminding us of the 4 tenets of service orientation:

  • Boundaries are explicit
  • Services are autonomous
  • Services share schema and contract, not class
  • Service compatibility is determined based on policy

Event-driven, or service oriented, architecture is a set of design principles that, much like object-orientation, help us to architect an application that is composed of individual services.  Services are autonomous "mini-applications" that perform some discrete function.  The entire application is composed of many of these services, which talk to each other via message passing.  There are explicit boundaries between these services, and each service passes all of the data required to the next service in the chain in order for that service to perform its function.  Services are effectively "black boxes" that share nothing of their internal workings or state, and assert their requirements, constraints and capabilities via a public schema or contract.

Ian goes on to discuss the various types of inter-service communication that are available within a Service Oriented Architecture.  The simplest type is "Request-Reply".  This is something we're all familiar with, as it's exactly how the world-wide web works: we (the client) request something, then we wait whilst the server composes and sends back its reply.  A slightly better approach, which avoids the necessity for the client to "wait" for the response, is known as "Request-Reaction".  Here, the requestor (client) no longer has to wait for the data to be sent back; after an initial acknowledgement of the request by the server, the client can receive the data at a later point in time.  The requestor usually "polls" for the result, but can instead be informed when the result is available (i.e. similar to a callback function).

The next type of communication is "inversion of communication".  This allows a main service to push all of its events or messages to an external, central message queue.  Consumers who are interested in those events can then simply "listen" to that queue.  This helps to reduce the need for systems or services to "know" about each other, thereby reducing coupling between autonomous services even further.  This is helpful as part of the overall architecture, as the more one system needs to "know" about another, the harder it is to integrate those systems into a cohesive whole.  In this "event publishing" scenario, the publisher of the events doesn't need to know anything about the consumers of those events!  Ian used the example of a hotel's internal systems: a main "reservation" system publishes a "Reservation Made!" event to the central message queue, and a "room cleaning" service, watching that queue for "Reservation Made!" events, sees the message and arranges for room service personnel to clean the room ahead of the hotel guest's arrival.
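
As a very rough sketch of that decoupling (all of the types and names here are mine, purely for illustration, and a real system would of course use a proper message queue rather than an in-memory list):

using System;
using System.Collections.Generic;

// The event is just a message: plain data, no behaviour.
public class ReservationMade
{
    public int RoomNumber { get; set; }
    public DateTime ArrivalDate { get; set; }
}

// A minimal in-memory stand-in for the central message queue.
public static class MessageQueue
{
    private static readonly List<Action<ReservationMade>> subscribers =
        new List<Action<ReservationMade>>();

    public static void Subscribe(Action<ReservationMade> handler)
    {
        subscribers.Add(handler);
    }

    public static void Publish(ReservationMade message)
    {
        // The publisher knows nothing about who (if anyone) is listening.
        foreach (var handler in subscribers)
        {
            handler(message);
        }
    }
}

public class Program
{
    public static void Main()
    {
        // The room cleaning service listens for reservation events...
        MessageQueue.Subscribe(m =>
            Console.WriteLine("Schedule cleaning for room " + m.RoomNumber));

        // ...and the reservation system publishes them, with no knowledge
        // of the room cleaning service at all.
        MessageQueue.Publish(new ReservationMade
        {
            RoomNumber = 101,
            ArrivalDate = new DateTime(2012, 10, 20)
        });
    }
}

The key point is visible in Main: the publisher and the subscriber never reference each other; both know only about the queue and the ReservationMade message type.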

Ian continued by talking about messages and events.  What is an event?  Well, an event is simply a message.  Messages are the data that passes into and out of our services, in a format or schema defined by the service, and they allow the service to perform functions based upon that data.  Messages can be either thin or fat (see below for further information) and are usually communicated along a non-durable channel (which essentially means the messages are not persisted to disk) for high throughput.  Messages are passed along channels and queues.  Channels allow the passing of messages between services and usually operate in a real-time manner.  Some channels can act as queues, and queues will often persist messages (thus acting as a durable channel), allowing long-term storage and delaying of messages.  A central message queue would most likely take this approach to its handling of messages.  Crucially, a channel should only operate on one type of message, with separate channels used when different message types need to be passed around.

Often, channels and queues will operate in conjunction with related services such as a routing service or a transformation service.  Routing services ensure messages are routed to the correct destination, often determined by a business process or workflow (see the orchestration details below), whilst transformation services ensure messages can be converted (transformed) from one type to another.  This can involve adding additional data to the message, removing extraneous data that's no longer required, or simply changing the message from one schema to another.

Ian then talked about the concept of reference data, which is the data used by the services themselves.  This can be either private data, used by the service itself internally, or public data, which is the data that services pass around within their messages.  Public data can be delivered in two different ways, and here Ian talked about thin messages and fat messages.  A thin message doesn't include all of the data that subsequent services may require from the originating service in order to do their jobs.  They are given a small amount of data (keeping the message "thin") and told where they can go to retrieve any further data that they may require.  This usually involves going back to the originating service with a request for that additional data.  On the other hand, messages can be "fat" and include all of the possible data that subsequent services may require in order to perform their functions.  They may even have more data available to them than they need.  There are pros and cons to both message types.  Thin messages create a need for services to perform extra communication in order to retrieve additional data, which in turn requires services to "know more" about each other.  Fat messages avoid this extra communication, but create bigger messages and may introduce security issues by publicly exposing data that may not be required by the downstream services that will operate on the messages.
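
To make the thin/fat distinction concrete, here's an illustrative pair of message shapes (my own invention, not from Ian's talk):

using System;

// A "thin" message: just enough to identify the reservation, plus a pointer
// back to the originating service for any further data a consumer needs.
public class ReservationMadeThin
{
    public int ReservationId { get; set; }
    public Uri DetailsUri { get; set; }
}

// A "fat" message: everything a downstream service could plausibly need, at
// the cost of a bigger payload and wider public exposure of the data.
public class ReservationMadeFat
{
    public int ReservationId { get; set; }
    public string GuestName { get; set; }
    public int RoomNumber { get; set; }
    public DateTime ArrivalDate { get; set; }
    public DateTime DepartureDate { get; set; }
    public decimal RoomRate { get; set; }
}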

Ian continued by talking about sagas.  Sagas are like long-running transactions that pass through multiple services.  Ian was keen to point out that actual transactions should never cross service boundaries; this would defeat the concept of services being autonomous and having explicit boundaries.  It's quite possible, though, that business processes and workflows will indeed be composed of several discrete steps performed in a specific order, where each of these steps has its functionality provided by a separate service.  The alternative approach to a service-spanning transaction is to raise additional messages (such as a "Reservation Failure" event) that interested consumers must listen for and respond to appropriately.

When many different services are required as part of a long-running process, we utilise an orchestration service to manage the "flow" of the messages.  Orchestration ensures that the correct events or messages are passed to the appropriate service at the appropriate time, and helps to define the steps of a business process.  An orchestration service effectively knows all about the various services involved in a long-running "story" so that they (the services themselves) don't have to!

Ian had warned us at the beginning that there was a lot of information to cover in his talk, and at this point we had, unfortunately, run out of time.  It was time for all of the attendees to gather in the main hall for the grand prize draws.  We’d each been given a raffle ticket earlier in the day, and now was the time that we would see if we’d won one of the many, many prizes being given away at the end of the day.  Unfortunately, I didn’t win a thing – although the raffle tickets with numbers either side of mine were called out!  This didn’t matter, though, as I’d had a brilliant day at a very well run event and listened to some impressive speakers talking about incredibly interesting subjects.

Overall, I really enjoyed my first DDD event and I can’t wait until next year to be able to attend DDD North (and hopefully some of the other DDD events around the UK) again!