DDD North 2017 In Review

On Saturday, 14th October 2017, the 7th annual DDD North event took place, this time at the University of Bradford.

One nice element of the DDD North conferences (as opposed to the various other DDD conferences around the UK) is that I'm able to drive to the conference on the morning of the event and drive home again after the event has finished.  This time, the journey was merely 1 hour 20 minutes by car, so I didn't have to get up too early.  On the Saturday morning, after having a quick cup of coffee and some toast at home, I headed off towards Bradford for the DDD North event.

After arriving at the venue and parking my car in one of the ample free car parks available, I headed to the Richmond Building reception area where the conference attendees were gathering.  After registering my attendance and collecting my conference badge, I headed into the main foyer area to grab some coffee and breakfast.  The catering has always been particularly good at the DDD North conferences and this time round was no exception.  Being a vegetarian nowadays, I can no longer avail myself of a sausage or bacon roll, both of which were available, however on this occasion there were also veggie sausage breakfast rolls available.  A very nice and thoughtful touch!  And delicious, too!

After some lovely breakfast and a couple of cups of coffee, it was soon time to head off to the first session of the day.  This one was to be Colin Mackay's User Story Mapping For Beginners.

Colin informs us that his talk will be very hands-on, and so he hands out some sticky notes and markers to some of the session attendees, but unfortunately, runs out of stock of them before being able to supply everyone.

Colin tells us about story mapping and shares a quote from Martin Fowler:

"Story mapping is a technique that provides the big picture that a pile of stories so often misses"

Story mapping is essentially a way of arranging our user stories, written out on sticky notes, into meaningful "groups" of stories, tasks, and sections of application or business functionality.  Colin tells us that it's a very helpful technique for driving out a "ubiquitous language" and shares an example of how he was able to understand a sales person's usage of the phrase "closing off a customer" to mean closing a sale, rather than assuming it to mean that the customer no longer had a relationship with the supplier.  Colin also states that a document cannot always tell you the whole story.  He shares a picture from his own wedding which was taken in a back alley behind the wedding venue.  He says how the picture appears unusual for a wedding photo, but the photo doesn't explain that there'd been a fire alarm in the building and all the wedding guests had to gather in the back alley at that time, so they decided to take a photo of the event!  He also tells us how User Story Mapping is great for sparking conversations and helps to improve prioritisation of software development tasks.

Colin then gets the attendees that have the sticky notes and the markers to actually write out some user story tasks based upon a person's morning routine.  He states that this is an exercise from the book User Story Mapping by Jeff Patton.  Everyone is given around 5 minutes to do this and afterwards, Colin collects the sticky notes and starts to stick them onto a whiteboard.  Whilst he's doing this, he tells us that there are three levels of tasks within User Story Mapping.  At the very top level, there's "Sea" level.  These are the user goals and each task within is atomic - i.e. you can't stop in the middle of it and do something else.  Next is Summary level, which is often represented by a cloud or a kite; this level shows greater context and is made up of many different user goals.  Finally, we have the Sub-functions, represented by a fish or a clam.  These are the individual tasks that go to make up a user goal.  So an example might have a user goal (sea level) of "Take a Shower" and the individual tasks could be "Turn on shower", "Set temperature", "Get in shower", "Wash body", "Shampoo hair" etc.

After an initial arrangement of sticky notes, we have our first User Story Map for a morning routine.  Colin then says we can start to look for alternatives.  The body of the map is filled with notes representing details and individual granular tasks; there are also variations and exceptions here, and we'll need to re-organise the map as new details are discovered so that the complete map makes sense.  In a software system, the map becomes the "narrative flow" and is not necessarily in a strict order as some tasks can run in parallel.  Colin suggests using additional stickers or symbols that can be added to the sticky notes to represent which teams will work on which parts of the map.

Colin says it's good to anthropomorphise the back-end systems within the overall software architecture as this helps with conversations and allows non-technical people to better understand how the component parts of the system work together.  So, instead of saying that the web server will communicate with the database server, we could say that Fred will communicate with Bob or that Luke communicates with Leia.  Giving the systems names greatly helps.

We now start to look at the map's "backbone".  These are the high level groups that many individual tasks will fit into.  So, for our morning routine map, we can group tasks such as "Turn off alarm" and "Get out of bed" as a grouping called "Waking up".  We also talk about scope creep.  Colin tells us that, traditionally, more sticky notes being added to a board even once the software has started to be built is usually referred to as scope creep, however, when using techniques such as User Story Mapping, it often just means that your understanding of the overall system that's required is getting better and more refined.

Once we've built our initial User Story Map, it's easy to move individual tasks within a group of tasks in a goal below a horizontal line which we can draw across the whiteboard.  These tasks can then represent a good minimum viable product: we simply move those tasks in a group that we deem to be more valuable, and thus required for the MVP, whilst leaving the "nice to have" tasks in the group on the other side of the line.  In doing this, it's perfectly acceptable to replace a task with a simpler task as a temporary measure, which would then be removed and replaced with the original "proper" task for work beyond MVP.  After deciding upon our MVP tasks, we can simply rinse and repeat the process, taking individual tasks from within groups and allocating them to the next version of the product whilst leaving the less valuable tasks for a future iteration.

Colin says how this process results in producing something called "now maps", as they represent what we have, or where we're at, currently.  What we'll often produce instead are "later maps"; these are maps that represent some aspect of where we want to be in the future.  Now maps are usually produced when you're first trying to understand the existing business processes that will be modelled into software.  From here, you can produce later maps showing the iterations of the software as it will be produced and delivered in the future.  Colin also mentions that we should always be questioning all of the elements of our maps, asking questions such as "Why does X happen?", "What are the pain points around this process?", "What's good about the process?" and "What would make this process better?".  It's by continually asking such questions, refining the actual tasks on the map, and continually reorganising the map that we can ultimately create great software that really adds business value.

Finally, Colin shares some additional resources where we can learn more about User Story Mapping and related processes in general.  He mentions the User Story Mapping book by Jeff Patton along with The Goal by Eli Goldratt, The Phoenix Project by Gene Kim, Kevin Behr and George Spafford and finally, Rolling Rocks Downhill by Clarke Ching.

After Colin's session is over, it's time for a quick coffee break before the next session.  The individual rooms are a little distance away from the main foyer area where the coffee is served, and I realised my next session was in the same room I was already sat in!  Therefore, I decided to simply stay in my seat and await the next session.  This one was to be David Whitney's How Stuff Works...In C# - Metaprogramming 101.

David's talk is going to be all about how some of the fundamental frameworks that we use as .NET developers every day actually work, and how they're full of "metaprogramming".  Throughout his talk, he's going to decompose an MVC (Model View Controller) framework, a unit testing framework and an IoC (Inversion of Control) container framework to show how they work and specifically to examine how they operate on the code that we write that uses and consumes these frameworks.

To start, David explains what "metaprogramming" is.  He shares the Wikipedia definition which, in typical Wikipedia fashion, is somewhat obtuse.  However, the first sentence does sum it up:

"Metaprogramming is a programming technique in which computer programs have the ability to treat programs as their data."

This simply means that metaprograms are programs that operate on other source code; metaprogramming is essentially about writing code that looks at, inspects and works with your own software's source code.

David says that in C#, metaprogramming is mostly done by using classes within the System.Reflection namespace and making heavy use of things such as the Type class therein, which allows us to get all kinds of information about the types, methods and variables that we're going to be working with.  David shows a first trivial example of a meta program, which enumerates the list of types in the current assembly by using a call to the Assembly.GetTypes() method:

using System;
using System.Reflection;

public class BasicReflector
{
	// Returns every type defined in the assembly that contains this class.
	public Type[] ListAllTypesFromSamples()
	{
		return GetType().Assembly.GetTypes();
	}

	// Returns metadata for all public methods on the given type.
	public MethodInfo[] ListMethodsOn<T>()
	{
		return typeof(T).GetMethods();
	}
}

He asks why you would want to do this.  Well, it's because many of the frameworks we use (MVC, unit testing etc.) are essentially based on this ability to perform introspection on the code that you write in order to use them.  We often make extensive use of the Type class in our code, even when we're not necessarily aware that we're doing meta-programming, but the Type class is just one part of a rich "meta-model" for performing reflection and introspection over code.  A meta-model is essentially a "model of your model".  The majority of types within the System.Reflection namespace that provide this meta-model have names ending in "Info", so types such as TypeInfo, MethodInfo, MemberInfo and ConstructorInfo can all be used to give us highly detailed information and data about our code.
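
As a quick illustration of that meta-model (my own minimal sketch rather than anything from David's talk, using StringBuilder purely as an arbitrary example type), the various "Info" types can be queried directly:

using System;
using System.Reflection;
using System.Text;

public static class MetaModelDemo
{
	public static void Main()
	{
		// ConstructorInfo describes each constructor overload of a type.
		foreach (ConstructorInfo ctor in typeof(StringBuilder).GetConstructors())
		{
			Console.WriteLine($"Constructor with {ctor.GetParameters().Length} parameter(s)");
		}

		// MemberInfo is the common base for methods, properties, fields and events.
		foreach (MemberInfo member in typeof(StringBuilder).GetMembers())
		{
			Console.WriteLine($"{member.MemberType}: {member.Name}");
		}
	}
}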

As an example, a unit testing framework at its core is actually trivially simple.  It essentially just finds code and runs it.  It examines your code for classes decorated with a specific attribute (e.g. [TestFixture]) and invokes methods that are decorated with a specific attribute (e.g. [Test]).  David says that one of his favourite coding katas is to write a basic unit testing framework in less than an hour as this is a very good exercise for "Meta Programming 101".

We look at some code for a very simple Unit Testing Framework, and there's really not a lot to it.  Of course, real world unit testing frameworks contain many more "bells-and-whistles", but the basic code shown below performs the core functionality of a simple test runner:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;

namespace ConsoleApp1
{
    public class Program
    {
        public static void Main(string[] args)
        {
            var testFinder = new TestFinder(args);
            var testExecutor = new TestExecutor();
            var testReporter = new TestReporter();
            var allTests = testFinder.FindTests();
            foreach (var test in allTests)
            {
                TestResult result = testExecutor.ExecuteSafely(test);
                testReporter.Report(result);
            }
        }
    }

    public class TestFinder
    {
        private readonly Assembly _testDll;

        public TestFinder(string[] args)
        {
            var assemblyname = AssemblyName.GetAssemblyName(args[0]);
            _testDll = AppDomain.CurrentDomain.Load(assemblyname);
        }

        public List<MethodInfo> FindTests()
        {
            var fixtures = _testDll.GetTypes()
                .Where(x => x.GetCustomAttributes()
                    .Any(c => c.GetType()
                        .Name.StartsWith("TestFixture"))).ToList();
            var allMethods = fixtures.SelectMany(f => 
                f.GetMethods(BindingFlags.Public | BindingFlags.Instance));
            return allMethods.Where(x => x.GetCustomAttributes()
                .Any(m => m.GetType().Name.StartsWith("Test")))
                .ToList();
        }
    }

    public class TestExecutor
    {
        public TestResult ExecuteSafely(MethodInfo test)
        {
            try
            {
                var instance = Activator.CreateInstance(test.DeclaringType);
                test.Invoke(instance, null);
                return TestResult.Pass(test);
            }
            catch (Exception ex)
            {
                return TestResult.Fail(test, ex);
            }
        }
    }

    public class TestReporter
    {
        public void Report(TestResult result)
        {
            Console.Write(result.Exception == null ? "." : "x");
        }
    }

    public class TestResult
    {
        private Exception _exception = null;
        public Exception Exception { get => _exception;
            set => _exception = value;
        }

        public static TestResult Pass(MethodInfo test)
        {
            return new TestResult { Exception = null };
        }

        public static TestResult Fail(MethodInfo test, Exception ex)
        {
            return new TestResult { Exception = ex };
        }
    }
}

David then talks about the ASP.NET MVC framework.  He says that it is a framework that, in essence, just finds and runs user code, which sounds oddly similar to a unit testing framework!  Sure, there's additional functionality within the framework, but at a basic level, the framework simply accepts an HTTP request, finds the user code for the requested URL/route and runs that code (this is the controller action method).  Part of running that code might be the invoking of a view engine (e.g. Razor) to render some HTML which is sent back to the client at the end of the action method.  Therefore, MVC is merely meta-programming which is bound to HTTP.  This is a lot like an ASP.NET HttpHandler and, in fact, the very first version of ASP.NET MVC was little more than one of these.

David asks if we know why MVC was so successful.  It was successful because of Rails.  And why was Rails successful?  Well, because it had sensible defaults.  This approach is the foundation of the often used "convention over configuration" paradigm.  This allows users of the framework to easily "fall into the pit of success" rather than the "pit of failure" and therefore makes learning and working with the framework a pleasurable experience.  David shows some more code here, which is his super simple MVC framework.  Again, it's largely based on using reflection to find and invoke appropriate user code, and is really not at all dissimilar to the Unit Testing code we looked at earlier.  We have a ProcessRequest method:

public void ProcessRequest(HttpContext context)
{
	var controller = PickController(context);
	var method = PickMethod(context, controller);
	var instance = Activator.CreateInstance(controller);
	var response = method.Invoke(instance, null);
	HttpContext.Current.Response.Write(response);
}

This is the method that orchestrates the entire HTTP request/response cycle of MVC.  And the other methods called by the ProcessRequest method use the reflective meta-programming and are very similar to what we've already seen.  Here's the PickController method, which we can see tries to find types whose names both start with a value from the route/URL and also end with "Controller".  We can also see that we use a sensible default of "HomeController" when a suitable controller can't be found:

private Type PickController(HttpContext context)
{
	var url = context.Request.Url;
	Type controller = null;
	var types = AppDomain.CurrentDomain.GetAssemblies()
					.SelectMany(x => x.GetTypes()).ToList();
	controller = types.FirstOrDefault(x => x.Name.EndsWith("Controller")
					&& url.PathAndQuery.StartsWith(x.Name)) 
					?? types.Single(x => x.Name.StartsWith("HomeController"));
	return controller;
}
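
The PickMethod function isn't shown here, but based on the same pattern it would presumably look something like the following sketch (my own guess at an implementation, not David's actual code), picking a public method on the chosen controller whose name appears in the requested URL and falling back to a conventional default action such as "Index":

private MethodInfo PickMethod(HttpContext context, Type controller)
{
	// All public instance methods declared directly on the controller type.
	var methods = controller.GetMethods(BindingFlags.Public | BindingFlags.Instance | BindingFlags.DeclaredOnly);

	// Choose the first method whose name appears in the requested URL,
	// or fall back to a sensible default action.
	return methods.FirstOrDefault(m => context.Request.Url.PathAndQuery.Contains(m.Name))
		?? methods.First(m => m.Name == "Index");
}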

Next, we move on to the deconstruction of an IoC container framework.  An IoC container is again a simple framework that works thanks to meta-programming and reflection.  At its core, it simply stores a dictionary of mappings from interfaces to types, and it exposes a method to register a mapping as well as a method to create an instance of a type for a given interface.  This creation is simply a recursive call, ensuring that all objects down the object hierarchy are constructed by the IoC container using the same logic to find each object's dependencies (if any).  David shows us his own IoC container framework on one of his slides and it's only around 70 lines of code.  It almost fits on a single screen.  Of course, this is a very basic container and doesn't have all the real-world required features such as object lifetime management and scoping, but it does work and performs the basic functionality.  I haven't shown David's code here as it's very similar to the other meta-programming code we've already looked at, but there's a number of examples of simple IoC containers out there on the internet, some written in only 15 lines of code!
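
To give a flavour of the idea, here's my own minimal sketch of such a container (not David's code from the slide): a dictionary of registrations plus a recursive Resolve that builds up constructor dependencies.

using System;
using System.Collections.Generic;
using System.Linq;

public class TinyContainer
{
	// Map from requested (interface) type to concrete implementation type.
	private readonly Dictionary<Type, Type> _registrations = new Dictionary<Type, Type>();

	public void Register<TInterface, TImplementation>() where TImplementation : TInterface
	{
		_registrations[typeof(TInterface)] = typeof(TImplementation);
	}

	public TInterface Resolve<TInterface>()
	{
		return (TInterface)Resolve(typeof(TInterface));
	}

	private object Resolve(Type type)
	{
		// Look up the concrete type, or assume the requested type is already concrete.
		var concrete = _registrations.TryGetValue(type, out var mapped) ? mapped : type;

		// Recursively resolve each parameter of the first public constructor.
		var constructor = concrete.GetConstructors().First();
		var arguments = constructor.GetParameters()
			.Select(p => Resolve(p.ParameterType))
			.ToArray();

		return constructor.Invoke(arguments);
	}
}

Usage is then simply container.Register<IFoo, Foo>(); followed by container.Resolve<IFoo>(), with any constructor dependencies of Foo built up by the same recursive logic.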

After the demos, David talks about how we can actually use the reflection and meta-programming we've seen demonstrated in our own code as we're unlikely to re-write our MVC, Unit Testing or IoC frameworks.  Well, there's a number of ways in which such introspective code can be used.  One example is based upon some functionality for sending emails, a common enough requirement for many applications.  We look at some all too frequently found code that has to branch based upon the type of email that we're going to be sending:

public string SendEmails(string emailType)
{
	var emailMerger = new EmailMerger();
	if (emailType == "Nightly")
	{
		var nightlyExtract = new NightlyEmail();
		var templatePath = "\\Templates\\NightlyTemplate.html";
		return emailMerger.Merge(templatePath, nightlyExtract);
	}
	if (emailType == "Daily")
	{
		var dailyExtract = new DailyEmail();
		var templatePath = "\\Templates\\DailyTemplate.html";
		return emailMerger.Merge(templatePath, dailyExtract);
	}
	throw new NotImplementedException();
}

We can see we're branching conditionally based upon a string that represents the type of email we'll be processing, either a daily email or a nightly one.  However, by using reflective meta-programming, we can change the above code to something much more sophisticated:

public string SendEmails(string emailType)
{
	var strategies = new Email[] {new NightlyEmail(), new DailyEmail()};
	var selected = strategies.First(x => x.GetType().Name.StartsWith(emailType));
	var templatePath = "\\Templates\\" + selected.GetType().Name + ".html";
	return new EmailMerger().Merge(templatePath, selected);
}

Another way of using meta-programming within our own code is to perform automatic registrations for our DI/IoC containers.  We often have hundreds or thousands of lines of manual registration, such as container.Register<IFoo, Foo>();.  We can simplify this by enumerating over all of the interfaces within our assemblies, looking for classes that implement each interface (and that perhaps share the same name, minus the leading "I"), and automatically registering the interface and type with the IoC container.  Of course, care must be taken here as such an approach may actually hide intent and is somewhat less explicit.  In this regard, David says that with the great power available to us via meta-programming comes great responsibility, so we should take care to only use it to "make the obvious thing work, not make the right thing totally un-obvious".
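
A rough sketch of that convention-based registration might look something like the following (my own illustration, assuming a container that exposes a non-generic Register(Type interfaceType, Type implementationType) method, which isn't shown in the talk):

using System;
using System.Linq;
using System.Reflection;

public static class ConventionRegistration
{
	// Register every IFoo -> Foo pair found in the given assembly, matching on the
	// convention that the class name is the interface name without the leading "I".
	public static void RegisterByConvention(Assembly assembly, Action<Type, Type> register)
	{
		var classes = assembly.GetTypes().Where(t => t.IsClass && !t.IsAbstract).ToList();

		foreach (var contract in assembly.GetTypes().Where(t => t.IsInterface))
		{
			var implementation = classes.FirstOrDefault(c =>
				contract.IsAssignableFrom(c) &&
				c.Name == contract.Name.Substring(1));

			if (implementation != null)
			{
				register(contract, implementation);
			}
		}
	}
}

This could then be called once at start-up, e.g. ConventionRegistration.RegisterByConvention(typeof(Program).Assembly, (i, c) => container.Register(i, c));, replacing the long list of manual registrations.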

Finally, perhaps one of the best uses of meta-programming in this way is to help protect code quality.  We can do this by using meta-programming within our unit tests to enforce some property of our code that we care about.  One great example of this is to ensure that all classes within a given namespace have a specific suffix to their name.  Here's a very simple unit test that ensures that all classes in a Factories namespace have the word "Factory" at the end of the class name:

[Test]
public void MakeSureFactoriesHaveTheRightNamingConventions()
{
	var types = AppDomain.CurrentDomain
		.GetAssemblies()
		.SelectMany(a => a.GetTypes())
		.Where(x => x.Namespace == "MyApp.Factories");

	foreach (var type in types)
	{
		Assert.That(type.Name.EndsWith("Factory"));
	}
}

After David's session was over it was time for another quick coffee break.  As I had to change rooms this time, I decided to head back to the main foyer and grab a quick cup of coffee before immediately heading off to find the room for my next session.  This session was James Murphy's A Gentle Introduction To Elm.

James starts by introducing the Elm language.  Elm calls itself a "delightful language for reliable web apps".  It's a purely functional, domain-specific language that transpiles to JavaScript, designed for developing web applications.  Being purely functional allows Elm to make a very bold claim: no run-time exceptions!

James asks "Why use Elm?".  Well, for one thing, it's not JavaScript!  It's also functional, giving it all of the benefits of other functional languages such as immutability and pure functions with no side effects.  Also, as it's a domain-specific language, it's quite small and is therefore relatively easy to pick up and learn.  As it boasts no run-time exceptions, this means that if your Elm code compiles, it'll run and run correctly.

James talks about the Elm architecture and the basic pattern of implementation, which is Model-Update-View.  The Model is the state of your application and its data.  The Update is the mechanism by which the state is updated, and the View is how the state is represented as HTML.  It's this pattern that provides reliability and simplicity to Elm programs.  It's a popular, modern approach to front-end architecture, and the Redux JavaScript framework was directly inspired by the Elm architecture.  A number of companies are already using Elm in production, such as Pivotal, NoRedInk, Prezi and many others.

Here's a simple example Elm file showing the structure using the Model-Update-View pattern.  The pattern should be understandable even if you don't know the Elm syntax:

import Html exposing (Html, button, div, text)
import Html.Events exposing (onClick)

main =
  Html.beginnerProgram { model = 0, view = view, update = update }

type Msg = Increment | Decrement

update msg model =
  case msg of
    Increment ->
      model + 1

    Decrement ->
      model - 1

view model =
  div []
    [ button [ onClick Decrement ] [ text "-" ]
    , div [] [ text (toString model) ]
    , button [ onClick Increment ] [ text "+" ]
    ]

Note that the Elm code is generating the HTML that will be rendered by the browser.  This is very similar to the React framework and how it also performs the rendering for the actual page's markup.  This provides for a strongly-typed code representation of the HTML/web page, thus allowing far greater control and reasoning around the ultimate web page's markup.

You can get started with Elm by visiting the project's home page at elm-lang.org.  Elm can be installed either directly from the website, or via the Node Package Manager (NPM).  After installation, you'll have elm-repl - a REPL for Elm, elm-make - the Elm compiler, elm-package - the Elm package manager, and elm-reactor - the Elm development web server.  One interesting thing to note is that Elm has strong opinions about cleanliness and maintainability, so with that in mind, Elm enforces semantic versioning on all of its packages!

James shows us some sample Elm statements in the Elm REPL.  We see we can use all the standard and expected language elements - numbers, strings, defining functions etc.  We can also use partial application, pipelining and lists/maps, which are common constructs within functional languages.  We then look at the code for a very simple "Hello World" web page, using the Model-Update-View pattern that Elm programs follow.  James is using Visual Studio Code as his code editor here, and he informs us that there's mature tooling available to support Elm within Visual Studio Code.

We expand the "Hello World" page to allow user input via a textbox on the page, printing "Hello" and then the user's input.  Due to the continuous Model-Update-View loop, the resulting page is updated with every key press in the textbox, and this is controlled by the client-side JavaScript that has been transpiled from the Elm functions.  James shows this code running through the Elm Reactor development web server.  On very nice feature of Elm Reactor is that is contains built-in "time-travel" debugging, meaning that we can enumerate through each and every "event" that happens within our webpage. In this case, we can see the events that populate the "Hello <user>" text character-by-character.  Of course, it's possible to only update the Hello display text when the user has finished entering their text and presses the Enter key in the textbox, however, since this involves maintaining state, we have to perform some interesting work in our Elm code to achieve it.

James shows us how Elm can respond to events from the outside world.  He writes a simple function that will respond to system tick events to show an ever-updating current time display on the web page.  James shows how we can work with remote data by defining specific types (unions) that represent the data we'll be consuming, and these types are then added to the Elm model that forms the state/data for the web page.  One important thing to note here is that we need to be able to represent not only the data but also the absence of any data, with specific types that represent the lack of data.  This is, of course, due to Elm being a purely functional language that does not support the concept of null.

The crux of Elm's processing is taking some input (in the form of a model and a message), performing the processing and responding with both a model and a message.  Each Elm file has an "init" section that deals with the input data.  The message that is included in that data can be a function, and could be something that would access a remote endpoint to gather data from a remote source.  This newly acquired data can then be processed in the "Update" section of the processing loop, ultimately for returning as part of the View's model/message output.  James demonstrates this by showing us a very simple API that he's written implementing a simple to-do list.  The API endpoint exposes a JSON response containing a list of to-do items.  We then see how this API endpoint can be called from the Elm code by using a custom defined message that queries the API endpoint and pulls in the various to-do items, processes them and writes that data into the Elm output model, which is ultimately nicely rendered on the web page.

Elm contains a number of rich packages out-of-the-box, such as a HTTP module.  This allows us to perform HTTP requests and responses using most of the available HTTP verbs with ease:

import Json.Decode exposing (list, string)

items : Task Error (List String)
items =
    get (list string) "http://example.com/to-do-items.json"

Or:

corsPost : Request
corsPost =
    { verb = "POST"
    , headers =
        [ ("Origin", "http://elm-lang.org")
        , ("Access-Control-Request-Method", "POST")
        , ("Access-Control-Request-Headers", "X-Custom-Header")
        ]
    , url = "http://example.com/hats"
    , body = empty
    }

It's important to note, however, that not all HTTP verbs are available out-of-the-box and some verbs, such as PATCH, will need to be manually implemented.

James wraps up his session by talking about the further eco-system around the Elm language.  He mentions that Elm has its own testing framework, ElmTest, and that you can very easily achieve a very high amount of code coverage when testing in Elm due to it being a purely functional language.  Also, adoption of Elm doesn't have to be an all-or-nothing proposition.  Since Elm transpiles to JavaScript, it can play very well with existing JavaScript applications.  This means that Elm can be adopted in a piecemeal fashion, with only small sections of a larger JavaScript application being replaced by their Elm equivalent, perhaps to ensure high code coverage or to benefit from improved robustness and reduced possibility of errors.

Finally, James talks about how to deploy Elm applications when using Elm in a real-world production setting.  Most often, Elm deployment is performed using WebPack, a JavaScript module bundler.  This often takes the form of shipping a single small HTML file containing the necessary script inclusions for it to bootstrap the main application.

After James' session was over, it was time for lunch.  All the attendees made their way back to the main foyer area where a delicious lunch of a selection of sandwiches, fruit, crisps and chocolate was available to us.  As is customary at the various DDD events, there were to be a number of grok talks taking place over the lunch period.  As I'd missed the grok talks at the last few DDD events I'd attended, I decided that I'd make sure I caught a few of the talks this time around.

I missed the first few talks as the queue for lunch was quite long and it took a little while to get all attendees served, however, after consuming my lunch in the sunny outdoors, I headed back inside to the large lecture theatre where the grok talks were being held.  I walked in just to catch the last minute of Phil Pursglove's talk on Azure's CosmosDB, which is Microsoft's globally distributed, multi-model database.  Unfortunately, I didn't catch much more than that, so you'll have to follow the link to find out more. (Update:  Phil has kindly provided a link to a video recording of his talk!)

The next grok talk was Robin Minto's OWASP ZAP FTW talk.  Robin introduces us to OWASP, which is the Open Web Application Security Project and exists to help create a safer, more secure web.  Robin then mentions ZAP, which is a security testing tool produced by OWASP.  ZAP is the Zed Attack Proxy and is a vulnerability scanner and intercepting proxy to help detect vulnerabilities in your web application.  Robin shows us a demo application he's built containing deliberate flaws, Bob's Discount Diamonds.  This is running on his local machine.  He then shows us a demo of the OWASP ZAP tool and how it can intercept all of the requests and responses made between the web browser and the web server, analysing those responses for vulnerabilities and weaknesses.  Finally, Robin shows us that the OWASP ZAP software contains a handy "fuzzer" capability which allows it to replay requests using lists of known data or random data - i.e. can replay sending login requests with different usernames/passwords etc.

The next grok talk was an introduction to the GDPR by John Price.  John introduces the GDPR, which is the new EU-wide General Data Protection Regulation and effectively replaces the older Data Protection Act in the UK.  GDPR, in a nutshell, means that users of data (mostly companies who collect a person's data) need to ask permission from the data owner (the person to whom that data belongs) for the data and for what purpose they'll use that data.  Data users have to be able to prove that they have a right to use the data that they've collected.  John tells us that adherence to the GDPR in the UK is not affected by Brexit as it's already enshrined in UK law and has been since April 2016, although it's not really been enforced up to this point.  It will start to be strictly enforced from May 2018 onwards.  We're told that, unlike the previous Data Protection Act, violations of the regulation carry very heavy penalties, of up to 20 million Euros or 4% of a company's turnover.  There will be some exceptions to the regulations, such as police and military, but also exceptions for private companies too, such as a mobile phone network provider giving up a person's data due to an "immediate threat to life".  Some consent can be implied, so for example, entering your car's registration number into a web site for the purposes of getting an insurance quote is implied permission to use the registration number that you've provided, but the restriction is that the data can only be used for the specific purpose for which it was supplied.  GDPR will force companies to declare if data is sent to third parties.  If this happens, the company initially taking the data and each and every third party that receives that data have to inform the owner of the data that they are in possession of it.  GDPR is regulated by the Information Commissioner's Office in the UK.  Finally, John says that the GDPR may make certain businesses redundant.  He gives the example of credit reference agencies.  Their whole business model is built on non-consensual usage of data, so it will be interesting to see how GDPR affects industries like these.

After John's talk, there was a final grok talk; however, I needed a quick restroom break before the main sessions of the afternoon, so I headed off for that before making my way back to the room for the first of the afternoon's sessions.  This was Matt Ellis's How To Parse A File.

Matt starts his session by stating that his talk is all about parsing files, but he immediately says, "But, don't do it!"  He tells us that it's a solved problem and we really shouldn't be writing code to parse files by hand ourselves; we should just use one of the many excellent libraries out there instead.  Matt does discuss why you might decide you really need to parse files yourself.  Perhaps you need better speed and efficiency, or maybe it's to reduce dependencies or to parse highly specific custom formats.  It could even be parsing of things that aren't even files, such as HTTP headers, standard output etc.  From here, Matt mentions that he works for JetBrains and that the introduction of simply parsing a file is a good segue into talking about some of the features that can be found inside many of JetBrains' products.

Matt starts by looking at the architecture of many of JetBrains' IDEs and developer tools such as ReSharper.  They're built with a similar architecture and they all rely on a layer that they call the PSI layer.  The PSI layer is responsible for parsing, lexing and understanding the user code that the tool is working on.  Matt says that he's going to use the Unity framework to show some examples throughout his session and that he's going to attempt to build up a syntax tree for his Unity code.  We first look at a hand-rolled parser; this one attempts to understand the code by examining one character at a time.  It's a very laborious approach and prone to error, so this is an approach to parsing that we shouldn't use.  Matt tells us that the best approach, which has been "solved" many times in the past, is to employ the services of a lexer.  This is a processor that turns the raw code into meaningful tokens based upon the words and vocabulary of the underlying code or language and gives structure to those tokens.  It's from the output of the lexer that we can more easily and robustly perform the parsing.  Lexers are another solved problem, and many lexer generators already exist for popular programming languages, such as lex, CsLex, FsLex, flex, JFlex and many more.

Lexer generators produce source code, but it's not intended to be human-readable code.  It's similar to how .NET language code (C# or VB.NET) is first compiled to Intermediate Language prior to being JIT'ed at runtime.  The code output from the lexer is read by the parser and from there the parser can try to understand the grammar and structure of the underlying code via syntactical analysis.  This often involves the use of regular expressions in order to match specific tokens or sets of tokens.  This works particularly well as regular expressions can be translated into a state machine and from there, translated into a transition table.  Parsers understand the underlying code that they're designed to work on, so for example, a parser for C# would know that in a class declaration, there would be the class name which would be preceded by a token indicating the scope of the class (public, private etc.).  Parsing is not a completely solved problem.  It's more subjective, so although solutions exist, they're more disparate and specific to the code or language that they're used for and, therefore, they're not a generic solution.

Matt tells us how parsing can be done either top-down or bottom-up.  Top-down parsing starts at the highest level construct of the language, for example at a namespace or class level in C#, and it then works its way down to the lower level constructs from there - through methods and then the code and locally scoped variables in those methods.  Bottom-up parsing works the opposite way around, starting with the lower level constructs of the language and working back up to the class or namespace.  Bottom-up parsers can be beneficial over top-down ones as they have the ability to utilise shift-reduce algorithms to simplify code as it's being parsed.  Parsers can even be "parser combinators".  These are parsers built from other, simpler, parsers, where the input to the next parser in the chain is the output from the previous parser in the chain - more formally known as recursive-descent parsing.  .NET's LINQ acts in a similar way to this.  Matt tells us about FParsec, a parser combinator library for F#, along with a C# parser combinator library called Sprache, itself relying heavily on LINQ:

Parser<string> identifier =
    from leading in Parse.WhiteSpace.Many()
    from first in Parse.Letter.Once()
    from rest in Parse.LetterOrDigit.Many()
    from trailing in Parse.WhiteSpace.Many()
    select new string(first.Concat(rest).ToArray());

var id = identifier.Parse(" abc123  ");

Assert.AreEqual("abc123", id);

Matt continues by asking us to consider how parsers deal with whitespace in a language.  This is not always as easy as it sounds, as some languages, such as F# or Python, use whitespace to give semantic meaning to their code, whilst other languages, such as C#, use whitespace purely for aesthetic purposes.  In dealing with whitespace, we often make use of a filtering lexer.  This is a simple lexer that specifically detects and removes whitespace tokens prior to parsing.  The difficulty then is that, for languages where whitespace is significant, we need to replace the removed whitespace after parsing.  This, again, can be tricky as the parsing may alter the actual code (e.g. in the case of a refactoring), so we must again be able to understand the grammar of the language in order to re-insert whitespace into the correct places.  This is often accomplished by building something known as a Concrete Parse Tree as opposed to the more usual Abstract Syntax Tree.  Concrete Parse Trees work in a similar way to a C# expression tree, breaking down code into a hierarchical graph of individual code elements.
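
To make the filtering lexer idea a little more concrete, here's my own rough sketch (nothing to do with JetBrains' actual PSI code) of a tiny lexer that tokenises some text and a filter that strips the whitespace tokens before they ever reach a parser:

using System;
using System.Collections.Generic;
using System.Linq;

public enum TokenKind { Number, Identifier, Symbol, Whitespace }

public sealed class Token
{
	public TokenKind Kind { get; }
	public string Text { get; }
	public Token(TokenKind kind, string text) { Kind = kind; Text = text; }
}

public static class TinyLexer
{
	// Turn raw text into a flat stream of tokens, including whitespace tokens.
	public static IEnumerable<Token> Lex(string source)
	{
		int i = 0;
		while (i < source.Length)
		{
			char c = source[i];
			TokenKind kind;
			Func<char, bool> continues;

			if (char.IsWhiteSpace(c)) { kind = TokenKind.Whitespace; continues = char.IsWhiteSpace; }
			else if (char.IsDigit(c)) { kind = TokenKind.Number; continues = char.IsDigit; }
			else if (char.IsLetter(c)) { kind = TokenKind.Identifier; continues = char.IsLetterOrDigit; }
			else { kind = TokenKind.Symbol; continues = _ => false; }

			int start = i;
			do { i++; } while (i < source.Length && continues(source[i]));

			yield return new Token(kind, source.Substring(start, i - start));
		}
	}

	// The "filtering" part: drop whitespace tokens so the parser never sees them.
	public static IEnumerable<Token> FilterWhitespace(IEnumerable<Token> tokens) =>
		tokens.Where(t => t.Kind != TokenKind.Whitespace);
}

Lexing "foo = bar1 + 2" and filtering would yield the tokens foo, =, bar1, + and 2; a whitespace-significant language would instead need to keep those whitespace tokens (or their positions) so they can be re-inserted after parsing, which is exactly the problem described above.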

Matt tells us about other uses for lexers, such as the ability to disambiguate specific declarations in the language.  For example, in F#, typing 2. would represent a floating point number, whereas typing [2..0] would represent a range.  When the user is only halfway through typing, how can we know if they require a floating point number or a range?  There are also such things as comments within comments, for example: /* This /* is */ valid */  This is something that lexers can be good at, as such matching is difficult or impossible with regular expressions.

The programs that use lexers and parsers can often have very different requirements, too.  Compilers using them will generally want to compile the code, and so they'll work on the basis that the program code that they're lexing/parsing is assumed correct, whilst IDE's will take the exact opposite approach.  After all, most of the time whilst we're typing, our code is in an invalid state.  For those programs that assume the code is in an invalid state most of the time, they often use techniques such as error detection and recovery.  This is, for example, to prevent your entire C# class from being highlighted as invalid within the IDE just because the closing brace character is missing from the class declaration.  They perform error detection on the missing closing brace, but halt highlighting of the error at the first "valid" block of code immediately after the matching opening brace.  This is how the Visual Studio IDE is able to only highlight the missing closing brace as invalid and not the entire file full of otherwise valid code.  In order for this to be performant, lexers in such programs will make heavy use of caching to prevent having to continually lex the entire file with every keystroke.

Finally, Matt talks about how JetBrains often need to also deal with "composable languages".  These are things like ASP.NET MVC's Razor files, which are predominantly composed of HTML mark-up but which can also contain "islands" of C# code.  For this, we take a similar approach to the whitespace handling, in that the file is lexed for both languages, HTML and C#, and the HTML is temporarily set aside whilst the C# code is parsed and possibly altered.  The lexed tokens from the C# and the preserved HTML are then re-combined after the parsing to re-create the file.

After Matt's session, there was one final break before the last session of the day.  Since there was, unfortunately, no coffee served at this final break, I made my way directly to the room for my next and final session, Joe Stead's .NET Core In The Real World.

Joe starts his talk by announcing that there are no code samples or demos in his talk, and that it will really just be about his own personal experience of attempting to migrate a legacy application to .NET Core.  He says that he contemplated doing a simple "Hello World" style demo for getting started with .NET Core, but that it would give a false sense of .NET Core being simple.  In the real world, and when migrating an older application, it's a bit more complicated than that.

Joe mentions .NET Standard and reminds us that it's a different thing from .NET Core.  .NET Core adheres to .NET Standard, and Joe tells us that .NET Standard is really just akin to Portable Class Libraries version 2.0.

Joe introduces the project that he currently works on at his place of employment.  It's a system that started life in 2002 and was originally built with a combination of Windows Forms applications and ASP.NET Web Forms web pages, sprinkled with Microsoft AJAX JavaScript.  The system was in need of being upgraded in terms of the technologies used, and so in 2012, they migrated to KnockoutJS for the front-end websites, and in 2013, to further aid the transition to KnockoutJS, they adopted the NancyFX framework to handle the web requests.  Improvements in the system continued, and by 2014 they had started to support the Mono Framework and had moved from Microsoft SQL Server to a PostgreSQL database.  This last lot of technology adoptions was to support the growing demand for a Linux version of their application from their user base.  The adoptions didn't come without issues, however, and by late 2014, they started to experience serious segfaults in their application.  After some diagnosis, during which they never did fully get to the bottom of the root cause of the segfaults, they decided to adopt Docker in 2015 as a means of mitigating the segfault problem: if one container started to display problems associated with segfaults, they could kill the container instance and create a new one.  By this point, it was 2015 and they decided that they'd now start to look into .NET Core.  It was only in beta at this time, but they were looking for a better platform than Mono that might provide some much-needed stability and consistency across operating systems.  And since they were on such a roll with changing their technology stacks, they decided to move to Angular2 on the front-end, replacing KnockoutJS, in 2016 as well!

By 2017, they'd adopted .NET Core v1.1 along with RabbitMQ and Kubernetes.  Joe states that the reason for .NET Core adoption was to move away from Mono.  By this point, they were not only targeting Mono, but a custom build of Mono that they'd had to fork in order to try to fix their segfault issues.  They needed much more flexible deployments such as the ability to package and deploy multiple versions of their application using multiple different versions of the underlying .NET platform on the same machine.  This was problematic in Mono, as it can be in the "full" .NET Framework, but one of the benefits of .NET Core is the ability to package the run-time with your application, allowing true side-by-side versions of the run-time to exist for different applications on the same machine.

Joe talks about some of the issues encountered when adopting and migrating to .NET Core.  The first issue was missing APIs.  .NET Core 1.0 and 1.1 were built against .NET Standard 1.x and so many APIs and namespaces were completely missing.  Joe also found that many NuGet packages that his solution was dependent upon were not yet ported across to .NET Core.  Joe recalls that testing of the .NET Core version of the solution was a particular challenge as few other people had adopted the platform and the general response from Microsoft themselves was that "it's coming in version 2.0!".  What really helped save the day for Joe and his team was that .NET Core itself and many of the NuGet packages were open source.  This allowed them to fork many of the projects that the NuGet packages were derived from and help with transitioning them to support .NET Core.  Joe's company even employed a third party to work full-time on helping to port NancyFX to .NET Core.

Joe now talks about the tooling around .NET Core in the early days of the project.  We examine how Microsoft introduced a whole new project file structure, moving away from the XML representation in the .csproj files to a JSON representation with project.json.  Joe explains how they had to move their build script and build tooling to the FAKE build tool as a result of the introduction of project.json.  There were also legal issues around using the .NET Core debugger assemblies in tools other than Microsoft's own IDEs, something that JetBrains' Rider IDE struggled with.  We then look at tooling in the modern world of .NET Core: project.json has gone away, and Microsoft has reverted back to .csproj files, although they're much simplified and improved.  This allows the use of MSBuild again; however, FAKE itself now has native support for .NET Core.  The dotnet CLI tool has improved greatly and the legal issues around the use of the .NET Core debugging assemblies have been resolved, allowing third-party IDEs such as JetBrains' Rider to use them again.

Joe also mentions how .NET Core now, with the introduction of version 2.0, is much better than the Mono Framework when it comes to targeting multiple run-times.  He also mentions issues that plagued their use of libcurl on the Mac platform when using .NET Core 1.x, but these have now been resolved in .NET Core 2.0, as it now uses the native macOS implementation rather than trying to abstract that away and use its own implementation.

Joe moves on to discuss something that's not really specific to .NET Core, but is a concern when developing code to be run on multiple platforms.  He shows us the following two lines of code:

TimeZoneInfo.FindSystemTimeZoneById("Eastern Standard Time");
TimeZoneInfo.FindSystemTimeZoneById("America/New_York");

He asks which is the "correct" one to use.   Well, it turns out that they're both correct.  And possibly incorrect!   The top line works on Windows, and only on Windows, whilst the bottom line works on Linux, and not on Windows.  It's therefore incredibly important to understand such differences when targeting multiple platforms with your application.  Joe also says how, as a result of discrepancies such as the timezone issue, the tooling can often lie.  He recalls a debugging session where one debugging window would show the value of a variable with one particular date time value, and another debugging window - in the exact same debug session - would interpret and display the same variable with an entirely different date time value.  Luckily, most of these issues are now largely resolved with the stability that's come from recent versions of .NET Core and the tooling around it.

In wrapping up, Joe says that, despite the issues they encountered, moving to .NET Core was the right thing for him and his company.  He does say, though, that for other organisations, such a migration may not be the right decision.  Each company, and each application, needs to be evaluated for migration to .NET Core on its own merits.  For Joe's company, the move to .NET Core allowed them to focus attention elsewhere after migration.  They've since been able to adopt Kubernetes, and they've been able to improve and refactor code to implement better testing and many more long overdue improvements.  In the last month, they've migrated again from .NET Core 1.1 to .NET Core 2.0, which was a relatively easy task after the initial .NET Core migration; this one only involved the upgrading of a few NuGet packages and that was it.  The move to .NET Core 2.0 also allowed them to re-instate lots of code and functionality that had been temporarily removed, thanks to the vastly increased API surface area of .NET Core 2.0 (really, .NET Standard 2.0).

After Joe's session, it was time for all the attendees to gather in the main foyer area of the university building for the final wrap-up and prize draws.  After thanks to the sponsors, the venue, and the organisers and volunteers, without whom of course, events such as DDD simply wouldn't be able to take place, we moved onto the prize draw.  Unfortunately, I wasn't a winner, however, the day had been brilliant.

Another wonderful DDD event had been and gone, but a great day was had by all.  We were told that the next DDD event was to be DDD Dublin, held sometime around March 2018.  So there's always that to look forward to.

DDD North 2016 In Review

On Saturday, 1st October 2016 at the University of Leeds, the 6th annual DDD North event was held.  After a great event last year at the University of Sunderland in the North East, this year’s event was held in Leeds, as it is now customary for the event to alternate between the two locations each year.

After arriving and collecting my badge, it was a short walk to the communal area for some tea and coffee to start the day.  Unfortunately, there were no bacon butties or Danish pastries this time around, but I’d had a hearty breakfast before setting off on the journey to Leeds anyway.

The first session of the day was Pete Smith’s “The Three Problems with Software Development”.  Pete starts by talking about Conway’s Game of Life and how this game is similar to how software development often works, producing complex behaviours from simple building blocks.  Pete says how his talk will examine some “heuristics” for software development, a sort of “series of steps” for software development best practice.

Firstly, we look at the three problems themselves.  Problem Number 1 is about dividing and breaking down software components.  Pete tells us that this isn’t just code or software components themselves, but can also relate to people and teams and how they are “broken down”.  Problem Number 2 is how to choose effective tools, processes and approaches to your software development and Problem Number 3 is effective communication.

Looking at problem number 1 in more detail, Pete talks about “reasons for change”.  He says that we should always endeavour to keep things together that need to change together.  He shows an example of two simple web pages showing lists of teachers and of students.  The ASP.NET MVC view’s mark-up for both of these views is almost identical.  As developers we’d be very tempted to abstract this into a single MVC view and only alter, using variables, the parts that differ between teachers and students, however, Pete suggests that this is not a good approach.  Fundamentally, teachers and students are not the same thing, so even if the MVC views are almost identical and we have some amount of repetition, it’s good to keep them separate – for example, if we need to add specific abilities to only one of teachers or students, having two separate views makes that much easier.

Next we look at how we can best identify reasons for change.  We should look at what parts of an application get deployed together, and we should also look at the domain and the terminology used – are two different domain entities referred to by the same name?  Or are there two different names for the same entity?  We should consider the “ripple effect” of change – when something changes, what else has to change?  Finally, the main thing to examine is logic vs intent.  Logic is the code and behaviour and can (and should) be refactored and reused, however, intent should never be reused or refactored (in the previous example, the teachers and students were “intents” as they represent two entirely different things within the domain).

In looking at Problem Number 2 in more detail, Pete says that we should promote good change practices.  We should reduce coupling at all layers in the application and the entire software development process, but don’t over-abstract.  We need strong test coverage to support this within the software itself.  Not necessarily 100% test coverage, but a good suite of robust tests.  Pete says that in large organisations we should try to align teams with the reasons for change, however, in smaller organisations, this isn’t something that you’d need to worry about too much as the team will be much smaller anyway.

Next, Pete makes the strong suggestion that MVC controllers that do very little - something generally considered to be a good thing - are “considered harmful”!  What he really means is that blanket advice is considered harmful – controllers should, generally, do as little as they need to, but they can be larger if they have good reasons for it.  When we’re making choices, it’s important to remain pragmatic.  Don’t forget about the trade-offs and don’t get taken in by the “new shiny” things.  Most importantly, when receiving advice, always remember the context of the advice.  Use the right tool for the job and always read differing viewpoints for any subject to gain a more rounded understanding of the problem.  Do test the limits of the approaches you take, learn from your mistakes and always focus on providing value.

In examining Problem Number 3, Pete talks about communication and how it’s often impaired due to the choice of language we use in software development.  He talks about using the same names and terminology for essentially different things.  For example, in the context of ASP.NET MVC, we have the notion of a “controller”, however, Angular also has the notion of a “controller” and they’re not the same thing.  Pete also states how terminology like “serverless architecture” is a misnomer as it’s not serverless and how “devops”, “agile” etc. mean different things to different people!  We should always say what we mean and mean what we say! 

Pete talks about how code is communication.  Code is read far more often than it’s written, so code should be optimised for reading.  Pete looks at some forms of communication and states that things like face-to-face conversation, pair programming and even, perhaps, instant messaging are often better forms of communication than things like once-a-day stand-ups and email.  This is because the best forms of communication offer instant feedback.  To improve our code’s communication, we should eliminate implicit knowledge – such as not refactoring those teacher and student views into one view.  A new programmer would expect to be able to find something like a TeacherList.cshtml file within the solution.  Doing this helps to improve discovery, enabling people new to a codebase to get up to speed more quickly.  Finally, Pete repeats his important point of focusing on refactoring the “logic” of the application and not the “intent”.

Most importantly, the best thing we can do to communicate better is to simply listen.  By listening more intently, we ensure that we have the correct information that we need and we can learn from the knowledge and experience of others.

IMG_20161001_103012After Pete’s talk it was time to head back to the communal area for more refreshments.  Tea, coffee, water and cans of coke were all available.  After suitable further watering, it was time to head back to the conference rooms for the next session.  This one was John Stovin’s “Thinking Functionally”.

John’s talk was held in one of the smaller rooms, which was also one of the rooms located farthest away from the communal area.  After the short walk to the room, I made it there with only a few seconds to spare prior to the start of the talk, and it was standing room only!

John starts his talk by mentioning how the leap from OO (object-oriented) programming to functional programming is similar to the leap from procedural programming to OO itself.  It’s a big paradigm shift!  John mentions how most of today’s imperative languages are designed to closely mimic the way the computer itself processes machine code in the “von Neumann” style – that is to say, programs are really just a big series of steps with conditions and branches along the way.  Functional programming attempts to break free from this by expressing programs as pure functions – a series of functions, similar to mathematical functions, that take an input and produce an output.

John mentions how, when writing functional programs, it’s important to try your best to keep your functions “pure”.  This means that the function should have no side-effects.  For example, a function that writes something to the console is not pure, since the side-effect is the output on the console window.  John states that even throwing an exception from a function is a side-effect in itself!
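To make that concrete in C# terms (this is my own sketch, not John’s code), the difference between a pure and an impure function might look like this:

using System;

static class Purity
{
    // Pure: the result depends only on the inputs and there are no
    // observable side-effects.
    public static int Add(int x, int y) => x + y;

    // Impure: writing to the console is a side-effect - and, as John
    // points out, so is throwing an exception.
    public static int AddAndLog(int x, int y)
    {
        var result = x + y;
        Console.WriteLine($"Adding {x} and {y} gave {result}");
        return result;
    }
}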

IMG_20161001_104700

We should also endeavour to always keep our data immutable.  This means that we never assign a new value to a variable once it has been initialized with a value – it’s a single assignment: write once, but read many.  This helps us to reason about our data better, as it improves readability and guarantees thread-safety of the data.  To change data in a functional program, we should perform an atomic “copy-and-modify” operation which creates a copy of the data, but with our own changes applied.

In F#, variables are immutable by default, and F# forces you to use a qualifier keyword, mutable, in order to make a variable mutable.  In C#, however, we’re not so lucky.  We can “fake” this, though, by wrapping our data in a type (class) – e.g. a Money type – only accepting values in the type’s constructor and ensuring all properties are either read-only or at least have a private setter.  Class methods that perform some operation on the data should return a whole new instance of the type.
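A minimal sketch of the kind of C# type John describes might look something like the following (my own illustrative code, not John’s):

using System;

public sealed class Money
{
    public decimal Amount { get; }
    public string Currency { get; }

    public Money(decimal amount, string currency)
    {
        if (string.IsNullOrWhiteSpace(currency))
            throw new ArgumentException("A currency is required", nameof(currency));

        Amount = amount;
        Currency = currency;
    }

    // Operations return a brand new instance rather than mutating this one.
    public Money Add(Money other)
    {
        if (other.Currency != Currency)
            throw new InvalidOperationException("Cannot add different currencies");

        return new Money(Amount + other.Amount, Currency);
    }
}

Any “change” to the amount produces a new Money instance rather than mutating the original, which is exactly the write-once, read-many idea described above.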

We move on to examine how functional programming eradicates nulls.  Variables have to be assigned a value at declaration and, since we can’t reassign values thanks to immutability, we can’t create a null reference.  We’re stuck with nulls in C#, but we can alleviate that somewhat via such techniques as the Null Object pattern, or even the use of an Option&lt;T&gt; type.  John continues, saying that types are fundamental to F#.  It has real tuples and records – which are “multiplicative” types and are effectively aggregates of other existing types, created by “multiplying” those existing types together – and also discriminated unions, which are “additive” types created by “summing” other existing types together.  For example, a tuple combines other types – it can contain two (or more) other types which are (e.g.) string and int – whilst a discriminated union, as an “additive” type, acts as the sum of its constituent types, so a discriminated union of an int and a boolean can represent any of the possible values of an int or any of the possible values of a boolean (the set of its possible values is the sum of both).
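There’s no Option&lt;T&gt; in the C# base class library, but a very cut-down sketch of the idea (real implementations are rather more complete) might be:

using System;

// A value is either present (Some) or absent (None), forcing callers to
// deal with the "missing" case explicitly rather than risking a
// NullReferenceException.
public struct Option<T>
{
    private readonly T value;
    public bool HasValue { get; }

    private Option(T value)
    {
        this.value = value;
        HasValue = true;
    }

    public static Option<T> Some(T value) => new Option<T>(value);
    public static Option<T> None => default(Option<T>);

    public TResult Match<TResult>(Func<T, TResult> some, Func<TResult> none) =>
        HasValue ? some(value) : none();
}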

John continues with how far too much C# code is written using granular primitive types and that in F#, we’re encouraged to make all of our code based on types.  So, for example, a monetary amount shouldn’t be written as simply a variable of type decimal or float, but should be wrapped in a strong Money type, which can enforce certain constraints around how that type is used.  This is possible in C# and is something we should all try to do more of.  John then shows us some F# code declaring an F# discriminated union:

type Shape =
| Rectangle of float * float
| Circle of float

He states how this is similar to the inheritance we know in C#, but it’s not quite the same.  It’s more like set theory for types!

IMG_20161001_111727John continues by discussing pattern matching.  He says how this is much richer in F# than the roughly equivalent if() or switch() statements in C#, as pattern matching can match based upon the general “shape” of the type.  We’re told how functional programming also favours recursion over loops.  F#’s compiler supports tail recursion, where the compiler can re-write a function to pass additional (accumulator) parameters on a recursive call, negating the need to continually add values to the stack and helping to prevent stack overflow problems.  Loops are problematic in functional programming as we need a variable for the loop counter, which is traditionally re-assigned with every iteration of the loop – something that we can’t do in F# due to variable immutability.
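C# doesn’t guarantee tail-call elimination in the way the F# compiler can, but the accumulator-passing shape John describes looks roughly like this (my own sketch):

using System.Collections.Generic;
using System.Linq;

static class Sums
{
    // Naive recursion: each call still has an addition to perform after the
    // recursive call returns, so the pending work builds up on the stack.
    public static int Sum(IReadOnlyList<int> items) =>
        items.Count == 0 ? 0 : items[0] + Sum(items.Skip(1).ToList());

    // Accumulator-passing style: the running total travels down with each
    // call, so no work remains after the recursive call - which is exactly
    // what lets a compiler like F#'s rewrite it as a loop.
    public static int Sum(IReadOnlyList<int> items, int accumulator) =>
        items.Count == 0
            ? accumulator
            : Sum(items.Skip(1).ToList(), accumulator + items[0]);
}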

We continue by looking at lists and sequences.  These are very well used data structures in functional programming.  Lists are recursive structures: a list is either empty or a “head” element with another list attached to it.  We iterate over the list by taking the “head” element with each pass – kind of like popping values off a stack.  Next we look at higher-order functions.  These are simply functions that take another function as a parameter, so, for example, virtually all of the LINQ extension methods found in C# are higher-order functions (i.e. .Where(), .Select() etc.) as these functions take a lambda function to act as a predicate or projection.  F# has List and Seq, and the primary built-in functions for working with these are filter and map – also higher-order functions.  Filter takes a predicate to filter a list and map takes a function that transforms each list element from one type to another.
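In C# terms, that just means the familiar LINQ style (a trivial sketch of my own):

using System;
using System.Collections.Generic;
using System.Linq;

class Program
{
    static void Main()
    {
        var numbers = new List<int> { 1, 2, 3, 4, 5, 6 };

        // Where and Select are higher-order functions: each takes another
        // function (here, a lambda) as its parameter.
        var evens = numbers.Where(n => n % 2 == 0);        // ~ F#'s filter
        var labels = evens.Select(n => $"Number {n}");     // ~ F#'s map

        Console.WriteLine(string.Join(", ", labels));
    }
}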

John goes on to mention Reactive Extensions for C# which is a library for composing asynchronous and event-based programs using observable sequences and LINQ-style query operators.  These operators are also higher-order functions and are very “functional” in their architecture.  The Reactive Extensions (Rx) allow composability over events and are great for both UI code and processing data streams.

IMG_20161001_113323John then moves on to discuss railway-oriented programming.  This is a concept whereby functions accept and return a Result&lt;TSuccess, TFailure&gt; type which “contains” either a success value or a failure value.  Functions are then composable based upon the types returned, and the execution path through the code can switch tracks based upon the outcome of prior functions.
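A bare-bones C# flavour of the Result type John describes might look like the sketch below (illustrative only – F# implementations typically use a discriminated union and a bind operator instead):

using System;

// Either a success carrying a value, or a failure carrying an error.
public class Result<TSuccess, TFailure>
{
    public bool IsSuccess { get; }
    public TSuccess Value { get; }
    public TFailure Error { get; }

    private Result(bool isSuccess, TSuccess value, TFailure error)
    {
        IsSuccess = isSuccess;
        Value = value;
        Error = error;
    }

    public static Result<TSuccess, TFailure> Success(TSuccess value) =>
        new Result<TSuccess, TFailure>(true, value, default(TFailure));

    public static Result<TSuccess, TFailure> Failure(TFailure error) =>
        new Result<TSuccess, TFailure>(false, default(TSuccess), error);

    // "Then" keeps us on the success track if all is well, and short-circuits
    // straight to the failure track otherwise - the railway switch.
    public Result<TNext, TFailure> Then<TNext>(Func<TSuccess, Result<TNext, TFailure>> next) =>
        IsSuccess ? next(Value) : Result<TNext, TFailure>.Failure(Error);
}

Functions that each return a Result can then be chained together with Then, and the first failure flows straight to the end without any of the later steps running.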

Using such techniques as Railway-oriented programming, along with the other inherent features of F#, such as a lack of null values and immutability means that frequently programs are far easier to reason about in F# than the equivalent program written in C#.  This is especially true for multi-threaded programs.

Finally, John recaps by stating that functional languages give a level of abstraction above the von Neumann architecture of the underlying machine.  This is perhaps one of the major reasons that functional programming is gaining ground in recent years, as machines are now powerful enough to allow this (previously, old-school LISP programs – LISP being one of the very first functional languages, originally designed back in 1958 – often needed purpose-built machines to run sufficiently well).  John recommends a few resources for further reading – one is the F# for Fun and Profit website.

After John’s session, it was time for a further break and additional refreshment.  Since John’s session had been in a small room, one which was farthest away from the communal area where the refreshments were, and given that my next session was still in this very same conference room, I decided that I’d stay where I was and await the next session, which was Matteo Emili’s “I Read The Phoenix Project And I Loved It. Now What?”

IMG_20161001_115558

Matteo’s session was all about introducing a “devops” culture into somewhere that doesn’t yet have such a culture.  The Phoenix Project is a development “novel” which tells the story of doing just such a thing.  Matteo starts by mentioning The Phoenix Project book and how it’s a great book.  I must concur that the book is very good, having read it myself only a few weeks before attending DDD North.  Matteo then asks that, if we’ve read the book and would like to implement its ideas in our own places of work, we should be very careful.  It’s not so simple, and you can’t change an entire company overnight, but you can start to make small steps towards the end goal.

There are three critical concepts that cause failure and a breakdown in an effective devops culture.  They are bottlenecks, lack of communication and boundaries between departments.  In order to start with the introduction of a devops culture, you need to start “out-of-band”.  This means you’ll need to do something yourself, without the backing of your team, in order to prove a specific hypothesis.  Only when you’re sure something will work should you then introduce the idea to the team.

Starting with bottlenecks, the best way to eliminate them is to automate everything that can be automated.  This reduces human error, is entirely repeatable, and importantly frees up time and people for other, more important, tasks.  Matteo reminds us that we can’t change what we can’t measure and in the loop of “build-measure-learn”, the most important aspect is measure.  We measure by gathering metrics on our automations and our process using logging and telemetry and it’s only from these metrics will we know whether we’re really heading in the right direction and what is really “broken” or needs improvement.  We should gather insights from our users as well by utilising such tools and software as Google Analytics, New Relic, Splunk & HockeyApp for example.  Doing this leads to evidence based management allowing you to use real world numbers to drive change.

IMG_20161001_120736

Matteo explains that resource utilisation is key.  Don’t bring a whole new change management process out of the blue.  Use small changes that generate big wins and this is frequently done “out-of-band”.  One simple thing that can be done to help break down boundaries between areas of the company is a company-wide “stand up”.  Do this once a week, and limit it to 1-2 minutes per functional area.  This greatly improves communication and helps areas understand each other better.  The implementation of automation and the eradication of boundaries form the basis of the road to continuous delivery. 

We should ensure that our applications are properly packaged to allow such automation, and MSDeploy is one tool to help enable this.  It’s not a new technology, but it’s seeing a modern resurgence as it can be heavily utilised with Azure.  Use an infrastructure-as-code approach: virtual machines, servers, network topology etc. should all be scripted and version controlled, as this allows automation.  This is fairly easy to achieve with cloud-based infrastructure by using Azure Resource Manager (ARM) templates, or by using CloudFormation with Amazon Web Services.  Some options for achieving the same thing with on-premise infrastructure are Chef, Puppet or even PowerShell Desired State Configuration.  Databases are often neglected in DevOps scenarios; however, by using version control, performing small, incremental changes to database infrastructure and using packages (such as SQL Server’s DACPAC files), Database Lifecycle Management can be moved into a DevOps/continuous delivery environment.

This brings us to testing.  We should use test suites to ensure our scripts and automation are correct, and we must remember the golden rule: if something is going to fail, it must fail fast.  Automated and manual testing should be used to ensure this.  Accountability is important, so tests are critical to the product, and remediation (recovery from failure) should be something that is also automated.

Matteo summarises: start by changing people first, then change the processes, and the tools will follow.  Remember: automation, automation, automation!  Finally, tackle the broader technical side and blend individual competencies to the real-world requirements of the teams and the overall business.

IMG_20161001_083329After Matteo’s session, it was time for lunch.  All of the attendees reconvened in the communal area where we were treated to a selection of sandwiches and packets of crisps.  After selecting my lunch, I found a vacant spot in the corner of the rather small communal area (which easily filled to capacity once all of the different sessions had finished and all of the conference’s attendees descended on the same space) to eat it.  Since the lunch break was 1.5 hours and I’d eaten my lunch within the first 20 minutes, I decided to step outside to grab some fresh air.  It was at this point I remembered a rather excellent little pub just 2 minutes’ walk down the road from the university venue hosting the conference.  Well, never one to pass up the opportunity of a nice pint of real ale, I headed off down the road to The Pack Horse.

IMG_20161001_133433Once inside, I treated myself to a lovely pint of Laguna Seca from a local brewery, Burley Street Brewhouse, and settled down in the quiet pub to enjoy my pint and reflect on the morning’s sessions.  During the lunch break, there are usually some grok talks being held – 10-15 minute “lightning” talks which attendees can watch whilst they enjoy their lunch.  Since DDD North was held very close to the previous DDD Reading event (only a matter of a few weeks apart), and since the organisers were largely the same for both events, I had heard that the grok talks would be largely the same as those that had taken place, and which I’d already seen, at DDD Reading only a matter of weeks prior.  Due to this, I decided the pub was a more attractive option over the lunchtime break!

After slowly drinking and savouring my pint, it was time to head back to the university’s mechanical engineering department and to the afternoon sessions of DDD North 2016.

The afternoon’s first session was, luckily, in one of the “main” lecture halls of the venue, so I didn’t have too far to travel to take my seat for Bart Read’s “How To Speed Up .NET & SQL Server Apps”.

Bart’s session is all about performance: the performance of our application’s code and the performance of the databases that underlie our application.  Bart starts by introducing himself and states that, amongst other things, he was previously an employee of Red Gate, who make quite a number of SQL Server tools, so paying close attention to performance monitoring is something that Bart has done for much of his career.

IMG_20161001_142359He states that we need to start with measurement.  Without this, we can’t possibly know where issues are occurring within our application.  Perhaps surprisingly, Bart says that when you start to measure a database-driven application, many of the worst areas are not within the application code itself but are almost always down in the database layer.  This may be an errant query or a general lack of helpful database additions (better indexes etc.).

Bart mentions the tools that he himself uses as part of his general “toolbox” for performance analysis of an application stack.  ANTS Memory Profiler from Red Gate will help analyse memory consumption issues; dotMemory from JetBrains is another good choice in the same area.  ANTS Performance Profiler from Red Gate will help analyse the performance of .NET code and monitor its CPU consumption; again, JetBrains have dotTrace in the same space.  There’s also the lesser-known .NET Memory Profiler, which is a good option.  For network monitoring, Bart uses Wireshark.  For general testing tools, Bart recommends BlazeMeter (for load testing) and Neustar.

Bart also stresses the importance of the ongoing usage of production monitoring tools.  Services such as New Relic, AppDynamics etc. can provide ongoing metrics for your running application when it’s live in production and are invaluable to understand exactly how your application is behaving in a production environment.

arithabortBart shares a very handy tip regarding the use of SQL Server Management Studio for general debugging of SQL Server queries.  He states that we should always UNCHECK the SET ARITHABORT option inside SSMS’s options menu.  ARITHABORT controls whether SQL Server aborts a query that hits an arithmetic overflow or divide-by-zero error; more importantly for debugging, though, SSMS turns the setting on by default whilst ADO.NET connections typically leave it off, and because SET options form part of the execution plan cache key, the two can end up using different cached plans.  Unchecking the option makes SSMS behave like your application, giving you a much clearer picture of what the query is actually doing (and how long it really takes to run).

From here, Bart shares with us four different real-world performance scenarios that he has been involved in, how he went about diagnosing the performance issues and how he fixed them.

The first scenario involved helping a client’s customer support team who were struggling as it was taking them 40 seconds to retrieve a customer’s details from their system whilst on a support phone call.  The architecture of the application was an ASP.NET MVC web application in C#, using NHibernate to talk to two different SQL Server instances – one a primary server, the other a linked server.

Bart started by using ANTS Performance Profiler on the web layer and was able to highlight “hotspots” of slow running code, precisely in the area where the application was calling out to the database.  From here, Bart could see that one of the SQL queries was taking 9 seconds to complete.  After capturing the exact SQL statement that was being sent to the database, it was time to fire up SSMS and use SQL Server Profiler in order to run that SQL statement and gain further insight into why it was taking so long to run.

IMG_20161001_144719After some analysis, Bart discovered that there was a database View on the primary SQL Server that was pulling data from a table on the linked server.  Further, there was no filtering on the data pulled from the linked server, only filtering on the final result set after multiple tables of data had been combined.  This meant that the entire table’s data from the linked server was being pulled across the network to the primary server before any filtering was applied, even though not all of the data was required (the filtering discarded most of it).  To resolve the problem, Bart added a simple WHERE clause to the data that was being selected from the linked server’s table and the execution time of the query went from 9 seconds to only 100 milliseconds!

Bart moves on to tell us about the second scenario.   This one had a very similar application architecture as the first scenario, but the problem here was a creeping increase in memory usage of the application over time.  As the memory increased, so the performance of the application decreased and this was due to the .NET garbage collector having to examine more and more memory in order to determine which objects to garbage collect.  This examination of memory takes time.  For this scenario, Bart used ANTS Memory Profiler to show specific objects that were leaking memory.  After some analysis, he found it was down to a DI (dependency injection) container (in this case, Windsor) having an incorrect lifecycle setting for objects that it created and thus these objects were not cleaned up as efficiently as they should have been.  The resolution was to simply configure the DI container to correctly dispose of unneeded objects and the excessive memory consumption disappeared.
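Bart doesn’t go into the exact registration code, but in Castle Windsor terms the lifestyle is chosen when a component is registered, so a sketch of the kind of fix involved might look like this (IReportBuilder/ReportBuilder are hypothetical names for illustration):

using Castle.MicroKernel.Registration;
using Castle.Windsor;

public interface IReportBuilder { }
public class ReportBuilder : IReportBuilder { }

public static class ContainerSetup
{
    public static IWindsorContainer Build()
    {
        var container = new WindsorContainer();

        // Windsor's default lifestyle is Singleton, so a component that
        // captures per-request state but is left on the default lifestyle
        // hangs around (and keeps its references alive) for the lifetime of
        // the application - one easy way to leak memory over time.
        container.Register(
            Component.For<IReportBuilder>()
                     .ImplementedBy<ReportBuilder>()
                     .LifestyleTransient());   // or a per-web-request lifestyle in a web app

        return container;
    }
}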

IMG_20161001_150655From here, we move onto the third scenario.  This was a multi-tenanted application where each customer had their own database.  It was an ASP.NET web application but used a custom ADO layer written in C++ to access the database.  Bart spares us the details, but tells us that the problem was ultimately down to locking, blocking and deadlocking in the database.  Bart uses this to remind us of the various levels of lock granularity in SQL Server – there’s object-level locking and row-level locking – and when many sessions are trying to read a row that’s concurrently being written to, blocking and deadlocks can occur.  There are many different solutions available for this, and one such solution is to use the READ COMMITTED SNAPSHOT isolation level on the database.  This uses TempDB to help “scale” the demands against the database, so it’s important that TempDB is stored on a fast storage medium (a fast SSD drive, for example).  The best solution is a more disciplined ordering of object access, usually implemented with a Unit of Work pattern, but Bart tells us that this is difficult to achieve with SQL Server.

Finally, Bart tells us all about scenario number four.  The fundamental problem with this scenario was networking, and more specifically it was network latency that was killing the application’s performance.  The application architecture here was not a problem: the application was running on virtual machines on VMware vSphere with lots and lots of CPU and memory to spare, and the SQL Server was running on bare metal to ensure performance of the database layer.  Bart noticed that the problem manifested itself when certain queries were run.  Most of the time, the query would complete in less than 100ms, but occasionally spikes of 500-600ms could be seen when running the exact same query.  To diagnose this issue, Bart used Wireshark on both ends of the network – that is to say, on the application server where the query originated and on the database server where the data was stored – however, as soon as Wireshark was attached to the network, the performance problem disappeared!

This ultimately turned out to be an incorrect setting on the virtual NIC.  Bart could see that the SQL Server was sending results back to the client in only 1ms, yet it took a full 500ms for the results to be received when measured from the client (application) side of the network link.  It was disabling the “receive side coalescing” setting that fixed the problem; Wireshark itself temporarily disables this setting, hence the problem disappearing when Wireshark was attached.

IMG_20161001_152003Bart finally tells us that whilst he’s mostly a server-side performance guy, he’s made some general observations about dealing with client-side performance problems.  These are generally down to size of payload, chattiness of the client-side code, garbage collection in JavaScript code and the execution speed of JavaScript code.  He also reminds us that most performance problems in database-driven applications are usually found at the database layer, and can often be fixed with simple things like adding more relevant indexes, adding stored procedures and utilising efficient cached execution plans.

After Bart’s session, it was time for a final refreshment break before the final session of the day.  For me, the final session was Gary McLean Hall’s “DDD: the God That Failed”.

Gary starts his session by acknowledging that the title is a little clickbait-ish, as his talk started life as a blog post he had previously written.  His talk is all about Domain Driven Design (DDD) and how he implemented DDD when he was working within the games industry.  Gary mentions that he’s the author of the book “Adaptive Code via C#” and that, when he was working in the games industry, he worked on the Championship Manager 2008 game.

Gary’s usage of DDD in game development started when there was a split between two companies involved in the Championship Manager series of games.  In the fallout of the split, one company kept the rights to the name, and the other company kept the codebase!  Gary was with the company that had the name but no code, and they needed to re-create the game, which had previously been many years in development, in a very compressed timescale of only 12 months.

IMG_20161001_155048Gary starts with a definition of DDD: it is modelling for complicated domains.  Gary is keen to stress the word “complicated”.  Therefore, we need to be able to identify what exactly is a complicated domain.  In order to help with this, it’s often best to create a “DDD Maturity Model” for the domain in which we’re working.  This is a series of topics which can be further expanded upon with the specifics for each topic (if any) within our domain.  The topics are:

The Domain
Domain Entity Behaviour
Decoupled Domain
Aggregate Roots
Domain Events
CQRS
Bounded Contexts
Polyglotism

By examining the topics in the list above and determining the details for those topics within our own domain, we can evaluate our domain and its relative complexity, and thus its suitability to be modelled using DDD.

IMG_20161001_155454Gary continues by showing us a typical structure of a Visual Studio solution that purports to follow the Domain Driven Design pattern.  He states that he sees many solutions configured this way, but it’s not really DDD and usually represents a very anaemic domain.  Anaemic domain models are collections of classes that are usually nothing more than properties with getters and setters, but little to no behaviour.  This type of model is considered an anti-pattern as it offers very low cohesion and high coupling.

If you’re working with such a domain model, you can start to fix things.  Looking for areas of the domain that can benefit from better types rather than using primitive types is a good start.  A classic example of this is a class to represent money.  Having a “money” class allows better control over the scale of the values you’re dealing with and can also encompass currency information as well.  This is preferable to simply passing values around the domain as decimals or ints.

Commonly, in the type of anaemic domain model described above, there are repositories associated with the entity models within the domain – usually a single repository per entity model.  This is also considered an anti-pattern, as most entities within the domain will be heavily related and thus should be persisted together in the same transaction.  Of course, the persistence of the entity data should be abstracted from the domain model itself.

Gary then touches upon an interesting subject: the decoupling within a DDD solution.  Our ASP.NET views have view models, our domain has its domain models and the persistence (data) layer has its own data models.  One frequent piece of code plumbing that’s required here is extensive mapping between the various models throughout the layers of the application.  In this regard, Gary suggests reading Mark Seemann’s article, “Is layering worth the mapping?”  In this article, Mark suggests that the best way to avoid having to perform extensive mapping is to move less data around between the layers of our application.  This can sometimes be accomplished but, depending upon the nature of the application, it can be difficult to achieve.

IMG_20161001_160741_1So, looking back at the “repository-per-entity” model again, we’re reminded that it’s usually the wrong approach.  In order to determine the repositories of our domain, we need to examine the domain’s “aggregate roots”.  An aggregate root is the top-level object that “contains” other child objects within the domain.  So, for example, a class representing a Customer could be an aggregate root.  Here, the customer would have zero, one or more Order classes as children, and each Order class could have one or more OrderItems as children, with each OrderItem linking out to a Product class.  It’s also possible that the Product class could be considered an aggregate root of the domain too, as the product could be the “root” object that is retrieved within the domain, and the various order items across multiple orders for many different customers could be retrieved as part of the product’s object graph.
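As a rough C# sketch of that shape (illustrative names only, not Gary’s code), the Customer aggregate and its single repository might look something like this:

using System;
using System.Collections.Generic;

public class Customer                          // the aggregate root
{
    private readonly List<Order> orders = new List<Order>();

    public Guid Id { get; }
    public IReadOnlyCollection<Order> Orders => orders;

    public Customer(Guid id) { Id = id; }

    // Behaviour lives on the aggregate; children are modified via the root.
    public Order PlaceOrder()
    {
        var order = new Order(Guid.NewGuid());
        orders.Add(order);
        return order;
    }
}

public class Order
{
    private readonly List<OrderItem> items = new List<OrderItem>();

    public Guid Id { get; }
    public IReadOnlyCollection<OrderItem> Items => items;

    public Order(Guid id) { Id = id; }

    public void AddItem(Guid productId, int quantity) =>
        items.Add(new OrderItem(productId, quantity));
}

public class OrderItem
{
    public Guid ProductId { get; }
    public int Quantity { get; }

    public OrderItem(Guid productId, int quantity)
    {
        ProductId = productId;
        Quantity = quantity;
    }
}

// One repository per aggregate root, not per entity: the whole object graph
// is loaded and saved together, in the same transaction.
public interface ICustomerRepository
{
    Customer GetById(Guid id);
    void Save(Customer customer);
}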

To help determine the aggregate roots within our domain, we first need to examine and determine the bounded contexts.  A bounded context is a conceptually related set of objects within the domain that work together and make sense for some of the domain’s behaviours.  For example, the Customer, Order, OrderItem and Product classes above could be considered part of a “Sales” context within the domain.  It’s important to note that a single domain entity can exist in more than one bounded context, and it’s frequently the case that the actual objects within code that represent that domain entity can be entirely different classes from one bounded context to the next.  For example, within the Sales bounded context, it’s possible that only a small subset of the product data is required, therefore the Product class within the Sales bounded context has a lot less properties/data than the Product class in a different bounded context – for example, there could be a “Catalogue” context, with the Product entity as its aggregate root, but this Product object is different from the previous one and contains significantly more properties/data.

IMG_20161001_161509The number of different bounded contexts you have within your domain determines the domain’s breadth.  The size of each bounded context (i.e. the number of related objects within it) determines the domain’s depth.  The depth of a given bounded context is a good indicator of the importance of that area of the domain to the users of the application.

Bounded contexts and the aggregate roots within them will need to communicate with one another in order for behaviour within the domain to be implemented.  It’s important to ensure that aggregate roots, and especially bounded contexts, are not coupled to each other, so communication is performed using domain events.  A domain event is an event raised by one aggregate root or bounded context’s entity and broadcast to the rest of the domain.  Entities within other bounded contexts or aggregate roots subscribe to the domain events that they’re interested in, in order to respond to actions and behaviour in other areas of the domain.  Domain events in a .NET application are frequently modelled using the built-in events and delegates functionality of the .NET framework, although there are other options available, such as the Reactive Extensions library, as well as specific implementation patterns.
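In its simplest .NET form, that can be nothing more than an event on the aggregate (hypothetical names again, just to illustrate the idea):

using System;

public class OrderPlacedEventArgs : EventArgs
{
    public Guid OrderId { get; }
    public OrderPlacedEventArgs(Guid orderId) { OrderId = orderId; }
}

public class SalesOrder
{
    public Guid Id { get; } = Guid.NewGuid();

    // Other bounded contexts subscribe to this event rather than being
    // directly coupled to the Sales context.
    public event EventHandler<OrderPlacedEventArgs> OrderPlaced;

    public void Place()
    {
        // ...the domain behaviour for placing the order goes here...
        OrderPlaced?.Invoke(this, new OrderPlacedEventArgs(Id));
    }
}

A handler in, say, a despatch-related context then simply subscribes with order.OrderPlaced += (sender, e) => { /* react to the event */ }; and the Sales context knows nothing about it.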

IMG_20161001_161830

One difficult area of most applications, and somewhere the “pure” DDD model may break down slightly, is search.  Many applications require the ability to search across data within the domain, and search is frequently a cross-cutting concern as the result data returned can be small amounts of data from many different aggregates and bounded contexts in one amalgamated data set.  One approach that can be used to mitigate this is the CQRS (Command Query Responsibility Segregation) pattern.

Essentially, this pattern states that the models and code that we use to read data do not necessarily have to be the same models and code that we use to write data.  In fact, most of the time, they should be different.  In the case of a search across disparate data within the DDD-modelled domain, it’s absolutely fine to forego the strict DDD model and to create a specific “view” – this could be a database stored procedure or a database view – that retrieves exactly the cross-cutting data that you need.  Doing this avoids using the DDD model to create and hydrate entire aggregate root object graphs (possibly across multiple different bounded contexts), which could be a very expensive operation given that most of the retrieved data wouldn’t be required.
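In code, the split can be as simple as keeping the query side well away from the aggregates; a sketch (the names and the flattened result are invented for illustration):

using System.Collections.Generic;

// Read side: a flat DTO shaped for the search screen, populated directly
// from a database view or stored procedure - no aggregate roots are hydrated.
public class ProductSearchResult
{
    public string ProductName { get; set; }
    public string CustomerName { get; set; }
    public int QuantityOrdered { get; set; }
}

// The query interface sits alongside, but entirely separate from, the
// repositories that the write side (the commands) uses to load and save
// full aggregates.
public interface IProductSearchQueries
{
    IReadOnlyList<ProductSearchResult> Search(string searchTerm);
}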

Gary reminds us that DDD aggregates can still be painful when using a relational database as the persistence store, due to the impedance mismatch between the domain models in code and the tables within the database.  It’s worth examining document databases or graph databases as the persistence store, as these can often be a better choice.

Finally, we learn that DDD is frequently not justified in applications that are largely CRUD based or for applications that make very extensive use of data queries and reports (especially with custom result sets).  Therefore, DDD is mostly appropriate for those applications that have to model a genuinely complex domain with specific and complex domain objects and behaviours and where a DDD approach can deliver real value.

IMG_20161001_165949After Gary’s session was over, it was time for all of the attendees to gather in the largest of the conference rooms for the final wrap-up.  There were only a few prize give-aways on this occasion, and after those were awarded to the lucky attendees who had their feedback forms drawn at random, it was time to thank the great sponsors of the event, without whom there simply wouldn’t be a DDD North.

I’d had a great time at yet another fantastic DDD event, and am already looking forward to the next one!

DDD North 2015 In Review

IMG_20151024_082240

On Saturday 24th October 2015, DDD North held its 5th annual Developer Developer Developer event.  This time the event was held in the North-East, at the University of Sunderland.

As is customary for me now, I had arrived the evening before the event and stayed with family in nearby Newcastle-Upon-Tyne.  This allowed me to get to the University of Sunderland bright and early for registration on the morning of the event.

IMG_20151024_083559 After checking in and receiving my badge, I proceeded to the most important area of the communal reception area, the tea and coffee urns!  After grabbing a cup of coffee and waiting patiently whilst further attendees arrived, there was soon a shout that breakfast was ready.  Once again, DDD North and the University of Sunderland provided us all with a lovely breakfast baguette, with a choice of either bacon or sausage.  

After enjoying my bacon baguette and washing it down with a second cup of coffee, it was soon time for the first session of the day. The first session slot was a tricky one, as all of the five tracks of sessions appealed to me, however, I could only attend one session, so decided somewhat at the last minute it would be Rik Hepworth’s The ART of Modern Azure Deployments.

IMG_20151024_093119 The main thrust of Rik’s session is to explain Azure Resource Templates (ART).  Rik says he’s going to explain the What, the Why and the How of ART’s.  Rik first reminds us that every resource in Azure (from virtual networks, to storage accounts, to complete virtual machines) is handled by the Azure Resource Manager.  This manager can be used and made to perform the creation of resources in an ad-hoc manner using numerous fairly arcane PowerShell commands, however, for repeatability in creating entire environments of Azure resources, we need Azure Resource Templates.

Rik first explains the What of ARTs.  They’re quite simply JSON documents that conform to the required ART schema.  They can be split into multiple files: one supplies the “questions” (i.e. the template of the required resource – say, a virtual network) and the other file supplies the “answers” to fill in the blanks of the question file (i.e. the parameterised IP address range of the required virtual network).  They are idempotent too, which means that the templates can be run against the Azure Resource Manager multiple times without fear of creating more resources than are required or destroying resources that already exist.

Rik proceeds with the Why of ARTs.  Firstly, since they’re just JSON documents and text files, they can be version controlled.  This fits in very nicely with the “DevOps” culture of “configuration as code”, managed and controlled in the same way as our application source code.  And being JSON documents, they’re much easier to write, use and maintain than large and cumbersome PowerShell scripts composed of many commands with difficult-to-remember parameters.  Furthermore, Rik tells us that, eventually, Azure Resource Templates will be the only way to manage and configure complete environments of resources within Azure.

Finally, we talk about the How of ARTs.  They can be composed with Visual Studio 2013/2015; the only other tooling required is the Azure SDK and PowerShell.  Rik does mention some caveats here, as the Azure Resource API – against which the ARTs run – is currently moving and changing at a very fast pace.  As a result, there are frequent updates to both the Azure SDK and the version of PowerShell needed to support the latest Azure Resource API version.  It’s important to ensure you keep this tooling up-to-date and in sync in order for it all to work correctly.

Rik goes on to talk about how monitoring of running the resource templates has improved vastly.  We can now monitor the progress of a running (or previously run) template file from portal.azure.com and resource.azure.com, which is the Resource Manager in the Azure portal.  This shows the complete JSON of the final templates, which may have consisted of a number of “question” and “answer” files that are subsequently merged together to form the final file of configuration data.  From here, we can also inspect each of the individual various resources that have been created as part of running the template, for example, virtual machines etc.

Rik then mentions something called DSC – Desired State Configuration.  This is now an engineering requirement for all Microsoft products that will be cloud-based.  DSC effectively means that the “product” can be entirely configured by declarative means such as scripts, command-line commands and parameters, etc.  Everything can be set and configured this way without needing to resort to any GUI.

IMG_20151024_095414 Rik talks about how to start creating your own templates.  He says the best place to start is probably the Azure Quickstart Templates that are available from a GitHub repository.  They contain both very simple templates to ease you into getting started with something simple, but also contain some quite complex templates which will help you should you need to create a template to deploy a complete environment of numerous resources.  Rik also mentions that next year will see the release of something called the “Azure Stack” which will make it even easier to create scripts and templates that will automate the creation and management of your entire IT infrastructure, both in the cloud and on-premise, too.

As well as supporting basic parameterization of values within an Azure Resource Template, you can also define entire sections of JSON that define a complete resource (i.e. an entire virtual machine complete with an instance of SQL Server running on it).  This JSON document can then be referenced from within other ART files, allowing individual resources to be scripted once and reused many times.  As part of this, Azure resources support many different types of extensions for extending state configuration into other products.  For example, there is an extension that allows an Azure VM to be created with an Octopus Deploy tentacle pre-installed, as well as an extension that allows a Chef client to be pre-installed on the VM, for example.

Rik shows us a sample layout of a basic Azure Resource Template project within Visual Studio.  It consists of 3 folders, Scripts, Templates and Tools.  There's a blank template in the template folder and this defines the basic "shape" of the template document.  To get started within a simple template for example, a Windows VM needs a Storage account (which can be an existing one, or can create new) and a Virtual Network before the VM can be created.

We can use the GUI tooling within Visual Studio to create the basic JSON document with the correct properties, but can then manually tweak the values required in order to script our resource creation.  This is often the best way to get started.  Once we’ve used the GUI tooling to generate the basics of the template, we can then remove duplication by "collapsing" lots of the properties and extracting them into separate files to be included within the main template script.  (i.e. deploy location is repeated for each and every VM.  If we’re deploying multiple VMs, we can remove this duplication by extracting into a separate file that is referenced by each VM).

One thing to remember when running and deploying ARTs, Rik warns us, is that the default lifetime of an Azure access token is only 1 hour.  Azure access tokens are required by the template in order to prove authorisation for creating Azure resources.  However, when an ART is deploying a complete environment consisting of numerous resources, this can be a time-consuming process – often taking a few hours.  For this reason, it’s best to extend the lifetime of the Azure access tokens, at least during development of the templates, otherwise the tokens will expire during the running of the template, making the resource creation fail.

Rik wraps up with a summary and opens the floor to questions.  One question posed is whether existing Azure resources can be “reverse-engineered” into ART scripts.  Rik states that so long as the existing resources are v2 resources (that have been created with Azure Resource Manager) then you can turn those resources into templates, BUT if the existing resources are v1 (also known as Classic resources, created using the older Azure Service Management) they can’t be reverse-engineered into templates.

IMG_20151024_105058 After a short coffee break back in the main communal area, it’s time for the second session of the day.  For this session, I decided to go with Gary Short’s Deep Dive into Deep Learning.

Gary’s session was all about the field of data science and of things like neural networks and deep learning.   Gary starts by asking who knows what Neural Networks are and asks what Deep Learning is and the difference between them.  Not very many people know the difference here, but Gary assures us that by the end of his talk, we all will.

Gary tells us that his talk’s agenda is about looking at Neural Networks, being the first real mechanism by which “deep learning” was first implemented, but how today’s “deep learning” has improved on the early Neural Networks.  We first learn that the phrase “deep learning” is itself far too broad to really mean anything!

So, what is a Neural Network?  It’s a “thing” in data science: a statistical learning model that can be used to estimate functions that depend on a large number of inputs.  Well, that’s a rather dry explanation, so Gary gives us an example: the correlation between temperature over the summer months and ice cream sales over the summer months.  We could use a Neural Network to predict the ice cream sales based upon the temperature variance.  This is, of course, a very simplistic example, and we can simply guess ourselves that as the temperature rises, ice cream sales would predictably rise too.  It’s a simplistic example as there’s exactly one input and exactly one output, so it’s very easy for us to reason about the outcome without really relying upon a Neural Network.  However, in a more realistic example using “big data”, we’d likely have hundreds if not many thousands of inputs for which we wish to find a meaningful output.

Gary proceeds to explain that a Neural Network is really a weighted directed graph.  This is a graph of nodes and the connections between those nodes.  The connections are in a specific direction, from one node to another, and that same node can have a connection back to the originating node.  Each connection has a “weight”, or a probability.  In the diagram to the left we can see that node A has a connection to node E and also a separate connection to node F.  The “weight” of the connection to node F is 0.9 whilst the weight of the connection to node E is 0.1.  This means there’s a 10% chance that a message or data coming from node A will be directed to node E and a 90% chance that a message coming from node A will be directed to node F.  The overall combination of nodes and the connections between them gives us the Neural Network.
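A crude way of representing that structure in code (nothing to do with Gary’s demo, purely to make the “weighted directed graph” idea concrete) might be:

using System;
using System.Collections.Generic;

public class Connection
{
    public Node Target { get; }
    public double Weight { get; }   // the probability of following this connection

    public Connection(Node target, double weight)
    {
        Target = target;
        Weight = weight;
    }
}

public class Node
{
    public string Name { get; }
    public List<Connection> Connections { get; } = new List<Connection>();

    public Node(string name) { Name = name; }

    // Pick the next node at random, respecting the connection weights
    // (this assumes the outgoing weights sum to 1.0, as in the 0.1/0.9 example).
    public Node Next(Random random)
    {
        var roll = random.NextDouble();
        var cumulative = 0.0;
        foreach (var connection in Connections)
        {
            cumulative += connection.Weight;
            if (roll <= cumulative) return connection.Target;
        }
        return null;   // no outgoing connections
    }
}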

Gary tells us how Neural Networks are not new: they were invented in 1943 by two mathematicians, Warren McCulloch and Walter Pitts.  Back then, they weren’t referred to as Neural Networks, but were known as “threshold logic”.  Later, in the late 1940s, Donald Hebb created a hypothesis based on “neural plasticity”, which is the ability of a Neural Network to “heal itself” around “injuries” or bad connections between nodes.  This is now known as Hebbian learning.  In 1958, mathematicians Farley and Wesley A. Clark used a calculator to simulate a Hebbian machine at MIT (Massachusetts Institute of Technology).

So, just how did today’s “deep learning” improve upon Neural Networks?  Well, Neural Networks originally had two key limitations.  Firstly, they couldn’t process exclusive-or (XOR) logic in a single-layer network, and secondly, computers (or rather calculators) simply weren’t powerful enough to perform the extensive processing required.  Eventually, in 1975, a mathematician named Werbos discovered “back propagation” – the backwards propagation of error states, allowing originating nodes to learn of errors further down a processing chain and perform corrective measures (self-learning) to mitigate further errors.  Back propagation helped to solve the XOR problem.  It was only through the passage of a large amount of time, though, that yesterday’s calculators became today’s computers – which got ever more powerful with every passing year – and allowed Neural Networks to come into their own.  So, although people in academia were teaching the concepts of Neural Networks, they weren’t really being used, with alternative learning mechanisms such as Support Vector Machines (SVMs) – which could work with the level of computing power available at the time – being preferred instead.  With the advent of more powerful computers, however, Neural Networks really started to take off after the year 2000.

So, as Neural Networks started to get used, another limitation was found: it took a long time to “train” the model with the input data.  Gary tells us of a Neural Network in the USA, used by the USPS (United States Postal Service), that was designed to help recognise hand-written zip codes.  Whilst this model was effective at its job, it took 3 full days to train the model with input data!  This had to be repeated continually as new “styles” of hand-writing needed to be recognised by the Neural Network.

Gary continues by telling us that by the year 2006, the phrase “deep learning” had started to take off.  This arose out of the work of two mathematicians, Geoffrey Hinton and Ruslan Salakhutdinov, which showed that many-layered, feed-forward Neural Networks could be trained far more effectively, thus reducing the time required to train the network.  So really, “deep learning” is just modern-day Neural Networks, but ones that have been vastly improved over the original inventions of the 1940s.

Gary talks about generative models and stochastic models.  Generative models will “generate” things in a random way, whilst stochastic models will generate things in an unpredictable way – very often this is the very same thing.  It’s this random unpredictability that exists in the problem of voice recognition.  This has been a largely “solved” problem since around 2010, and it’s given rise to Apple’s Siri, Google’s Google Now and, most recently and apparently most advanced, Microsoft’s Cortana.

At this point, Gary shows us a demo of some code that will categorise iris plants based upon a diverse dataset of a number of different criteria.  The demo is implemented using the F# language; however, Gary states that the “go to” language for data science is R.  Gary says that whilst it’s powerful, it’s not a very nice language, and this is primarily put down to the fact that whilst languages like C, C#, F# etc. are designed by computer scientists, R is designed by mathematicians.  Gary’s demo can use F# as it has a “type provider” mechanism which allows it to “wrap”, and therefore largely abstract away, the R language.  The R type provider can be downloaded from NuGet, and you’ll also need the FsLab NuGet package.

IMG_20151024_113640 Gary explains that the categorisation of irises is the canonical example for data science categorisation.  He shows the raw data and how the untrained system initially thinks that there are three classifications of irises when we know there’s only really two.  Gary then explains that, in order to train our Neural Network to better understand our data, we need to start by “predicting the past”.  This is simply what it says: for example, by looking at the past results of (say) football matches, we can use that data to help predict future results.

Gary continues and shows how, after “predicting the past” and using the resulting data to train the Neural Network, we can once again examine the original data.  The graph this time correctly shows only two different categorisations of irises.  Looking closer at the results, we can see that, of a data set that contains numerous metrics for 45 different iris plants, our Neural Network was able to correctly classify 43 of the 45 irises, with only two failures.  Looking into the specific failures, we see that they couldn’t be classified because their data sat very close to the boundary between the two different classifications.  Gary says we could probably “fine tune” our Neural Network by looking further into the data and could well eradicate the two classification failures.

IMG_20151024_104709 After Gary’s session, it’s time for another tea and coffee break in the communal area, after which it’s time for the 3rd and final session before lunch.  There had been a couple of last-minute cancellations of sessions due to speaker ill health, and one of those was unfortunately the one I had wanted to attend in this particular time slot, Stephen Turner’s “Be Reactive, Think Reactive”.  This session was rescheduled with Robert Hogg delivering a presentation on Enterprise IoT (Internet of Things); however, the session I decided to attend was Peter Shaw’s Microservice Architecture, What It Is, Why It Matters And How To Implement It In .NET.

Peter starts his presentation with a look at the talk’s agenda.   He’s going to define what Microservices are and their benefits and drawbacks.  He’ll explain how, within the .NET world, OWIN and Katana help us with building Microservices, and finally he is going to show a demo of some code that uses OWIN running on top of IIS7 to implement a Microservice.

IMG_20151024_120837 Firstly, Peter tells us that Microservices are not a software design pattern, they’re an architectural pattern.  They represent a 100-foot view of your entire application, not the 10-foot view, and moreover, Microservices provide a set of guidelines for deployment of your project.

Peter then talks about monolithic codebases and how we scale them by duplicating entire systems.  This can be wasteful when you only need to scale up one particular module, as you’ll end up duplicating far more than you need.  Microservices are about being able to scale only what you need, but you need to find the right balance of how much to break down the application into its constituent modules or discrete chunks of functionality.  Break it down too much and you’ll get nano-services – a common anti-pattern – and will then have far too much complexity in managing too many small parts.  Break it down too little, and you’re not left with Microservices; you’ve still got a largely monolithic application, albeit a slightly smaller one.

Next, Peter talks about how Microservices communicate to each other.  He states how there’s two schools of thought to approaching the communication problem.  One school of thought is to use an ESB (Enterprise Service Bus).  The benefits of using an ESB are that it’s a robust communications channel for all of the Microservices, however, a drawback is that it’s also a single point of failure.  The other school of thought is to use simple RESTful/HTTP communications directly between the various Microservices.  This removes the single point of failure, but does add the overhead of requiring the ability of each service to be able to “discover” other services (their availability and location for example) on the network.  This usually involves an additional tool, something like Consul, for example.

Some of the benefits of adopting a Microservices architecture are that software development teams can be formed around each individual service.  These would be full teams with developers, project managers etc., rather than having specific technical silos within one large team.  Other benefits are that applications become far more flexible and modular and can be composed and changed easily by simply swapping out one Microservice for another.

Some of the drawbacks of Microservices are that they have a potentially higher maintenance cost as your application will often be deployed across different and more expansive platforms/servers.  Other drawbacks are the potential for “data islands” to form.  This is where your application’s data becomes disjointed and more distributed due to the nature of the architecture.  Furthermore, Microservices, if they are to be successful, will require extensive monitoring.  Monitoring of every available metric of the applications and the communications between them is essential to enable effective support of the application as a whole.

After this, Peter moves on to show us some demo code, built using OWIN and NancyFX.  OWIN is the Open Web Interface for .NET and is an open standard for decoupling .NET web applications from the underlying web server that powers the application.  Peter tells us that Microsoft’s own implementation of the OWIN standard is called Katana.  NancyFX is a lightweight web framework for .NET which can run on top of OWIN, thus decoupling the Nancy code from the underlying web server (i.e. there are no direct references to HttpContext or other such objects).

Peter shows us how simple some of Nancy’s code can be:

public dynamic Something()
{
    var result = GetSomeData();
    // An int return value is treated by Nancy as an HTTP status code;
    // otherwise the data is serialised and returned as JSON.
    return result == null ? 404 : (dynamic)Response.AsJson(result);
}

The last line of the code is the most interesting.  Since the method returns a dynamic type, returning an integer that has the same value as an HTTP status code will be inferred by the Nancy framework to mean that that status code should be returned from the method!

Peter shows us some more code, most of which is very simple and tells us that the complete demo example is available to download from GitHub.

IMG_20151024_130929 After Peter’s talk wrapped up, it was time for lunch.  Lunch at the DDD events is usually a “brown bag” affair with a sandwich, crisps, some fruit and/or chocolate.  The catering at DDD North, and especially at the University of Sunderland, is always excellent and this year was no exception.   There was a large selection of various combinations of crisp flavours, chocolate bars and fruit along with a large selection of very nice sandwiches, including some of the more “basic” sandwich fillings for fusspots like me!  I don’t like mayonnaise, so pre-packed sandwiches are usually a tricky proposition.  This year, though, they had “plain” cheese and ham sandwiches with no additional condiments, which was excellent for me.

The excellent food was accompanied by a drink. I opted for water.  After collecting my lunch, I went off to find somewhere to sit and eat as well as somewhere that would be fairly close to a power point as I needed to charge my laptop.

IMG_20151024_131246 I duly found a spot that allowed me to eat my lunch, charge my laptop and look out of the window onto the River Wear on what was a very nice day outside in sunny Sunderland!

IMG_20151024_131609 After fairly quickly eating my lunch, it was time for some lunchtime Grok Talks.  These are the 15-minute, usually fairly informal talks that often take place over the lunch hour at many conferences of this type, and especially at DDD conferences.  During the last few DDDs that I'd attended, I'd missed most of the Grok Talks for various reasons, but today, having already consumed my delicious lunch, I decided that I'd try to take them in.

By the time I’d reached the auditorium for the Grok Talks, I’d missed the first few minutes of the talk by Jeff Johnson all about Microsoft Azure and the role of Cloud Solution Architect at Microsoft.

Jeff first describes what Azure is, and explains that it’s Microsoft’s cloud platform offering numerous services and resources to individuals and companies of all sizes to be able to host their entire IT infrastructure – should they so choose – in the cloud. 

IMG_20151024_133830 Next, Jeff shows us some impressive statistics on how Azure has grown in only a few short years.  He says that the biggest problem Microsoft faces with Azure right now is that they simply can't scale their infrastructure quickly enough to keep up with demand.  And it's a demand that continues to grow at a very fast rate.  He says that Microsoft's expenditure on expanding and growing Azure is around 5-6 billion dollars per annum, and that Azure has a very large number of users even today.

Jeff proceeds by talking about the role of Cloud Solutions Architect within Microsoft.  He explains that the role involves working very closely with customers, or more accurately potential customers to help find projects within the customers’ inventory that can be migrated to the cloud for either increased scalability, general improvement of the application, or to make the application more cost effective.  Customers are not charged for the services of a Cloud Solutions Architect, and the Cloud Solutions Architects themselves seek out and identify potential customers to see if they can be brought onboard with Azure.

Finally, Jeff talks about life at Microsoft.  He states how Microsoft in the UK has a number of “hubs”, one each in Edinburgh, Manchester and London, but that Microsoft UK employees can live anywhere.  They’ll use the “hub” only occasionally, and will often work remotely, either from home or from a customer’s site.

After Jeff's talk, we had Peter Bull and his In The Groove talk, all about developing for Microsoft's Groove Music.  Peter explains that Groove Music is Microsoft's equivalent to Apple's iTunes and Google's Google Play Music, and was formerly called Xbox Music.  Peter states that Groove Music is very amenable to developers creating new applications on top of it, as it offers both an API and an SDK, the SDK effectively being a wrapper around the raw API.  Peter then shows us a quick demo of some of the nice touches in the API, including the retrieval of album artwork.  The API allows retrieving album artwork in varying sizes, and he shows us how, when requesting a small version of some album artwork that contains, for example, a face, the Groove API will use face-detection algorithms to ensure that when dynamically resizing the artwork, the face remains visible and is not cropped out of the picture.

IMG_20151024_140031 The next Grok talk was by John Stovin and was all about a unit testing framework called Fixie.  John starts by asking, "Why another unit testing framework?"  He explains that Fixie is quite different from other unit testing frameworks such as NUnit or xUnit.  The creator of Fixie, Patrick Lioi, stated that he created Fixie because he wanted as much flexibility in his unit testing framework as he had with the other frameworks he was using in his projects.  To this end, Fixie does not ship with any assertion framework, unlike NUnit and xUnit, allowing each Fixie user to choose his or her own assertion framework.  Fixie is also very simple in how you author tests.  There are no [Test]-style attributes and no using Fixie statements at the top of test classes.  Each test class is simply a standard public class and each test method is simply a public method whose name ends in "Test".  Test setup and teardown is similar to xUnit in that it simply uses the class constructor and Dispose methods to perform these functions.
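
As a rough illustration of just how plain that looks (this is my own sketch, not John's example), a Fixie test class can be as simple as the following; the hand-rolled check stands in for whichever assertion library you choose to bring along:

// A sketch of a Fixie-style test class: no attributes, no framework base
// class, just a plain public class with public test methods.
using System;

public class Calculator
{
    public int Add(int a, int b) { return a + b; }
}

public class CalculatorTests
{
    private readonly Calculator _calculator;

    public CalculatorTests()
    {
        // The constructor acts as per-test setup, much like in xUnit.
        _calculator = new Calculator();
    }

    public void AdditionTest()
    {
        var sum = _calculator.Add(2, 3);
        if (sum != 5)
        {
            // Stand-in for your assertion library of choice.
            throw new Exception("Expected 5 but got " + sum);
        }
    }
}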

IMG_20151024_140406 Interestingly, Fixie tests can inherit from a “Convention” base class which can change the behaviour of Fixie.  For example, a custom convention class can be implemented very simply to alter the behaviour of Fixie to be more like that of NUnit, with test classes decorated by a [TestFixture] attribute and test methods decorated by a [Test] attribute.  Conventions can control the discovery of tests, how tests are parameterized, how tests are executed and also how test output is displayed.

Fixie currently has lots of existing test-runners, including a command-line runner and a runner for the Visual Studio test explorer.  There’s currently a plug-in to allow ReSharper 8 to run Fixie tests, and a new plug-in/extension is currently being developed to work with ReSharper 10.  Fixie is open-source and is available on GitHub.

After John's talk, we had the final Grok Talk of the lunchtime, which was Steve Higgs's ES6 Right Here, Right Now.  Steve's talk is about how developers can best use and leverage ES6 (ECMAScript 6, aka JavaScript 2015) today.  Steve starts by stating that, contrary to some beliefs, ES6 is no longer the "next" version of JavaScript, but is actually the "current" version.  The standard has been completely ratified, but most browsers don't yet fully support it.

Steve talks about some of the nice features of ES6, many of which previously had to be implemented with third-party libraries and frameworks.  ES6 has "modules" baked right in, so there's no longer any need to use a third-party module manager.  However, if we're targeting today's browsers and therefore writing JavaScript against ES5, we can use third-party libraries to emulate these new ES6 features (for example, require.js for module management).

Steve continues by stating that ES6 will now (finally) have built-in classes.  Unfortunately, they’re not “full-featured” classes like we get in many other languages (such as C#, Java etc.) as they only support constructors and public methods, and have no support for things like private methods yet.  Steve does state that private methods can be “faked” in a bit of a nasty, hacky way, but ES6 classes definitely do not have support for private variables.  Steve states that this will come in the future, in ES7.

ES6 gets “arrow functions”, which are effectively lambda functions that we know and love from C#/LINQ, for example:

var a = [
  "Hydrogen",
  "Helium",
  "Lithium",
  "Beryl­lium"
];

// Old method to return length of each element.
var a2 = a.map(function(s){ return s.length });

// New method using the new "arrow functions".
var a3 = a.map( s => s.length );

Steve continues by stating that ES6 introduces the let and const keywords.  let gives a variable block scoping rather than JavaScript's default function scoping.  This is a welcome addition, and helps those of us who are used to working with languages such as C#, where variable scoping is always block-based.  const allows us to declare a constant.

ES6 now also has default parameters which allow us to define a default value for a function’s parameter in the event that code calling the function does not supply a value:

function doAlert(a=1) {
    alert(a);
}

// Calling doAlert without passing a value will use the
// default value supplied in the function definition.
doAlert();

Steve also mentions how ES6 now has string interpolation, also known as "template strings", so that we can finally write code such as this:

// Old way of outputting a variable in a string.
var a = 5;
var b = 10;
console.log("Fifteen is " + (a + b) + " and\nnot " + (2 * a + b) + ".");

// New ES6 way with string interpolation, or "template strings".
var a = 5;
var b = 10;
console.log(`Fifteen is ${a + b} and\nnot ${2 * a + b}.`);

One important point to note with string interpolation is that your string must be quoted using backticks (`) rather than the normal single-quote (‘) or double-quote (“) characters!  This is something that will likely catch a lot of people out when first using this new feature.

Steve rounds off his talk by stating that there are lots of other features in ES6, and it's best to simply browse through them all on one of the many sites that detail them.  Steve says that we can get started with ES6 today by using something like babeljs.io, a JavaScript compiler (or transpiler) that transpiles JavaScript code targeting ES6 into JavaScript code compatible with the ES5 that is fully supported by today's browsers.

After Steve's talk, the Grok Talks were over, and with them the lunch break was almost over too.  There were a few minutes left to head back to the communal area and grab a cup of coffee and a bottle of water to keep me going through the two afternoon sessions, the final sessions of the day.

IMG_20151024_143656 The first session of the afternoon was another change to the advertised session due to the previously mentioned cancellations.  This session was Pete Smith’s Beyond Responsive Design.  Pete’s session was aimed at design for modern web and mobile applications.  Pete starts with looking at a brief history of web development.  He says that the web started solely on the desktop and was very basic at first, but very quickly grew to become better and better.  Eventually, the Smartphone came along and all of these good looking desktop websites suddenly didn’t look so good anymore.

So then, Responsive Design came along.  This attempted to address the disconnect and inconsistencies between the designs required for the desktop and those required for mobile.  However, Responsive Design brought with it its own problems.  Our designs became awash with extensive media queries in order to determine which screen size we were rendering for, and became dependent upon homogeneous (and often large) frameworks such as Zurb's Foundation and Bootstrap.  Pete says that this is the focus of going "beyond" responsive design.  We can solve these problems by going back to basics and simplifying what we do.

So, how do we know if we've got a problem?  Well, Pete explains that there are some sites that work great on both desktop and mobile, but overall, they’re not as widespread as we would like given where we are in our web evolution.  Pete then shows some of the issues.  Firstly, we have what Pete calls the "teeny tiny" problem.  This is  where the entire desktop site is scaled and shrunk down to display on the smaller mobile screen size.  Then there's another problem that Pete calls "Indiana’s phone and the temple of zoom" which is where a desktop site, rendered on a mobile screen, can be zoomed in continuously until it becomes completely unusable.

Pete asks, "What is a page on today's modern web?"  Well, he says there's no such thing as a single-page application.  There's really no difference between SPAs and non-SPA sites that use some JavaScript to perform AJAX requests to retrieve data from the server.  Pete states that there are really no good guiding design principles for the web.  When we're writing apps for Android or iOS, there's a wealth of design principles that developers are expected to follow, and it's very easy for them to do so.  A shining example of this is Google's Material Design.  When we're designing for the web, though, not so much.

Dynamic-Data-Maksing-Ibiza So how do we improve?  Pete says we need to "design from the ground up".  We need to select user-interface patterns that work well on both the desktop and on mobile.  Pete gives examples and states that UI elements like modal pop-ups and alerts work great on the desktop, but often not so well on mobile.  An example of a UI pattern that does work very well on both platforms is the "pane" (sometimes referred to as a property sheet) that slides in from the side of the screen.  We see this used extensively on mobile due to the limited screen real estate, but not so much on the desktop, despite the pattern working well there too.  A great example of effective use of this design pattern is the new Microsoft Azure Preview Portal.  Pete states we should avoid using frameworks like Bootstrap or Foundation.  We should do it all ourselves, and should only revert to "responsive design" when there is a specific pattern that clearly works better on one medium than another and where no other pattern exists that works well on all mediums.

At this point in the talk, Pete moves on to show us some demo code for a website that he’s built to show off the very design patterns and features that he’s been discussing.  The code is freely available from Pete’s GitHub repository.  Pete shows his website first running on a desktop browser, then he shows the same website running on an iPad and then on a Smartphone.  Each time, due to clever use of design patterns that work well across screens of differing form factors, the website looks and feels very similar.  Obviously there are some differences, but overall, the site is very consistent.

Pete shows the code for the site and examines the CSS/LESS styles.  He says that absolute positioning is essential for creating these kinds of sites.  It allows us to ensure that certain page elements (e.g. the left-hand menu bar) are always displayed correctly and in their entirety.  He then shows how he's used CSS3 transforms to implement the slide in/out panels or "property sheets", simply transforming them by either +100% or -100% of their horizontal position to display to the left or right of the element's original, absolute position.  Pete notes how there's extensive use of HTML5 semantic tags, such as <nav>, <content>, <footer> etc.  Pete reminds us that there's no real behaviour attached to using these tags, but that they make things far easier to reason about than simply using <div> tags for everything.

Finally, Pete summarises and says that if there's only one word to take away from his talk, it's "Simplify".  He talks about the future and mentions that the next "big thing" to help with building sites that work well across all of the platforms we use to consume the web is Web Components.  Web Components aid encapsulation and re-usability.  They're available to use today, however, they're not yet fully supported.  In fact, they currently only have native support in the Chrome and Opera browsers, and need a third-party JavaScript library, Polymer.js, in order to work elsewhere.

IMG_20151024_155657 The final session of the day was Richard Fennell’s Monitoring and Addressing Technical Debt With SonarQube.

Richard starts his session by defining technical debt.  He says it's something that builds up very slowly and almost sneaks up on you.  It's the little "cut corners" of our codebases where we've implemented code that seems to do the job, but is sub-optimal.  Richard says that technical debt can grow to become so large that it can stop you in your tracks and prevent you from progressing with a project.

He then discusses the tools we currently have available to address technical debt, specifically within the world of Microsoft's tooling.  Firstly, we have compiler errors.  These are very easy to fix as we simply can't ship our software with compiler errors, and they provide immediate feedback to help us fix the problem.  Whilst compiler errors can't be ignored, Richard says that it's really not uncommon to come across projects that have many compiler warnings.  Compiler warnings aren't errors as such, and as they don't necessarily prevent us from shipping our code, we can often live with them for a long time.  Richard mentions the tools Visual Studio Code Analysis (previously known as FxCop) and StyleCop.  Code Analysis/FxCop works on your compiled code to determine potential problems or maintenance issues with the code, whilst StyleCop works on the raw source code, analysing it for style issues and conformance against a set of coding standards.  Richard says that both of these tools are great, but offer only a simple "snapshot in time" of the state of our source code.  What we really need is a much better "dashboard" to monitor the state of our code.

Richard asks, "So what would Microsoft do?".  He continues to explain that the "old" Microsoft would go off and create their own solution to the problem, however, the "new" Microsoft, being far more amenable to adopting existing open source solutions, has decided to adopt the de facto standard solution for analysing technical debt, a product called SonarQube by SonarSource.

Richard introduces SonarQube and states that, firstly, we must understand that it's a Java-based product.  This brings some interesting "gotchas" for .NET developers when trying to set up a SonarQube solution, as we'll see shortly.  Richard states that SonarQube's architecture is based upon a backend database that stores the results of its analysis, along with plug-in analyzers that analyze source code.  Of course, being a Java-based product, SonarQube's analyzers are written in Java too.  The analyzers examine our source code and create data that is written into the SonarQube database.  From this data, the web-based front-end part of the SonarQube product can render a nice dashboard in ways that help us to "visualise" our technical debt.  Richard points out that analyzers exist for many different languages and technologies, but he also offers a word of caution: not all analyzers are free and open source.  He states that the .NET ones currently are, but (for example) the COBOL and C++ analyzers have a cost associated with them.

Richard then talks about getting an installation of SonarQube up and running.  As it's a Java product, there's very little in the way of nice wizards during the installation process to help us.  Much of the configuration of the product is performed via manual editing of configuration files.  Because of this, Microsoft's ALM Rangers group have produced a very helpful guide to installing the product.  The system requirements for installing SonarQube are a server running either Windows or Linux with a minimum of 1GB of RAM.  The server will also need the .NET Framework 4.5.2 installed, as this is required by the MSBuild runner which is used to run the .NET analyzer.  As it's a Java product, Java is obviously required on the server too: either Oracle's JRE 7 (or higher) or OpenJDK 7 (or higher).  For the required backend database, SonarQube will, by default, install a database called H2, however this can be changed (and probably should be changed) to something more suited to .NET folks, such as Microsoft's SQL Server.  It's worth noting that the free SQL Server Express will work just fine too.  Richard points out that there are some "gotchas" around the setup of the database as well.  As a Java-based product, SonarQube uses JDBC drivers to connect to the database, and these place some restrictions on the database itself.  The database must have its collation set to Case Sensitive (CS) and Accent Sensitive (AS).  Without this, it simply won't work!

After setup of the software, Richard explains that we only get an analyzer and runner for Java source code out-of-the-box.  From here, we can download and install the analyzer and runner we'll need for analyzing C# source code.  He then shows how we need to add a special file called sonar-project.properties to the root of the project that will be analyzed.  This file contains four key values that are required in order for SonarQube to perform its analysis.  Ideally, we'd set up our installation of SonarQube on a build server, and there we'd also edit the SonarQube.Analyzers.xml file to reflect the correct database connection string to be used.
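
I didn't note down the exact four values Richard showed, but a typical minimal sonar-project.properties (shown here purely as an illustration, with a made-up project name and key) contains something along these lines:

# An illustrative sonar-project.properties - the project key/name are made up.
sonar.projectKey=org.example:myproject
sonar.projectName=MyProject
sonar.projectVersion=1.0
sonar.sources=.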

image3 Richard now moves on to show us a demo.  He uses the OWASP demo project, WebGoat.NET, for his demonstration.  This is an intentionally "broken" ASP.NET application which allows SonarQube to highlight multiple technical debt issues within the code.  Richard shows SonarQube being integrated into Visual Studio Team Foundation Server 2015 as part of its build process.  Richard further explains that SonarQube analyzers work by processing complete folders or wildcards for file names.  He shows the default SonarQube dashboard and explains how many of the issues encountered can often be found in the various "standard" libraries that we frequently include in our projects, such as jQuery etc.  As a result of this, it's best to really think about how we structure our solutions, as it's beneficial to keep third-party libraries in folders separate from our own code.  This way, we can instruct SonarQube to ignore those folders.

Richard shows us the rules that exist in SonarQube.  There are a number of built-in rules provided by SonarQube itself, but the C# analyzer plug-in will add many of its own.  The built-in SonarQube rules are called the "Sonar Way" rules and represent the expected Sonar way of writing code.  These are very Java-centric, so may only be of limited use when analyzing C# code.  The various C# rule-sets are obviously more aligned with the C# language.  Some rules are prefixed with "CA" in the rule-set list; these are the FxCop rules.  Other rules are prefixed with "S"; these are the C# language rules and use Roslyn to perform the code analysis (hence the requirement for the .NET Framework 4.5.2 to be installed).

Richard continues by showing how we can set up "quality gates" to show whether one of our builds is passing or failing in quality terms.  This is an analysis of our code by SonarQube as part of a build process.  We can set conditions against the history of the analyses that have been run to ensure, for example, that each successive build has no more than 98% of the known bugs of the previous release.  This way, we can reason that our builds are getting progressively better in quality each time.

Finally, Richard sums up by introducing a new companion product to SonarQube called SonarLint.  SonarLint is based upon the same .NET Compiler Platform, Roslyn, that powers SonarQube's C# analysis, however SonarLint is designed to run inside the Visual Studio IDE and provides near real-time feedback on the quality of our source code as we're editing it.  SonarLint is open source and available on GitHub.

IMG_20151024_170210 After Richard's talk was over, it was time for all of the conference attendees to gather in the main lecture hall for the final wrap-up presentation.  During the presentation, the various sponsors were thanked for all of their support.  The conference organisers also mentioned how there had been a number of "no-shows" at the conference (people who'd registered to attend but had not shown up on the day and hadn't cancelled their tickets, despite repeated communication requesting that people who could no longer attend do so).  The organisers told us how every no-show not only costs the conference around £15 per person but also prevents those who were on the waiting list from being able to attend, and there was quite an extensive waiting list for the conference this year.  Here's hoping that future DDD conferences have fewer no-shows.

It was mentioned that DDD North is now the biggest of all of the regional DDD events, with some 450 (approx.) attendees this year – a growth on last year's numbers – and still over 100 more people on the waiting list.  The organisers told us that they could, if it weren't for space/venue size considerations, have run the conference for around 600-700 people.  Those are quite some numbers, and they show just how popular the DDD conferences, and especially the DDD North conference, are.

2015-11-02 14_35_26-Technical whizzes set to share expertise in Sunderland - Sunderland Echo One especially nice touch was that I received a quick mention from Andy Westgarth, the main organiser of DDD North, during the final wrap-up presentation, for the use of one of my pictures in an article that had appeared in the local newspaper, the Sunderland Echo, that very day.  The picture used was one I'd taken in the same lecture hall at DDD North 2013, two years earlier.  The article is available to read online, too.

After the wrapping up came the prize draw.  As always, there were some nice prizes to be given away, both by the conference organisers themselves and by the individual sponsors, including a Nexus 9 tablet, a Surface Pro 3 and a whole host of other goodies.  As usual, I didn't win anything, but I'd had a fantastic day at yet another superb DDD North.  Many thanks to the organisers and the various sponsors for enabling such a brilliant event to happen.

IMG_20151024_175352 But…  It wasn't over just yet!  There is usually a "Geek Dinner" after the DDD conferences; however, on this occasion there was to be a food and drink reception graciously hosted by Sunderland Software City.  So as we shuffled out of the Sunderland University campus, I headed to my car to drive the short distance to Sunderland Software City.

IMG_20151024_175438 Upon arrival there was, unfortunately, no pop-up bar from Vaux Brewery as there had been two years prior.  This was a shame as I'm quite partial to a nice pint of real ale; however, the kind folks at Sunderland Software City had provided us with a number of buckets of ice-cold beers, wines and other beverages.  Of course, I was driving, so I had to stick to the soft drinks anyway!

I was one of the first people to arrive at the Sunderland Software City venue, as I'd driven the short distance from the University, whereas most other people attending the reception were walking.  I grabbed myself a can of Diet Coke and quickly got chatting to some fellow conference attendees, sharing experiences about our day, getting to know each other and finding out about the work we each do.

IMG_20151024_180512 Not too long after getting chatting, a few of the staff from the Centre were scurrying towards the door.  What we soon realised was that the “food” element of the “food & drink” reception was arriving.  We were being treated to what must be the single largest amount of pizza I’ve ever seen in one place.  76 delicious pizzas delivered from Pizza Hut!  Check out the photo of this magnificent sight!  (Oh, and those boxes of pizza are stacked two deep, too!)

So, once the pizzas had all been delivered and laid out for us on the extensive table top, we all got stuck in.  A few slices of pizza later and an additional can of Diet Coke to wash it down and it was back to mingling with the crowd for some more networking.

Before leaving, I managed to have a natter with Andy Westgarth, the main conference organiser, about the trials and tribulations of running a conference like DDD North.  Despite the fact that Andy should be living and working in the USA by the time the next DDD North conference rolls around, he assured me that the conference is in very safe hands and should continue next year.

After some more conversation, it was finally time for me to leave and head off back to my in-laws in Newcastle.  And with that another superb DDD North conference was over.  Here’s looking forward to next year!