DDD North 2015 In Review


On Saturday 24th October 2015, DDD North held its 5th annual Developer Developer Developer event.  This time the event was held in the North-East, at the University of Sunderland.

As is customary for me now, I had arrived the evening before the event and stayed with family in nearby Newcastle-Upon-Tyne.  This allowed me to get to the University of Sunderland bright and early for registration on the morning of the event.

After checking in and receiving my badge, I proceeded to the most important part of the communal reception area: the tea and coffee urns!  After grabbing a cup of coffee and waiting patiently whilst further attendees arrived, there was soon a shout that breakfast was ready.  Once again, DDD North and the University of Sunderland provided us all with a lovely breakfast baguette, with a choice of either bacon or sausage.

After enjoying my bacon baguette and washing it down with a second cup of coffee, it was soon time for the first session of the day. The first session slot was a tricky one, as all of the five tracks of sessions appealed to me, however, I could only attend one session, so decided somewhat at the last minute it would be Rik Hepworth’s The ART of Modern Azure Deployments.

The main thrust of Rik’s session is to explain Azure Resource Templates (ARTs).  Rik says he’s going to explain the What, the Why and the How of ARTs.  Rik first reminds us that every resource in Azure (from virtual networks, to storage accounts, to complete virtual machines) is handled by the Azure Resource Manager.  This manager can be driven to create resources in an ad-hoc manner using numerous fairly arcane PowerShell commands, however, for repeatability in creating entire environments of Azure resources, we need Azure Resource Templates.

Rik first explains the What of ARTs.  They’re quite simply JSON documents that conform to the required ART schema.  They can be split into multiple files: one supplies the “questions” (i.e. the template of the required resource – say, a virtual network) and the other supplies the “answers” to fill in the blanks of the question file (i.e. the parameterized IP address range of the required virtual network).  They are idempotent too, which means that the templates can be run against the Azure Resource Manager multiple times without fear of creating more resources than are required or destroying resources that already exist.

Rik proceeds with the Why of ARTs.  Well, firstly, since they’re just JSON documents and text files, they can be version controlled.  This fits in very nicely with the “DevOps” culture of “configuration as code”, managed and controlled in the same way as our application source code.  And being JSON documents, they’re much easier to write, use and maintain than large and cumbersome PowerShell scripts composed of many PowerShell commands with difficult-to-remember parameters.  Furthermore, Rik tells us that Azure Resource Templates will eventually be the only way to manage and configure complete environments of resources within Azure.

Finally, we talk about the How of ARTs.  Well, they can be composed with Visual Studio 2013/2015; the only other tooling required is the Azure SDK and PowerShell.  Rik does mention some caveats here, as the Azure Resource API – against which the ARTs run – is currently moving and changing at a very fast pace.  As a result, there are frequent updates to both the Azure SDK and the version of PowerShell needed to support the latest Azure Resource API version.  It’s important to keep this tooling up-to-date and in sync in order to have it all work correctly.

Rik goes on to talk about how monitoring of running resource templates has improved vastly.  We can now monitor the progress of a running (or previously run) template from portal.azure.com and resource.azure.com, which is the Resource Manager in the Azure portal.  This shows the complete JSON of the final template, which may have been merged together from a number of “question” and “answer” files into a single file of configuration data.  From here, we can also inspect each of the individual resources that have been created as part of running the template – virtual machines, for example.

Rik then mentions something called DSC – Desired State Configuration.  This is now an engineering requirement for all Microsoft products that will be cloud-based.  DSC effectively means that the “product” can be configured entirely declaratively – via scripts, command-line commands, parameters and so on – without ever needing to resort to a GUI.

Rik talks about how to start creating your own templates.  He says the best place to start is probably the Azure Quickstart Templates that are available from a GitHub repository.  They contain both very simple templates to ease you into getting started, and some quite complex templates which will help should you need to deploy a complete environment of numerous resources.  Rik also mentions that next year will see the release of something called the “Azure Stack”, which will make it even easier to create scripts and templates that automate the creation and management of your entire IT infrastructure, both in the cloud and on-premises.

As well as supporting basic parameterization of values within an Azure Resource Template, you can also define entire sections of JSON that define a complete resource (e.g. an entire virtual machine complete with an instance of SQL Server running on it).  This JSON document can then be referenced from within other ART files, allowing individual resources to be scripted once and reused many times.  As part of this, Azure resources support many different types of extensions for extending state configuration into other products.  For example, there is an extension that allows an Azure VM to be created with an Octopus Deploy tentacle pre-installed, and another that allows a Chef client to be pre-installed on the VM.

Rik shows us a sample layout of a basic Azure Resource Template project within Visual Studio.  It consists of 3 folders: Scripts, Templates and Tools.  There's a blank template in the Templates folder and this defines the basic "shape" of the template document.  To take a simple template as an example, a Windows VM needs a storage account (which can be an existing one, or a newly created one) and a virtual network before the VM itself can be created.

We can use the GUI tooling within Visual Studio to create the basic JSON document with the correct properties, but can then manually tweak the values required in order to script our resource creation.  This is often the best way to get started.  Once we’ve used the GUI tooling to generate the basics of the template, we can then remove duplication by "collapsing" lots of the properties and extracting them into separate files to be included within the main template script.  (e.g. the deploy location is repeated for each and every VM.  If we’re deploying multiple VMs, we can remove this duplication by extracting it into a separate file that is referenced by each VM).

One thing to remember when running and deploying ARTs, Rik warns us, is that the default lifetime of an Azure Access Token is only 1 hour.  Azure Access Tokens are required by the template in order to prove authorisation for creating Azure resources.  However, when an ART is deploying a complete environment consisting of numerous resources, this can be a time-consuming process – often taking a few hours.  For this reason, it’s best to extend the lifetime of the Azure Access Tokens, at least during development of the templates, otherwise the tokens will expire during the running of the template, causing the resource creation to fail.

Rik wraps up with a summary, and opens the floor to any questions.  One question that is posed is whether existing Azure resources can be “reverse-engineered” into ART scripts.  Rik states that so long as the existing resources are v2 resources (that have been created with Azure Resource Manager) then you can turn these resources into templates, BUT if the existing resources are v1 (also known as Classic resources, created using the older Azure Service Management) they can't be reverse-engineered into templates.

After a short coffee break back in the main communal area, it’s time for the second session of the day.  For this session, I decided to go with Gary Short’s Deep Dive into Deep Learning.

Gary’s session was all about the field of data science and of things like neural networks and deep learning.  Gary starts by asking who knows what Neural Networks are, what Deep Learning is, and what the difference is between them.  Not very many people know the difference here, but Gary assures us that by the end of his talk, we all will.

Gary tells us that his talk’s agenda is to look at Neural Networks, the first real mechanism by which “deep learning” was implemented, and at how today’s “deep learning” has improved upon those early Neural Networks.  We first learn that the phrase “deep learning” is itself far too broad to really mean anything!

So, what is a Neural Network?  It’s a “thing” in data science.  It’s a statistical learning model and can be used to estimate functions that can depend on a large number of inputs.  Well, that’s a rather dry explanation so Gary gives us an example: the correlation between temperature and ice cream sales over the summer months.  We could use a Neural Network to predict the ice cream sales based upon the temperature variance.  This is, of course, a very simplistic example and we can simply guess ourselves that as the temperature rises, ice cream sales would predictably rise too.  It’s a simplistic example as there’s exactly one input and exactly one output, so it’s very easy for us to reason about the outcome without really relying upon a Neural Network.  However, in a more realistic example using “big data”, we’d likely have hundreds if not many thousands of inputs for which we wish to find a meaningful output.

Gary proceeds to explain that a Neural Network is really a weighted directed graph.  This is a graph of nodes and the connections between those nodes.  The connections are in a specific direction, from one node to another, and that same node can have a connection back to the originating node.  Each connection has a “weight”, or a probability.  In the diagram Gary shows us, node A has a connection to node E and also a separate connection to node F.  The “weight” of the connection to node F is 0.9 whilst the weight of the connection to node E is 0.1.  This means there is a 10% chance that a message or data coming from node A will be directed to node E and a 90% chance that a message coming from node A will be directed to node F.  The combination of nodes and the connections between them gives us the Neural Network.
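
To make the weighted directed graph idea a little more concrete, here is a minimal, illustrative C# sketch of my own (not code from Gary's talk): a node whose outgoing connections carry weights that are treated as probabilities, mirroring the A/E/F example above.

using System;
using System.Collections.Generic;
using System.Linq;

public class Node
{
    public string Name { get; }

    // Outgoing connections: target node -> weight (treated here as a probability).
    public Dictionary<Node, double> Connections { get; } = new Dictionary<Node, double>();

    public Node(string name) { Name = name; }

    public void ConnectTo(Node target, double weight) => Connections[target] = weight;

    // Pick the next node at random, respecting the connection weights.
    public Node NextNode(Random random)
    {
        var roll = random.NextDouble() * Connections.Values.Sum();
        foreach (var connection in Connections)
        {
            roll -= connection.Value;
            if (roll <= 0) return connection.Key;
        }
        return Connections.Keys.Last();
    }
}

public static class WeightedGraphDemo
{
    public static void Main()
    {
        var a = new Node("A");
        var e = new Node("E");
        var f = new Node("F");

        a.ConnectTo(e, 0.1);   // 10% chance a message from A goes to E
        a.ConnectTo(f, 0.9);   // 90% chance a message from A goes to F

        var random = new Random();
        Console.WriteLine($"A routed its message to {a.NextNode(random).Name}");
    }
}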

Gary tells us how Neural Networks are not new; they were invented in 1943 by two mathematicians, Warren McCulloch and Walter Pitts.  Back then, they weren’t referred to as Neural Networks, but were known as “Threshold Logic”.  Later on, in the late 1940s, Donald Hebb created a hypothesis based on "neural plasticity", which is the ability of a Neural Network to “heal itself” around “injuries” or bad connections between nodes.  This is now known as Hebbian Learning.  In 1958, mathematicians Farley and Wesley A. Clark used a calculator to simulate a Hebbian Machine at MIT (Massachusetts Institute of Technology).

So, just how did today’s “Deep Learning” improve upon Neural Networks?  Well, Neural Networks originally had two key limitations.  Firstly, they couldn't process exclusive or (XOR) logic in a single-layer network, and secondly, computers (or rather calculators) simply weren't powerful enough to perform the extensive processing required.  Eventually, in 1975, a mathematician named Werbos discovered something called “back propagation”, which is the backwards propagation of error states, allowing originating nodes to learn of errors further down a processing chain and perform corrective measures (self-learning) to mitigate further errors.  Back propagation helped to solve the XOR problem.  It was only through the passage of a large amount of time, though, that yesterday’s calculators became today’s computers – which got ever more powerful with every passing year – and allowed Neural Networks to come into their own.  So, although people in academia were teaching the concepts of Neural Networks, they weren’t really being used; alternative learning mechanisms like “Support Vector Machines” (SVMs), which could work with the level of computing power available at the time, were preferred instead.  With the advent of more powerful computers, however, Neural Networks really started to take off after the year 2000.

So, as Neural Networks started to get used, another limitation was found with them: it took a long time to “train” the model with the input data.  Gary tells us of a Neural Network in the USA, used by the USPS (United States Postal Service), that was designed to help recognise hand-written zip codes.  Whilst this model was effective at its job, it took 3 full days to train the model with input data!  This had to be repeated continually as new “styles” of hand-writing needed to be recognised by the Neural Network.

Gary continues by telling us that by the year 2006 the phrase “deep learning” had started to take off.  This arose out of the work of two mathematicians, Geoffrey Hinton and Ruslan Salakhutdinov, who showed that many-layered, feed-forward Neural Networks could be trained far more effectively, thus reducing the time required to train the network.  So “deep learning” is really just modern-day Neural Networks, vastly improved over the original inventions of the 1940s.

Gary talks about generative models and stochastic models.  Generative models will “generate” things in a random way, whilst stochastic models will generate things in an unpredictable way.  Very often this is the very same thing.  It’s this random unpredictability that exists in the problem of voice recognition.  Voice recognition has been a largely “solved” problem since around 2010, and it’s given rise to Apple’s Siri, Google’s Google Now and, most recently and apparently most advanced, Microsoft’s Cortana.

At this point, Gary shows us a demo of some code that will categorise Iris plants based upon a diverse dataset of a number of different criteria.  The demo is implemented using the F# language, however, Gary states that the "go to" language for Data Science is R.  Gary says that whilst it’s powerful, it’s not a very nice language, and this is primarily put down to the fact that whilst languages like C, C#, F# etc. are designed by computer scientists, R is designed by mathematicians.  Gary’s demo can use F# as it has a “type provider” mechanism which allows it to “wrap”, and therefore largely abstract away, the R language.  The R type provider can be downloaded from NuGet, and you’ll also need the FsLab NuGet package.

Gary explains that the categorisation of Irises is the canonical example for data science categorisation.  He shows the raw data and how the untrained system initially thinks that there are three classifications of irises when we know there's only really two.  Gary then explains that, in order to train our Neural Network to better understand our data, we need to start by "predicting the past".  This is exactly what it says: for example, by looking at the past results of (say) football matches, we can use that data to help predict future results.
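
As a rough, deliberately non-neural-network illustration of “predicting the past” (my own sketch, not Gary’s F# demo, and with made-up numbers), the idea is simply to hold back some of the historical data, fit something simple to the rest, and then check the predictions against outcomes we already know:

using System;
using System.Linq;

public static class PredictingThePast
{
    // Hypothetical historical observations: (temperature, ice cream sales).
    static readonly (double Temp, double Sales)[] History =
    {
        (15, 120), (18, 150), (21, 210), (24, 260), (27, 330), (30, 400)
    };

    public static void Main()
    {
        // "Predict the past": train on the older data, test on the most recent points.
        var training = History.Take(4).ToArray();
        var test = History.Skip(4).ToArray();

        // A deliberately simple model: average sales per degree, learned from the training data.
        var salesPerDegree = training.Sum(h => h.Sales) / training.Sum(h => h.Temp);

        foreach (var (temp, actual) in test)
        {
            var predicted = temp * salesPerDegree;
            Console.WriteLine($"Temp {temp}: predicted {predicted:F0}, actual {actual}");
        }
    }
}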

Gary continues and shows how, after "predicting the past" and using the resulting data to train the Neural Network, we can once again examine the original data.  The graph this time correctly shows only two different categorisations of irises.  Looking closer at the results, we can see that of a data set that contains numerous metrics for 45 different iris plants, our Neural Network was able to correctly classify 43 out of the 45 irises, with only two failures.  Looking into the specific failures, we see that they were unable to be classified due to their data being very close between the two different classifications.  Gary says we could probably “fine tune” our Neural Network by looking further into the data and could well eradicate the two classification failures.

After Gary’s session, it’s time for another tea and coffee break in the communal area, after which it’s time for the 3rd and final session before lunch.  There had been a couple of last-minute session cancellations due to speaker ill health, and one of those was unfortunately the one I had wanted to attend in this particular time slot, Stephen Turner’s “Be Reactive, Think Reactive”.  This session was replaced with Robert Hogg delivering a presentation on Enterprise IoT (Internet of Things), however, the session I decided to attend was Peter Shaw’s Microservice Architecture, What It Is, Why It Matters And How To Implement It In .NET.

Peter starts his presentation with a look at the talk’s agenda.   He’s going to define what Microservices are and their benefits and drawbacks.  He’ll explain how, within the .NET world, OWIN and Katana help us with building Microservices, and finally he is going to show a demo of some code that uses OWIN running on top of IIS7 to implement a Microservice.

Firstly, Peter tells us that Microservices are not a software design pattern, they’re an architectural pattern.  They represent a 100-foot view of your entire application, not the 10-foot view, and moreover, Microservices provide a set of guidelines for deployment of your project.

Peter then talks about monolithic codebases and how we scale them by duplicating entire systems.  This can be wasteful when you only need to scale up one particular module, as you’ll end up duplicating far more than you need.  Microservices are about being able to scale only what you need, but you need to find the right balance of how much to break down the application into its constituent modules or discrete chunks of functionality.  Break it down too much and you'll get nano-services – a common anti-pattern – and will then have far too much complexity in managing too many small parts.  Break it down too little, and you’re not left with Microservices.  You’ve still got a largely monolithic application, albeit a slightly smaller one.

Next, Peter talks about how Microservices communicate with each other.  He states that there are two schools of thought on approaching the communication problem.  One school of thought is to use an ESB (Enterprise Service Bus).  The benefit of using an ESB is that it’s a robust communications channel for all of the Microservices; a drawback, however, is that it’s also a single point of failure.  The other school of thought is to use simple RESTful/HTTP communications directly between the various Microservices.  This removes the single point of failure, but adds the overhead of each service needing to be able to “discover” other services (their availability and location, for example) on the network.  This usually involves an additional tool, something like Consul, for example.
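
As a rough sketch of the RESTful/HTTP approach (my own illustration, with hypothetical service names and URLs rather than anything from Peter’s demo), one microservice might call another like this, with the address ideally coming from a service discovery tool such as Consul rather than being hard-coded:

using System;
using System.Net.Http;
using System.Threading.Tasks;

public static class OrdersService
{
    public static async Task Main()
    {
        // In a real system this address would come from service discovery
        // (e.g. Consul) rather than being hard-coded here.
        var usersServiceBaseAddress = new Uri("http://users-service.local:5000/");

        using (var client = new HttpClient { BaseAddress = usersServiceBaseAddress })
        {
            // Ask the (hypothetical) users microservice for a single user's details.
            var response = await client.GetAsync("api/users/42");
            response.EnsureSuccessStatusCode();

            var json = await response.Content.ReadAsStringAsync();
            Console.WriteLine(json);
        }
    }
}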

Some of the benefits of adopting a Microservices architecture are that software development teams can be formed around each individual service.  These would be full teams with developers, project managers etc., rather than having specific technical silos within one large team.  Other benefits are that applications become far more flexible and modular and can be composed and changed easily by simply swapping out one Microservice for another.

Some of the drawbacks of Microservices are that they have a potentially higher maintenance cost as your application will often be deployed across different and more expansive platforms/servers.  Other drawbacks are the potential for “data islands” to form.  This is where your application’s data becomes disjointed and more distributed due to the nature of the architecture.  Furthermore, Microservices, if they are to be successful, will require extensive monitoring.  Monitoring of every available metric of the applications and the communications between them is essential to enable effective support of the application as a whole.

After this, Peter moves on to show us some demo code, built using OWIN and NancyFX.  OWIN is the Open Web Interface for .NET and is an open framework for decoupling .NET web applications from the underlying web server that powers the application.  Peter tells us that Microsoft’s own implementation of the OWIN standard is called Katana.  NancyFX is a lightweight web framework for .NET, and is built on top of the OWIN standard, thus decoupling the Nancy code from the underlying web server (i.e. there’s no direct references to HttpContext or other such objects).
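
By way of illustration (a minimal sketch of my own, not Peter’s actual demo code), wiring Nancy up behind OWIN typically looks something like the following, assuming the Nancy.Owin NuGet package and Katana’s Microsoft.Owin.Hosting for self-hosting; the same Startup class could equally sit behind IIS via the Microsoft.Owin.Host.SystemWeb package.

using System;
using Microsoft.Owin.Hosting;   // Katana self-hosting
using Owin;                     // IAppBuilder
using Nancy.Owin;               // Nancy's OWIN support

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // Plug Nancy into the OWIN pipeline; Nancy never talks to the
        // web server directly, only to the OWIN abstraction.
        app.UseNancy();
    }
}

public static class Program
{
    public static void Main()
    {
        using (WebApp.Start<Startup>("http://localhost:8080"))
        {
            Console.WriteLine("Nancy running on http://localhost:8080 - press Enter to quit.");
            Console.ReadLine();
        }
    }
}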

Peter shows us how simple some of Nancy’s code can be:

// Simplified fragment - assumes this method lives inside a NancyModule,
// where the Response formatter is available.
public dynamic Something()
{
    var result = GetSomeData();
    return result == null ? 404 : Response.AsJson(result);
}

The last line of the code is most interesting.   Since the method returns a dynamic type, returning an integer that has the same value as a HTTP Status Code will be inferred by the Nancy framework to actually return that status code from the method!
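
In context, a handler like that would typically sit inside a Nancy module route; a hedged sketch (assuming Nancy 1.x route syntax and a hypothetical GetSomeData helper) might look like this:

using Nancy;

public class SomethingModule : NancyModule
{
    public SomethingModule()
    {
        Get["/something"] = _ =>
        {
            var result = GetSomeData();

            // A bare 404 becomes an HTTP 404 response; otherwise the data
            // is serialised to JSON via the Response formatter.
            return result == null ? (dynamic)404 : Response.AsJson(result);
        };
    }

    private object GetSomeData()
    {
        // Hypothetical data access - stands in for whatever the real
        // service would query.
        return new { Message = "Hello from a tiny Nancy service" };
    }
}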

Peter shows us some more code, most of which is very simple and tells us that the complete demo example is available to download from GitHub.

After Peter’s talk wrapped up, it was time for lunch.  Lunch at the DDD events is usually a “brown bag” affair with a sandwich, crisps, some fruit and/or chocolate.  The catering at DDD North, and especially at the University of Sunderland, is always excellent and this year was no exception.   There was a large selection of various combinations of crisp flavours, chocolate bars and fruit along with a large selection of very nice sandwiches, including some of the more “basic” sandwich fillings for fusspots like me!  I don’t like mayonnaise, so pre-packed sandwiches are usually a tricky proposition.  This year, though, they had “plain” cheese and ham sandwiches with no additional condiments, which was excellent for me.

The excellent food was accompanied by a drink. I opted for water.  After collecting my lunch, I went off to find somewhere to sit and eat as well as somewhere that would be fairly close to a power point as I needed to charge my laptop.

I duly found a spot that allowed me to eat my lunch, charge my laptop and look out of the window onto the River Wear on what was a very nice day outside in sunny Sunderland!

After fairly quickly eating my lunch, it was time for some lunchtime Grok Talks.  These are the 15-minute, usually fairly informal talks that often take place over the lunch hour at many of these types of conferences, and especially at DDD conferences.  During the last few DDDs that I’d attended, I’d missed most of the Grok Talks for various reasons, but today, having already consumed my delicious lunch, I decided that I’d try to take them in.

By the time I’d reached the auditorium for the Grok Talks, I’d missed the first few minutes of the talk by Jeff Johnson all about Microsoft Azure and the role of Cloud Solution Architect at Microsoft.

Jeff first describes what Azure is, and explains that it’s Microsoft’s cloud platform offering numerous services and resources to individuals and companies of all sizes to be able to host their entire IT infrastructure – should they so choose – in the cloud. 

Next, Jeff shows us some impressive statistics on how Azure has grown in only a few short years.  He says that the biggest problem that Microsoft faces with Azure right now is that they simply can’t scale their infrastructure quickly enough to keep up with demand.  And it’s a demand that is continuing to grow at a very fast rate.  He says that Microsoft’s budget for expanding and growing Azure is around 5-6 billion dollars per annum, and that Azure has a very large number of users even today.

Jeff proceeds by talking about the role of Cloud Solutions Architect within Microsoft.  He explains that the role involves working very closely with customers, or, more accurately, potential customers, to help find projects within the customers’ inventory that can be migrated to the cloud for either increased scalability, general improvement of the application, or to make the application more cost effective.  Customers are not charged for the services of a Cloud Solutions Architect, and the Cloud Solutions Architects themselves seek out and identify potential customers to see if they can be brought onboard with Azure.

Finally, Jeff talks about life at Microsoft.  He states how Microsoft in the UK has a number of “hubs”, one each in Edinburgh, Manchester and London, but that Microsoft UK employees can live anywhere.  They’ll use the “hub” only occasionally, and will often work remotely, either from home or from a customer’s site.

After Jeff’s talk, we had Peter Bull and his In The Groove talk, all about developing for Microsoft’s Groove Music.  Peter explains that Groove Music is Microsoft’s equivalent to Apple’s iTunes and Google’s Google Play Music and was formerly called Xbox Music.  Peter states that Groove Music is very amenable to developers creating new applications on top of it, as it offers both an API and an SDK, the SDK effectively being a wrapper around the raw API.  Peter then shows us a quick demo of some of the nice touches to the API, which include the retrieval of album artwork.  The API allows retrieving album artwork of varying sizes and he shows us how, when requesting a small version of some album artwork that, for example, contains a face, the Groove API will use face detection algorithms to ensure that when dynamically resizing the artwork, the face remains visible and is not cropped out of the picture.

The next Grok talk was by John Stovin and was all about a unit testing framework called Fixie.  John starts by asking, “Why another unit testing framework?”  He explains that Fixie is quite different from other unit testing frameworks such as NUnit or xUnit.  The creator of Fixie, Patrick Lioi, stated that he created Fixie as he wanted as much flexibility in his unit testing framework as he had with other frameworks he was using in his projects.  To this end, Fixie does not ship with any assertion framework, unlike NUnit and xUnit, allowing each Fixie user to choose his or her own assertion framework.  Fixie is also very simple in how you author tests.  There are no [Test] style attributes and no using Fixie statements at the top of test classes.  Each test class is simply a standard public class and each test method is simply a public method whose name ends in “Test”.  Test setup and teardown is similar to xUnit in that it simply uses the class constructor and Dispose methods to perform these functions.
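
To give a flavour of what John describes (a hedged sketch based on his description rather than code from the talk), a Fixie test class needs no attributes and no framework references at all: setup goes in the constructor, teardown in Dispose, and since Fixie ships no assertion library you assert however you like.

using System;

// No [TestFixture] or [Test] attributes, and no 'using Fixie;' - by
// convention this is just a plain public class with public test methods.
public class CalculatorTests : IDisposable
{
    private readonly Calculator calculator;

    public CalculatorTests()
    {
        // Per-test setup, xUnit-style: the constructor runs before each test.
        calculator = new Calculator();
    }

    public void AdditionTest()
    {
        // Fixie ships no assertion framework, so use whichever you prefer
        // (Should, Shouldly, FluentAssertions) - or a plain check like this.
        if (calculator.Add(2, 3) != 5)
            throw new Exception("Expected 2 + 3 to equal 5");
    }

    public void Dispose()
    {
        // Per-test teardown would go here.
    }
}

public class Calculator
{
    public int Add(int a, int b) => a + b;
}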

Interestingly, Fixie tests can inherit from a “Convention” base class which can change the behaviour of Fixie.  For example, a custom convention class can be implemented very simply to alter the behaviour of Fixie to be more like that of NUnit, with test classes decorated by a [TestFixture] attribute and test methods decorated by a [Test] attribute.  Conventions can control the discovery of tests, how tests are parameterized, how tests are executed and also how test output is displayed.

Fixie has lots of existing test-runners, including a command-line runner and a runner for the Visual Studio test explorer.  There’s currently a plug-in to allow ReSharper 8 to run Fixie tests, and a new plug-in/extension is being developed to work with ReSharper 10.  Fixie is open-source and is available on GitHub.

After John’s talk, we had the final Grok Talk of the lunch time, which was Steve Higgs’s ES6 Right Here, Right Now.  Steve’s talk is about how developers can best use and leverage ES6 (ECMAScript 6, aka JavaScript 2015) today.  Steve starts by stating that, contrary to some beliefs, ES6 is no longer the “next” version of JavaScript, but is actually the “current” version.  The standard has been completely ratified, but most browsers don’t yet fully support it.

Steve talks about some of the nice features of ES6, many of which previously had to be implemented with 3rd-party libraries and frameworks.  ES6 has “modules” baked right in, so there’s no longer any need to use a 3rd-party module manager.  However, if we’re targeting today's browsers and writing JavaScript targeting ES5, we can use 3rd-party libraries to emulate these new ES6 features (for example, require.js for module management).

Steve continues by stating that ES6 will now (finally) have built-in classes.  Unfortunately, they’re not “full-featured” classes like we get in many other languages (such as C#, Java etc.) as they only support constructors and public methods, and have no support for things like private methods yet.  Steve does state that private methods can be “faked” in a bit of a nasty, hacky way, but ES6 classes definitely do not have support for private variables.  Steve states that this will come in the future, in ES7.

ES6 gets “arrow functions”, which are effectively lambda functions that we know and love from C#/LINQ, for example:

var a = [
  "Hydrogen",
  "Helium",
  "Lithium",
  "Beryl­lium"
];

// Old method to return length of each element.
var a2 = a.map(function(s){ return s.length });

// New method using the new "arrow functions".
var a3 = a.map( s => s.length );

Steve continues by stating that ES6 introduces the let and const keywords.  let gives a variable block scoping rather than JavaScript’s default function scoping.  This is a welcome addition, and helps those of us who are used to working with languages such as C# where our variable scoping is always block scoped.  const allows us to declare a constant in JavaScript.

ES6 now also has default parameters which allow us to define a default value for a function’s parameter in the event that code calling the function does not supply a value:

function doAlert(a=1) {
    alert(a);
}

// Calling doAlert without passing a value will use the
// default value supplied in the function definition.
doAlert();

Steve also mentions how ES6 now has string interpolation, also known as “template strings”, so that we can finally write code such as this:

// Old way of outputting a variable in a string.
var a = 5;
var b = 10;
console.log("Fifteen is " + (a + b) + " and\nnot " + (2 * a + b) + ".");

// New ES6 way with string interpolation, or "template strings".
var a = 5;
var b = 10;
console.log(`Fifteen is ${a + b} and\nnot ${2 * a + b}.`);

One important point to note with string interpolation is that your string must be quoted using backticks (`) rather than the normal single-quote (‘) or double-quote (“) characters!  This is something that will likely catch a lot of people out when first using this new feature.

Steve rounds off his talk by stating that there’s lots of other features in ES6, and it’s best to simply browse through them all on one of the many sites that detail them.  Steve says that we can get started with ES6 today by using something like babeljs.io, which is a JavaScript compiler (or transpiler) and allows you to transpile JavaScript code that targets ES6 into JavaScript code that is compatible with the ES5 that is fully supported by today’s browsers.

After Steve’s talk, the Grok Talks were over, and with it the lunch break was almost over too.  There were a few minutes left to head back to the communal area and grab a cup of coffee and a bottle of water to keep me going through the two afternoon sessions – the final two sessions of the day.

The first session of the afternoon was another change to the advertised session, due to the previously mentioned cancellations.  This session was Pete Smith’s Beyond Responsive Design.  Pete’s session was aimed at design for modern web and mobile applications.  Pete starts with a brief look at the history of web development.  He says that the web started solely on the desktop and was very basic at first, but very quickly grew to become better and better.  Eventually, the Smartphone came along and all of these good-looking desktop websites suddenly didn’t look so good anymore.

So then, Responsive Design came along.  This attempted to address the disconnect and inconsistencies between the designs required for the desktop and the designs required for mobile.  However, Responsive Design brought with it its own problems.  Our designs became awash with extensive media queries in order to determine which screen size we were rendering for, as well as becoming dependent upon homogenous (and often large) frameworks such as Zurb’s Foundation and Bootstrap.  Pete says that this is the focus of going “beyond” responsive design.  We can solve these problems by going back to basics and simplifying what we do.

So, how do we know if we've got a problem?  Well, Pete explains that there are some sites that work great on both desktop and mobile, but overall, they’re not as widespread as we would like given where we are in our web evolution.  Pete then shows some of the issues.  Firstly, we have what Pete calls the "teeny tiny" problem.  This is  where the entire desktop site is scaled and shrunk down to display on the smaller mobile screen size.  Then there's another problem that Pete calls "Indiana’s phone and the temple of zoom" which is where a desktop site, rendered on a mobile screen, can be zoomed in continuously until it becomes completely unusable.

Pete asks “what is a page on today’s modern web?”  Well, he says there’s really no such thing as a single-page application: there’s no real difference between SPAs and non-SPA sites that use some JavaScript to perform AJAX requests to retrieve data from the server.  Pete states that there are really no good guiding design principles for the web.  When we’re writing apps for Android or iOS, there’s a wealth of design principles that developers are expected to follow, and it’s very easy for them to do so.  A shining example of this is Google’s Material Design.  When we’re designing for the web, though, not so much.

So how do we improve?  Pete says we need to “design from the ground up”.  We need to select user-interface patterns that work well on both the desktop and on mobile.  Pete gives examples and states that UI elements like modal pop-ups and alerts work great on the desktop, but often not so well on mobile.  An example of a UI pattern that does work very well on both platforms is the “pane” (sometimes referred to as a property sheet) that slides in from the side of the screen.  We see this used extensively on mobile due to the limited screen real estate, but not so much on the desktop, despite the pattern working well there also.  A great example of effective use of this design pattern is the new Microsoft Azure Preview Portal.  Pete states we should avoid using frameworks like Bootstrap or Foundation.  We should do it all ourselves and we should only revert to “responsive design” when there is a specific pattern that clearly works better on one medium than another and where no other pattern exists that works well on all mediums.

At this point in the talk, Pete moves on to show us some demo code for a website that he’s built to show off the very design patterns and features that he’s been discussing.  The code is freely available from Pete’s GitHub repository.  Pete shows his website first running on a desktop browser, then he shows the same website running on an iPad and then on a Smartphone.  Each time, due to clever use of design patterns that work well across screens of differing form factors, the website looks and feels very similar.  Obviously there are some differences, but overall, the site is very consistent.

Pete shows the code for the site and examines the CSS/LESS styles.  He says that absolute positioning is essential for creating these kinds of sites.  It allows us to ensure that certain page elements (e.g. the left-hand menu bar) are always displayed correctly and in their entirety.  He then shows how he's used CSS3 transforms to implement the slide in/out panels or “property sheets”, simply transforming them with either +100% or -100% of their horizontal positioning to display to the left or right of the element’s original, absolute position.  Pete notes how there’s extensive use of HTML5 semantic tags, such as <nav> <content> <footer> etc.  Pete reminds us that there’s no real behaviour attached to using these tags but that they make things far easier to reason about than simply using <div> tags for everything.

Finally, Pete summarises and says that if there’s only one word to take away from his talk it’s “Simplify”.  He talks about the future and mentions that the next “big thing” to help with building sites that work well across all of the platforms that we use to consume the web is Web Components.  Web Components aid encapsulation and re-usability.  They’re available to use today, however, they’re not yet fully supported.  In fact, they are only currently supported in Chrome and Opera browsers and need a third-party JavaScript library, Polymer.js, in order to work.

The final session of the day was Richard Fennell’s Monitoring and Addressing Technical Debt With SonarQube.

Richard starts his session by defining technical debt.  He says it’s something that builds up very slowly and almost sneaks up on you.  It’s the little “cut corners” of our codebases where we’ve implemented code that seems to do the job, but is sub-optimal.  Richard says that technical debt can grow to become so large that it can stop you in your tracks and prevent you from progressing with a project.

He then discusses the available tools that we currently have to address technical debt, but specifically within the world of Microsoft’s tooling.  Well, firstly we have compiler errors.  These are very easy to fix as we simply can’t ship our software with compiler errors, and they’ll provide immediate feedback to help us fix the problem.  Whilst compiler errors can’t be ignored, Richard says that it’s really not uncommon to come across projects that have many compiler warnings.  Compiler warnings aren’t errors as such, and as they don’t necessarily prevent us from shipping our code, we can often live with them for a long time.  Richard mentions the tools Visual Studio Code Analysis (previously known as FXCop) and StyleCop.  Code Analysis/FxCop works on your compiled code to determine potential problems or maintenance issues with the code, whilst StyleCop works on the raw source code, analysing it for style issues and conformance against a set of coding standards.  Richard says that both of these tools are great, but offer only a simple “snapshot in time” of the state of our source code.  What we really need is a much better “dashboard” to monitor the state of our code.

Richard asks, “So what would Microsoft do?”.  He continues to explain that the “old” Microsoft would go off and create their own solution to the problem, however, the “new” Microsoft, being far more amenable to adopting already-existing open source solutions, has decided to adopt the existing de-facto standard solution for analysing technical debt, a product called SonarQube by SonarSource.

Richard introduces SonarQube and states that, firstly, we must understand that it’s a Java-based product.  This brings some interesting “gotchas” for .NET developers when trying to set up a SonarQube solution, as we’ll see shortly.  Richard states that SonarQube’s architecture is based upon it having a backend database to store the results of its analysis, along with plug-in analyzers that analyze source code.  Of course, being a Java-based product, SonarQube’s analyzers are written in Java too.  The analyzers examine our source code and create data that is written into the SonarQube database.  From this data, the web-based front-end part of the SonarQube product can render a nice dashboard in ways that help us to "visualise" our technical debt.  Richard points out that analyzers exist for many different languages and technologies, but he also offers a word of caution: not all analyzers are free and open source.  He states that the .NET ones currently are, but (for example) the COBOL & C++ analyzers have a cost associated with them.

Richard then talks about getting an installation of SonarQube up and running.  As it’s a Java product, there’s very little in the way of nice wizards during the installation process to help us; lots of the configuration of the product is performed via manual editing of configuration files.  Due to this, Microsoft’s ALM Rangers group have produced a very helpful guide to help in installing the product.  The system requirements for installing SonarQube are a server running either Windows or Linux with a minimum of 1GB of RAM.  The server will need to have .NET Framework 4.5.2 installed also, as this is required by the MSBuild runner which is used to run the .NET analyzer.  As it’s a Java product, obviously, Java is required to be installed on the server – either Oracle’s JRE 7 (or higher) or OpenJDK 7 (or higher).  For the required backend database, SonarQube will, by default, install a database called H2, however this can be changed (and probably should be changed) to something more suited to .NET folks, such as Microsoft’s SQL Server.  It’s worth noting that the free SQL Server Express will work just fine also.  Richard points out that there are some “gotchas” around the setup of the database, too.  As a Java-based product, SonarQube will be using JDBC drivers to connect to the database, and these place some restrictions on the database itself.  The database must have its collation set to Case Sensitive (CS) and Accent Sensitive (AS).  Without this, it simply won’t work!

After setup of the software, Richard explains that we’ll only get an analyzer and runner for Java source code out-of-the-box.  From here we can download and install the analyzer and runner we’ll need for analyzing C# source code.  He then shows how we need to add a special file called sonar-project.properties to the root of the project that will be analyzed.  This file contains four key values that are required in order for SonarQube to perform its analysis.  Ideally, we’d set up our installation of SonarQube on a build server, and there we’d also edit the SonarQube.Analyzers.xml file to reflect the correct database connection string to be used.

Richard now moves on to showing us a demo.  He uses the OWASP demo project, WebGoat.NET, for his demonstration.  This is an intentionally “broken” ASP.NET application which will allow SonarQube to highlight multiple technical debt issues with the code.  Richard shows SonarQube being integrated into Visual Studio Team Foundation Server 2015 as part of its build process.  Richard further explains that SonarQube analyzers are based upon processing complete folders or wildcards for file names.  He shows the default SonarQube dashboard and explains how most of the errors encountered can often be found in the various “standard” libraries that we frequently include in our projects, such as jQuery etc.  As a result of this, it’s best to really think about how we structure our solutions, as it’s beneficial to keep third-party libraries in folders separate from our own code.  This way we can instruct SonarQube to ignore those folders.

Richard shows us the rules that exist in SonarQube.  There are a number of built-in rules provided by SonarQube itself, but the C# analyzer plug-in will add many of its own.  These built-in SonarQube rules are called the “Sonar Way” rules and represent the expected Sonar way of writing code.  These are very Java-centric so may only be of limited use when analyzing C# code.  The various C# rule-sets are obviously more aligned with the C# language.  Some rules are prefixed with “CA” in the rule-set list and these are the FxCop rules, whilst other rules are prefixed with “S”.  These are the C# language rules and use Roslyn to perform the code analysis (hence the requirement for the .NET Framework 4.5.2 to be installed).

Richard continues by showing how we can set up “quality gates” to show whether one of our builds is passing or failing in quality.  This is an analysis of our code by SonarQube as part of a build process.  We can set conditions on the history of the analyses that have been run to ensure that, for example, each successive build should have no more than 98% of the known bugs of the previous release.  This way, we can reason that our builds are getting progressively better in quality each time.

Finally, Richard sums up by introducing a new companion product to SonarQube called SonarLint.  SonarLint is based upon the same .NET compiler platform, Roslyn, that provides SonarQube’s C# analysis, however SonarLint is designed to be run inside the Visual Studio IDE and provides near real-time, instant feedback on the quality of our source code as we’re editing it.  SonarLint is open source and available on GitHub.

After Richard’s talk was over, it was time for all of the conference attendees to gather in the main lecture hall for the final wrap-up presentation.  During the presentation, the various sponsors were thanked for all of their support.  The conference organisers also mentioned how there had been a number of “no-shows” at the conference (people who’d registered to attend but had not shown up on the day and hadn’t cancelled their tickets, despite repeated communication requesting people who can no longer attend to do so).  The organisers told us how every no-show not only costs the conference around £15 per person but also prevents those who were on the waiting list from being able to attend, and there was quite an extensive waiting list for the conference this year.  Here’s hoping that future DDD conferences have fewer no-shows.

It was mentioned that DDD North is now the biggest of all of the regional DDD events, with some 450 (approx.) attendees this year – a growth on last year’s numbers – with still over 100 more people on the waiting list.  The organisers told us that they could, if it weren’t for space/venue size considerations, have run the conference for around 600-700 people.  That’s quite some numbers and shows just how popular the DDD conferences, and especially the DDD North conference, are.

One especially nice touch was that I did receive a quick mention from Andy Westgarth, the main organiser of DDD North, during the final wrap-up presentation for the use of one of my pictures for an article that had appeared in the local newspaper, the Sunderland Echo, that very day.  The picture used was one I’d taken in the same lecture hall at DDD North 2013, two years earlier.  The article is available to read online, too.

After the wrapping up came the prize draw.  As always, there were some nice prizes to be given away by the conference organisers themselves as well as prizes from the individual sponsors, including a Nexus 9 tablet, a Surface Pro 3 and a whole host of other goodies.  As usual, I didn’t win anything, but I’d had a fantastic day at yet another superb DDD North.  Many thanks to the organisers and the various sponsors for enabling such a brilliant event to happen.

But…  It wasn’t over just yet!  There is usually a “Geek Dinner” after the DDD conferences, however, on this occasion there was to be a food and drink reception graciously hosted by Sunderland Software City.  So as we shuffled out of the Sunderland University campus, I headed to my car to drive the short distance to Sunderland Software City.

Upon arrival there was, unfortunately, no pop-up bar from Vaux Brewery as there had been two years prior.  This was a shame as I’m quite partial to a nice pint of real ale, however, the kind folks at Sunderland Software City had provided us with a number of buckets of ice-cold beers, wines and other beverages.  Of course, I was driving so I had to stick to the soft drinks anyway!

I was one of the first people to arrive at the Sunderland Software City venue as I’d driven the short distance from the University to get there, whereas most other people attending the reception were walking from the University.  I grabbed myself a can of Diet Coke and quickly got chatting to some fellow conference attendees, sharing experiences about our day, getting to know each other and finding out all about the work we each do.

Not too long after getting chatting, a few of the staff from the Centre were scurrying towards the door.  What we soon realised was that the “food” element of the “food & drink” reception was arriving.  We were being treated to what must be the single largest amount of pizza I’ve ever seen in one place.  76 delicious pizzas delivered from Pizza Hut!  Check out the photo of this magnificent sight!  (Oh, and those boxes of pizza are stacked two deep, too!)

So, once the pizzas had all been delivered and laid out for us on the extensive table top, we all got stuck in.  A few slices of pizza later and an additional can of Diet Coke to wash it down and it was back to mingling with the crowd for some more networking.

Before leaving, I managed to have a natter with Andy Westgarth, the main conference organiser, about the trials and tribulations of running a conference like DDD North.  Despite the fact that Andy should be living and working in the USA by the time the next DDD North conference rolls around, he did assure me that the conference is in very safe hands and should continue next year.

After some more conversation, it was finally time for me to leave and head off back to my in-laws in Newcastle.  And with that another superb DDD North conference was over.  Here’s looking forward to next year!

DDD South West 6 In Review

This past Saturday 25th April 2015 saw the 6th annual DDD South West event, this year being held at the Redcliffe Sixth Form Centre in Bristol.  This was my very first DDD South West event, having travelled south to the two DDD East Anglia events previously, but never to the south west for this one.

I’d travelled down south on the Friday evening before the event, staying in a Premier Inn in Gloucester.  This enabled me to only have a relatively short drive on the Saturday to get to Bristol and the DDD South West event.  After a restful night’s sleep in Gloucester, I started off on the journey to Bristol, arriving at one of the recommended car parks only a few minutes walk away from the DDDSW venue.

Upon arrival at the venue, I checked myself in and proceeded up the stairs to what is effectively the Sixth Form “common room”.  This was the main hall for the DDDSW event, where all the attendees would gather and have teas, coffees & snacks throughout the day.

Well, as is customary, the first order of business is breakfast!  Thanks to the generous sponsors of the event, we had ample amounts of tea, coffee and delicious danish pastries for our breakfast!  (Surprisingly, these delicious pastries managed to last through from breakfast to the first (and possibly second) tea-break of the day!)

Well, after breakfast there was a brief introduction from the organisers as to the day’s proceedings.  All sessions would be held in rooms on the second floor of the building, and all breaks, lunch and the final gathering for the customary prize draw would be held in the communal common room.  This year’s DDDSW had 4 main tracks of sessions with a further 5th track, which was the main sponsor’s track.  This 5th track only had two sessions throughout the day whilst the other 4 had 5 sessions each.

The first session of the day for me was “Why Service Oriented Architecture?” by Sean Farmar.

Sean starts his talk by mentioning how "small monoliths" of applications can, over time and after many tweaks to functionality, become large monoliths and a maintenance nightmare – a high risk to the business, leading to changes that are difficult to make and that can have unforeseen side-effects.  When we’ve created a large monolith of an application, we’re frequently left with a “big ball of mud”.

Sean talks about one of his first websites that he created back in the early 1990s.  It had around 5000 users, which by the standards of the day was a large number.  Both the internet and the web have grown exponentially since then, so 5000 users is very small by today’s standards.  Sean states that we can take those numbers and “add two noughts to the end” to get a figure for a large number of users today.  Due to this scaling of the user base, our application needs to scale too, but if we start on the path of creating that big ball of mud, we’ll simply create it far quicker today than we’ve ever done in the past.

Sean continues to state that after we learn from our mistakes with the monolithic big ball of mud, we usually move to web services.  We break a large monolith into much smaller monoliths, however, these web services then need to talk both to each other as well as to the consumers of the web service.  For example, the sales web service has to talk to the user web service, which then possibly has to talk to the credit web service in order to verify that a certain user can place an order of a specific size.  However, this creates dependencies between the various web services and each service becomes coupled in some way to one or more other services.  This coupling is a bad thing which prevents the individual web services from being able to exist and operate without the other web services upon which they depend.

From here, we often look to move towards a Service Oriented Architecture (SOA).  SOA’s core tenets are geared around reducing this coupling between our services.

Sean mentions the issues with coupling:

Afferent (dependents) & Efferent (depends on) – These are the other services that depend upon a given service (afferent) and the services that the given service itself depends upon (efferent).
Temporal (time, RPC) – This is mostly seen in synchronous communications – like when a service performs a remote procedure call (RPC) to another service and has to wait for the response.  The time taken to deliver the response is the temporal coupling of those services.
Spatial (deployment, endpoint address) – Sean explains this by talking about having numerous copies of (for example) a database connection string in many places.  A change to the database connection string can cause redeployments of complete parts of the system.

After looking at the problems with coupling, Sean moves on to looking at some solutions.  If we use XML (or even JSON) over the wire, along with XSD (or JSON Schema), we can define our messages and the transport of our messages using industry standards, allowing full interoperability.  To overcome the temporal coupling problems, we should use a publisher/subscriber (pub/sub) communication mechanism.  Publishers do not need to know the exact receivers of a message; it’s the subscriber’s responsibility to listen and respond to messages that it is interested in when the publisher publishes them.  To overcome the spatial issues, we can most often use a central message queue or service bus.  This allows publishers and subscribers to communicate with each other without hard references to the location of the publisher or subscriber on the network; they both only need to communicate with the single message bus endpoint.  This frees our application code from ever knowing who (or where) we are “talking to” when sending a command or event message to some other service within the system, pushing these issues down to being an infrastructure rather than an application level concern.  Usage of a message bus also gives us durability (persistence) of our messages, meaning that even if a service is down and unavailable when a particular event is raised, the service can still receive and respond to the event when it becomes available again at a later time.
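
To make the pub/sub idea concrete, here is a bus-agnostic C# sketch of my own (the IBus and IHandle interfaces are illustrative, not tied to any particular service bus product): the publisher raises an event message and never needs to know who handles it, while interested services simply implement a handler for that message type.

using System;

// A durable event message published onto the bus.  The publisher does not
// know (or care) which services subscribe to it.
public class OrderPlaced
{
    public Guid OrderId { get; set; }
    public Guid UserId { get; set; }
}

// The shape of a message bus, whichever product actually provides it.
public interface IBus
{
    void Publish<TEvent>(TEvent message);
}

// Implemented by any service interested in a particular event type.
public interface IHandle<TEvent>
{
    void Handle(TEvent message);
}

// A subscriber in the shipping service.  If this service is down when the
// event is published, a durable bus will deliver the message once the
// service becomes available again.
public class ShippingService : IHandle<OrderPlaced>
{
    public void Handle(OrderPlaced message)
    {
        Console.WriteLine($"Arranging shipping for order {message.OrderId}");
    }
}

// The publisher: the sales service records the order and announces the fact.
public class SalesService
{
    private readonly IBus bus;

    public SalesService(IBus bus) { this.bus = bus; }

    public void PlaceOrder(Guid userId)
    {
        // ... save the order, then publish the fact that it happened.
        bus.Publish(new OrderPlaced { OrderId = Guid.NewGuid(), UserId = userId });
    }
}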

arch Sean then shows us a diagram of a typical n-tier architecture system.  He mentions how “wide” the diagram is and how each “layer” of the application spans the full extent of that part of the system (i.e. the UI layer is a complete layer that contains all of the UI for the entire system).  All of these wide horizontal layers are dependent upon the layer above or beneath them.

Within a SOA architecture, we attempt to take this n-tier design and “slice” the diagram vertically.  Therefore each of our smaller services contains all of the layers - a service endpoint, business logic, data access layer and database - each in a thin, focused vertical slice for a specific area of functionality.

arch2 Sean remarks that if we're going to build this kind of system, or modify an existing n-tier system into these vertical slices of services, we must start at the database layer and separate that out.  Databases have their own transactions, which in a large monolithic DB can lock the whole DB, locking up the entire system.  This must be avoided at all costs.

Sean continues to talk about how our services should be designed.  Our services should be very granular.  i.e. we shouldn't have an "UpdateUser" method that performs creation and updates of all kinds of properties of a "User" entity, we should have separate "CreateUser", "UpdateUserPassword", "UpdateUserPhoneNumber" methods instead.  The reason is that, during maintenance, constantly extending an "UpdateUser" method forces it to take more and more arguments and parameters and to grow extensively in lines of code as it tries to handle more and more properties of a “user” entity, and it thus becomes unwieldy.  A simpler "UpdateUserPassword" method is sufficiently granular that it'll probably never need to change over its lifetime and will only ever require 1 or 2 arguments/parameters.
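
As a rough sketch of the contrast Sean describes (the interface and method names here are hypothetical, not taken from Sean's talk), the difference looks something like this:

using System;

// The coarse-grained shape: one method that keeps accumulating parameters
// as more and more properties of the "user" entity need to be updated.
public interface IUserService
{
    void UpdateUser(Guid userId, string password, string phoneNumber, string email);
}

// The granular shape: each method has a single, focused concern and
// rarely, if ever, needs to change over its lifetime.
public interface IGranularUserService
{
    void CreateUser(Guid userId, string userName);
    void UpdateUserPassword(Guid userId, string newPassword);
    void UpdateUserPhoneNumber(Guid userId, string newPhoneNumber);
}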

Sean then asks how many arguments our methods should take.  He says his own rule of thumb for the maximum number of arguments to any method is 2.  Once you find yourself needing 3 arguments, it's time to re-think and break up the method and create another new one.  By slicing the system vertically we do end up with many, many methods, however, each of these methods is very small, very simple and has its own specific, individual concern.

Next we look at synchronous vs asynchronous calls.  Remote procedure calls (RPC) will usually block and wait as one service waits for a reply from another.  This won’t scale in production to millions of users.  We should instead use the pub/sub mechanism, which allows for asynchronous messaging: a service that requires data from another service doesn't have to wait and block while the other service provides the data; it can subscribe to a message queue and be notified of the data when it's ready and available.

Sean goes on to indicate that things like a user’s address can be used by many services, however, it’s all about the context in which that piece of data is used by that service.  For this reason it’s ok for our system to have many different representations of, effectively, the same piece of data.  For example, to an accounting service, a user’s address is merely a string that gets printed onto a letter or an invoice and it has no further meaning beyond that.  However, to a shipping service, the user’s address can and probably will affect things like delivery timescales and shipping costs.

Sean ends his talk by explaining that, whilst a piece of data can be represented in different ways by different parts of the system, only one service ever has control to write that data whereas all other services that may need that data in their own representation will only ever be read-only.

 

image (15) The next session was Richard Dalton’s “Burnout”.  This was a fascinating session and is quite an unusual talk to have at these DDD events, albeit a very important talk to have, IMHO.  Richard’s session was not about a new technology or method of improving our software development techniques as many of the other sessions at the various DDD events are, but rather this session was about the “slump” that many programmers, especially programmers of a certain age, can often feel.  We call this “burnout”.

Richard started by pointing out that developer “burnout” isn’t a sudden “crash-and-burn” explosion that suddenly happens to us, but rather it’s more akin to a candle - a slow burn that gradually, but inevitably, burns away.  Richard wanted to talk about how burnout affected him and how it can affect all of us, and importantly, what can we do to overcome the burnout if and when it happens to us.  His talk is about “keeping the fire alive” – that motivation that gets you up in the morning and puts a spring in your step to get to work, get productive and achieve things.

Richard starts by briefly running through the agenda of his talk.  He says he’ll talk about the feelings of being a bad programmer, and the “slump” that you can feel within your career, he’ll talk about both the symptoms and causes of burnout, discuss our expectations versus the reality of being a software developer along with some anti-patterns and actions.

We’re shown a slide of some quite shocking statistics regarding the attrition rate of programmers.  Computer Science graduates were surveyed to see who was still working as a programmer after a certain length of time.  After 6 years, the proportion of CS graduates still working as a programmer is 57%, however after 20 years, this number is only 19%.  This suggests that the realistic average career span of a programmer is perhaps only around 20-30 years.

Richard continues by stating that there’s really no such thing as a “computer programmer” anymore – there’s no longer a job with that title.  We’re all “software developers” these days and whilst that obviously entails programming of computers, it also entails far more tasks and responsibilities.  Richard talks about how his own burnout started and he first felt it was at least partially caused by his job and his then current employer.  Although a good and generous employer, they were one of many companies who claimed to be agile, but really only did enough to be able to use the term without really becoming truly agile.  He left this company to move to one that really did fully embrace the agile mantra; however, due to lots of long-standing technical debt issues, agile didn’t really seem to be working for them.  Clearly, the first job was not the cause (or at least not the only cause) of Richard’s burnout.  He says how every day was a bad day, so much so that he could specifically remember the good days as they were so few and far between.

He felt his work had become both Dull and Overwhelming.  This is where the work you do is entirely unexciting with very little sense of accomplishment once performed, but also very overwhelming which was often manifested by taking far longer to accomplish some relatively simple task than should really have been taken, often due to “artificial complexity”.  Artificial complexity is the complexity that is not inherent within the system itself, but rather the complexity added by taking shortcuts in the software design in the name of expediency.  This accrues technical debt, which if not paid off quickly enough, leads to an unwieldy system which is difficult to maintain.  Richard also states how from this, he felt that he simply couldn’t make a difference.  His work seemed almost irrelevant in the grand scheme of things and this leads to frustration and procrastination.  This all eventually leads to feelings of self-doubt.

Richard continues talking about his own career and it was at this point he moved to Florida in the US where he worked for 1.5 years.  This was a massive change, but didn’t really address the burnout, and when Richard returned he felt as though the entire industry had moved on significantly in the 1.5 years he was away, whilst he himself had remained where he was before he went.  Richard wondered why he felt like this.  The industry had indeed changed in that time and it’s important to know that our industry does change at a very rapid pace.  Can we handle that pace of change?  Many more developers were turning to the internet and producing blogs of their own and the explosion of quality content for software developers to learn from was staggering.  Richard remarks that, in a way, we all feel cleverer after reading these blogs full of useful knowledge and information, but we also feel more stupid, as we assume that others know far more than we do.  What we need to remember is that we’re reading the blogs showing the “best” of each developer, not the worst.

We move on to actually discuss “What is burnout?”  Richard states that it really all starts with stress.  This stress is often caused by the expectation vs. reality gap – what we feel we should know vs. what we feel we actually do know.  Stress then leads to a cognitive decline.  The cognitive decline leads to work decline, which then causes further stress.  This becomes a vicious circle feeding upon itself, and this all starts long before we really consider that we may becoming burnt out.  It can manifest itself as a feeling of being trapped, particularly within our jobs and this leads itself onto feeling fatigued.  From here we can become irritable, start to feel self-doubt and become self-critical.  This can also lead to feeling overly negative and assuming that things just won’t work even when trying to work at them.  Richard uses a phrase that he felt within his own slump - “On good days he thought about changing jobs.  On bad days he thought about changing career”!  Richard continues by stating that often the Number 1 symptom of not having burnout is thinking that you do indeed have it.  If you think you’re suffering from burnout, you probably aren’t but when you do have it, you’ll know.

Now we move on to look at what actually leads to burnout.  This often starts with a set of unclear expectations, both in our work life and in our general life as software developers.  It can also come from having too many responsibilities, sleep and relaxation issues and a feeling of sustained pressure.  This often all occurs within the overarching feelings of a weight of expectation versus the reality of what can be achieved.

Richard states that it was this raised expectation of the industry itself (witness the emergence of agile development practices, test-driven development and a general maturing of many companies’ development processes and practices in a fairly short space of time), combined with a reality that simply didn’t live up to those expectations, that ultimately led to him feeling a great amount of stress.  For Richard, it was specifically around what he felt was a “bad” implementation of agile software development which actually created more pressure and artificial work stress.  The implementation of a new development practice that is supposed to improve productivity naturally raises expectations, but when it goes wrong, it can widen the gap between expectation and reality, causing ever more stress.  He does further mention that this trigger for his own feelings of stress may or may not be what causes stress in others.

Richard talks about some of the things that we do as software developers that can often contribute to the feelings of burnout or of increasing stress.  He discusses how software frameworks – for example the recent explosion of JavaScript frameworks – can lead to an overwhelming amount of choice.  Too much choice then often leads to paralysis and Richard shares a link to an interesting video of a TED talk that confirms this.  We then move on to discuss software side projects.  They’re something that many developers have, but if you’re using a side-project as a means to gain fulfilment when that fulfilment is lacking within your work or professional life, it’s often a false solution.  Using side-projects as a means to try out and learn a new technology is great, but they won’t fix underlying fulfilment issues within work.  Taking a break from software development is great, however, it’s often only a short-term fix.  Like a candle, if there’s plenty of wax left you can extinguish the candle then re-light it later, however, if the candle has burned to nothing, you can’t simply re-ignite the flame.  In this case, the short break won’t really help the underlying problem.

Richard proceeds to the final section of his talk and asks “what can we do to combat burnout?”  He suggests we must first “keep calm and lower our expectations!”.  This doesn’t mean giving up, it means continuing to desire the professionalism within both ourselves and the industry around us, but acknowledging and appreciating the gap that exists between expectation and reality.  He suggests we should do less and sleep more.  Taking more breaks away from the world of software development and simply “switching off” more often can help recharge those batteries and we’ll come back feeling a lot better about ourselves and our work.  If you do have side-projects, make it just one.  Having many side-projects is often the result of starting many things but finishing none.  Starting only one thing and seeing it through to the finish is a far better proposition and provides a far greater sense of accomplishment.  Finally, we look at how we can deal with procrastination.  Richard suggests one of the best ways to overcome it in work is to pair program.

Finally, Richard states that there’s no shame in burnout.  Lots of people suffer from it even if they don’t call it burnout, whenever you have that “slump” of productivity it can be a sign that it’s time to do something about it.  Ultimately, though, we each have to find our own way through it and do what works for us to overcome it.

 

image (19) The final talk before lunch was on the sponsor’s track, and was “Native Cross-Platform mobile apps with C# & Xamarin.Forms” by Peter Major.  Peter first states his agenda with this talk and that it’s all about Xamarin, Xamarin.Forms and what they both can and can’t do and also when you should use one over the other.

Peter starts by indicating that building mobile apps today is usually split between taking a purely “native” approach – where we code specifically for the target platform and often need a separate team of developers for each platform we’ll be supporting – versus a “hybrid” approach which often involves using technologies like HTML5 and JavaScript to build a cross-platform application which is then deployed to each specific platform via the use of a “container” (i.e. using tools like PhoneGap or Apache’s Cordova).

Peter continues by looking at what Xamarin is and what it can do for us.  Xamarin allows us to build mobile applications targeting multiple platforms (iOS, Android, Windows Phone) using C# as the language.  We can leverage virtually all of the .NET or Mono framework to accomplish this.  Xamarin provides “compiled-to-native” code for our target platforms and also provides a native UI for our target platforms too, meaning that the user interface must be designed and implemented using the standard and native design paradigms for each target platform.

Peter then talks about what Xamarin isn’t.  It’s not a write-once, run-anywhere UI, and it’s not a replacement for learning about how to design effective UI’s for each of the various target platforms.  You’ll still need to know the intricacies for each platform that you’re developing for.

Peter looks at Xamarin.iOS.  He states that it’s AOT (Ahead-Of-Time) compiled to an ARM assembly.  Our C# source code is pre-compiled to IL which in turn is compiled to a native ARM assembly which contains the Mono framework embedded within it.  This allows us as developers to use virtually the full extent of the .NET / Mono framework.  Peter then looks at Xamarin.Android.  This is slightly different from Xamarin.iOS as the C# is still compiled to IL code, but the IL code is then JIT (Just-In-Time) compiled inside a Mono virtual machine within the Android application.  It doesn’t run natively inside the Dalvik runtime on Android.  Finally, Peter looks at Xamarin.WindowsPhone.  This is perhaps the simplest to understand as the C# code is compiled to IL and this IL can run (in a Just-In-Time manner) directly against the Windows Phone’s own runtime.

Peter then looks at whether we can use our favourite SDK’s and NuGet Packages in our mobile apps.  Generally, the answer is yes.  SDK’s such as Amazon’s Kinesis for example are fully usable, but NuGet packages need to target PCL’s (Portable Class Libraries) if they’re to be used.

Peter asks whether applications built with Xamarin run slower than pure native apps, and the answer is that they generally run at around the same speed.  Peter shows some statistics around this; however, he does also state that the app will certainly be larger in size than a natively written app.  Peter indicates, though, that Xamarin does have a linker, so it will build your app with a cut-down version of the Mono framework that only includes those parts of the framework that you’re actually using.

We can use pretty much all C# code and target virtually all of the .NET framework’s classes when using Xamarin with the exception of any dynamic code, so we can’t target the dynamic language runtime or use the dynamic keyword within our code.  Because of this, usage of certain standard .NET frameworks such as WCF (Windows Communication Foundation) should be done very carefully as there can often be dynamic types used behind the scenes.

Peter then moves on to talk about the next evolution with Xamarin, Xamarin.Forms.  We’re told that Xamarin.Forms is effectively an abstraction layer over the disparate UI’s for the various platforms (iOS, Android, Windows Phone).  Without Xamarin.Forms, the UI of our application needs to be designed and developed to be specific for each platform that we’re targeting, even if the application code can be shared, but with Xamarin.Forms the amount of platform specific UI code is massively reduced.  It’s important to note that the UI is not completely abstracted away, there's still some amount of specific code per platform, but it's a lot less than when using "standard" Xamarin without Xamarin.Forms.

Developing with Xamarin.Forms is very similar to developing a WPF (Windows Presentation Foundation) application.  XAML is used for the UI mark-up, and the premise is that it allows the developer to develop by feature and not by platform.  Similarly to WPF, the UI can be built up using code as well as XAML mark-up, for example:

Content = new StackLayout { Children = { new Button { Text = "Normal" } } };

Xamarin.Forms works by taking our mark-up that defines the placement of Xamarin.Forms specific “controls” and user interface elements and converting them using a platform-specific “renderer” to a native platform control.  By default, using the standard built-in renderers means that our apps won’t necessarily “look” like the native apps you’d find on the platform.  You can customize specific UI elements (i.e. a button control) for all platforms, or you can make the customisation platform specific.  This is achieved with a custom renderer class that inherits from (for example) the EntryRenderer and adds the required customisations that are specific to the platform that is being targeted.
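
A minimal sketch of such a custom renderer, living in the iOS platform project, might look like the following (MyCustomEntry and the specific styling applied here are hypothetical examples of mine, not code from Peter's talk):

using Xamarin.Forms;
using Xamarin.Forms.Platform.iOS;

[assembly: ExportRenderer(typeof(MyCustomEntry), typeof(MyCustomEntryRenderer))]

// The Xamarin.Forms control used in the shared code.
public class MyCustomEntry : Entry { }

// The iOS-specific renderer; Control is the underlying native UITextField.
public class MyCustomEntryRenderer : EntryRenderer
{
    protected override void OnElementChanged(ElementChangedEventArgs<Entry> e)
    {
        base.OnElementChanged(e);
        if (Control != null)
        {
            // Apply iOS-only customisation to the native control.
            Control.BorderStyle = UIKit.UITextBorderStyle.None;
        }
    }
}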

Peter continues to tell us that Xamarin.Forms apps are best developed using the MVVM pattern.  MVVM is Model-View-ViewModel and allows a good separation of concerns when developing applications, keeping the application code separate from the user interface code.  This mirrors the best-practice for development of WPF applications.  Peter also highlights the fact that most of the built-in controls will provide two-way data binding right out of the box.  Xamarin.Forms has "attached properties" and triggers.  You can "watch" a specific property on a UI element and, in response to changes to the property, you can alter other properties on other UI elements.  This provides a nice and clean way to effectively achieve the same functionality as the much older (and more verbose) INotifyPropertyChanged event pattern provides.
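
To make the MVVM approach concrete, here is a minimal sketch of a view model and a two-way binding; OrderViewModel, CustomerName and OrderPage are hypothetical names of mine rather than Peter's example:

using System.ComponentModel;
using System.Runtime.CompilerServices;
using Xamarin.Forms;

public class OrderViewModel : INotifyPropertyChanged
{
    private string _customerName;
    public string CustomerName
    {
        get { return _customerName; }
        set { _customerName = value; OnPropertyChanged(); }
    }

    public event PropertyChangedEventHandler PropertyChanged;

    protected void OnPropertyChanged([CallerMemberName] string name = null)
    {
        PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(name));
    }
}

public class OrderPage : ContentPage
{
    public OrderPage()
    {
        var entry = new Entry { Placeholder = "Customer name" };

        // Two-way binding keeps the Entry's text and the view model's CustomerName in sync.
        entry.SetBinding(Entry.TextProperty, "CustomerName", BindingMode.TwoWay);

        BindingContext = new OrderViewModel();
        Content = new StackLayout { Children = { entry } };
    }
}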

Peter proceeds to talk about how he performs testing of his Xamarin and Xamarin.Forms apps.  He says he doesn’t do much unit testing, but performs extensive behavioural testing of the complete application instead.  For this, he recommends using Xamarin’s own Calabash framework.

Peter continues by explaining how Xamarin.Forms mark-up contains built-in simple behaviours so, for example, you can check a textbox's input is numeric without needing to write your own code-behind methods to perform this functionality.  It can be as simple as using mark-up similar to this:

<Entry Placeholder="Sample">
  <Entry.Behaviors>
    <NumericTextboxBehaviour />
  </Entry.Behaviors>
</Entry>

Peter remarks on the speed of apps developed with Xamarin.Forms and concludes that they are definitely slower than either native apps or even apps developed with plain Xamarin.  This is, unfortunately, the trade-off for the improved productivity in development.

Finally, Peter concludes his talk by summarising his views on Xamarin.Forms.  The good:  One UI Layout and very customizable although this customization does come with a fair amount of initial investment to get platform-specific customisations looking good.  The bad:  Xamarin.Forms does still contain some bugs which can be a development burden.  There’s no XAML “designer” like there is for WPF apps – it all has to be written in a basic mark-up editor. Peter also states how the built-in Xamarin.Forms renderers can contain some internal code that is difficult to override, thus limiting the level of customization in certain circumstances.  Finally, he states that Xamarin.Forms is not open source, which could be a deciding factor for adoption by some developers.

 

IMG_20150425_131838 After Peter’s talk it was time for lunch!  Lunch at DDDSW was very fitting for the location we were in, the South-West of England.  As a result, lunch consisted of a rather large pasty, with a choice of either Steak or Cheese & Onion varieties, along with a packet of crisps, a piece of fruit (a choice of apples, bananas or oranges) and more tea and coffee!  I must say, this was a very nice touch – especially having some substantial hot food – and certainly made a difference from a lot of the food that is usually served for lunch at the various DDD events (which is generally a sandwich with no hot food options available).

IMG_20150425_131849 After scoffing my way through the large pasty, my crisps and the fruit – after which I was suitably satiated – I popped outside the building to make a quick phone call and enjoy some of the now pleasant and sunny weather that had overcome Bristol.

IMG_20150425_131954 After a pleasant stroll around outdoors during which I was able to work off at least a few of the calories I’d just consumed, I headed back towards the Redcliffe Sixth Form Centre for the two remaining sessions of the afternoon.

I headed back inside and up the stairs to the session rooms to find the next session.  This one, similar to the first of the morning, was all about Service Oriented Architecture and designing distributed applications.

image (1) So the first of the afternoon’s sessions was “Introduction to NServiceBus and the Particular Platform” by Mauro Servienti.  Mauro’s talk was to be an introduction to designing and building distributed applications with a SOA (Service Oriented Architecture) and how we can use a platform like NServiceBus to easily enable that architecture.

Mauro first starts with his agenda for the talk.  He’ll explain what SOA is all about, then he’ll move on to discuss long running workflows in a distributed system and how state can be used within them.  Finally, he’ll look at asynchronous monitoring of asynchronous processes, for those times when something may go wrong, allowing us to see where and when it did.

Mauro starts by explaining the core tenets of NServiceBus.  Within NServiceBus, all boundaries are explicit.  Services are constrained and cannot share things between them.  Services can share schema and a contract but never classes.  Services are also autonomous, and service compatibility is based solely upon policy.

NServiceBus is built around messages.  Messages are divided into two types, commands and events.  Each message is an atomic piece of information and is used to drive the system forward in some way.  Commands are imperative messages and are directed to a well-known receiver.  The receiver is expected (but not compelled) to act upon the command.  Events are also messages that are an immutable representation of something that has already happened.  They are directed to anyone that is interested.  Commands and events are messages with a semantic meaning and NServiceBus enforces the semantics of commands and events - it prevents trying to broadcast a Command message to many different, possibly unknown, subscribers and enforces this kind of “fire-and-forget” publishing only for Event messages.

We’re told about the two major messaging patterns.  The first is request and response.  Within the request/response pattern, a message is sent to a known destination - the sender knows the receiver perfectly but the receiver doesn't necessarily know the sender.  Here, there is coupling between the sender and the receiver.  The other major message pattern is publish and subscribe (commonly referred to as pub/sub).  This pattern has constituent parts of the system become “actors”, and each “actor” in the system can act on some message that is received.  Command messages are created and every command also raises an event message to indicate that the command was requested.  These event messages are published and subscribers to the event can subscribe and receive these events without having to be known to the command generator.  Events are broadcast to anyone interested and subscribers can subscribe, listen and act on the event, or choose not to act on it.  Within a pub/sub system, there is much less coupling between the system’s constituent parts, and the little coupling that exists is inverted, that is, the subscriber knows where the publisher is, not the other way round.

In a pub/sub pattern, versioning is the responsibility of the publisher.  The publisher can publish multiple versions of the same event each time an event is published.  This means that we can have numerous subscribers, each of which can be listening for, and acting upon, different versions of the same event message.  As a developer using NServiceBus, your job is primarily to write message handlers to handle the various messages passing around the system.  Handlers must be stateless.  This helps scalability as well as concurrency.  Handlers live inside an “Endpoint” and are hosted somewhere within the system.  Handlers are grouped into “services”, which are a logical concept within the business domain (i.e. shipping, accounting etc.).  Services are hosted within Endpoints, and Endpoint instances run on a Windows machine, usually as a Windows Service.

NServiceBus messages are simply classes.  They must be serializable to be sent over the wire.  NServiceBus messages are generally stored and processed within memory, but can be made durable so that if a subscriber fails and is unavailable (for example, the machine has crashed or gone down) these messages can be retrieved from persistent storage once the machine is back online.

NServiceBus message handlers are also simply classes, which implement the IHandleMessages generic interface like so:

public class MyMessageHandler : IHandleMessages<MyMessage>
{
    // NServiceBus invokes this method whenever a MyMessage arrives at the endpoint.
    public void Handle(MyMessage message) { /* process the message here */ }
}

So here we have a class defined to handle messages of the type MyMessage.

NServiceBus endpoints are defined within either the app.config or the web.config files within the solution:

<UnicastBusConfig>
  <MessageEndpointMappings>
    <add Assembly="MyMessages" Endpoint="MyMessagesEndpoint" />
  </MessageEndpointMappings>
</UnicastBusConfig>

Such configuration settings are only required on the Sender of the message.  There is no need to configure anything on the message receiver.

NServiceBus has a BusConfiguration class.  You use it to define which messages are defined as commands and which are defined as events.  This is easily performed with code such as the following:

var cfg = new BusConfiguration();

cfg.UsePersistence<InMemoryPersistence>();
cfg.Conventions()
    .DefiningCommandsAs( t => t.Namespace != null && t.Namespace.EndsWith( ".Commands" ) )
    .DefiningEventsAs( t => t.Namespace != null && t.Namespace.EndsWith( ".Events" ) );

using ( var bus = Bus.Create( cfg ).Start() )
{
    Console.Read();
}

Here, we’re declaring that the Bus will use in-memory persistence (rather than any disk-based persistence of messages), and we’re saying that all of our command messages are defined within a namespace that ends with the string “.Commands” and that all of our event messages are defined within a namespace ending with the string “.Events”.
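
As an illustration of those conventions, a pair of hypothetical message types (the PlaceOrder and OrderPlaced names are my own invention, not Mauro's sample code) might be defined like this:

using System;

namespace Sales.Messages.Commands
{
    // A command: imperative, sent to a single well-known receiver.
    public class PlaceOrder
    {
        public Guid OrderId { get; set; }
        public decimal Amount { get; set; }
    }
}

namespace Sales.Messages.Events
{
    // An event: a record of something that has already happened,
    // published to any subscriber that is interested.
    public class OrderPlaced
    {
        public Guid OrderId { get; set; }
        public DateTime PlacedAtUtc { get; set; }
    }
}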

Mauro then shows all of this theory with some code samples.  He has an extensive set of samples that shows virtually all aspects of NServiceBus and this solution is freely available on GitHub at the following URL:  https://github.com/mauroservienti/NServiceBus.Samples

Mauro goes on to state that when sending and receiving commands, the subscriber will usually work with concrete classes when handling messages for that specific command, however, when sending or receiving event messages, the subscriber will work with interfaces rather than concrete classes.  This is a best practice and helps greatly with versioning.

NServiceBus allows you to use your own persistence store for persisting messages.  A typical store used is RavenDB, but virtually anything can be used.  There are only two interfaces that need to be implemented by a storage provider, and many well-known databases and storage mechanisms (RavenDB, NHibernate/SQL Server etc.) have integrations with NServiceBus such that they can be used as persistent storage.  NServiceBus can also use third-party message queues.  MSMQ, RabbitMQ, SQL Server, Azure Service Bus etc. can all be used.  By default, NServiceBus uses the built-in Windows MSMQ for the messaging.

Mauro goes on to talk about state.  He asks, “What if you need state during a long-running workflow of message passing?”  He explains how NServiceBus accomplishes this using “Sagas”.    Sagas are durable, stateful and reliable, and they guarantee state persistence across message handling.  They can express message and state correlation and they empower "timeouts" within the system to make decisions in an asynchronous world – i.e. they allow a command publisher to be notified after a specific "timeout" of elapsed time as to whether the command did what was expected or if something went wrong.   Mauro demonstrates this using his NServiceBus sample code.
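
To give a flavour of what a saga looks like in code, here is a minimal sketch against the NServiceBus 5-era API; the StartOrder and OrderPaymentTimeout message types and the 30 minute timeout are assumptions of mine, and this is not Mauro's sample code:

using System;
using NServiceBus;
using NServiceBus.Saga;

public class StartOrder { public Guid OrderId { get; set; } }
public class OrderPaymentTimeout { }

public class OrderSagaData : ContainSagaData
{
    public Guid OrderId { get; set; }
}

public class OrderSaga : Saga<OrderSagaData>,
    IAmStartedByMessages<StartOrder>,
    IHandleTimeouts<OrderPaymentTimeout>
{
    protected override void ConfigureHowToFindSaga(SagaPropertyMapper<OrderSagaData> mapper)
    {
        // Correlate incoming messages with the right saga instance via the OrderId.
        mapper.ConfigureMapping<StartOrder>(m => m.OrderId).ToSaga(s => s.OrderId);
    }

    public void Handle(StartOrder message)
    {
        Data.OrderId = message.OrderId;

        // Ask the bus to call us back if nothing has happened within 30 minutes.
        RequestTimeout<OrderPaymentTimeout>(TimeSpan.FromMinutes(30));
    }

    public void Timeout(OrderPaymentTimeout state)
    {
        // Decide what to do now that the timeout has elapsed, then end the saga.
        MarkAsComplete();
    }
}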

Mauro explains how the business endpoints are responsible for storing the business state used at each stage (or step) of a saga.  The original message that kicks off a saga only stores the "orchestration" state of the saga (for example, an Order management service could start a  saga that uses an Order Creation service, a Warehouse Service and a Shipping service that creates an order, picks the items to pack and then finally ships them).

The final part of Mauro’s talk is about monitoring and how we can monitor the various messages and flow through all of the messages passing around an inherently asynchronous system.  He states that auditing is a key feature, and that this is required when we have many asynchronous messages floating around a system in a disparate fashion.  NServiceBus provides a “behind-the-scenes” piece of software called “ServiceControl”.  ServiceControl sits in the background of all components within a system that are publishing or subscribing to NServiceBus messages and it keeps its own copy of all messages sent and received within that entire system.  It therefore allows us to have a single place where we can get a complete overview of all of the messages from the entire system along with their current state.

serviceinsight-sagaflow The company behind NServiceBus also provides separate software called “ServiceInsight”, which Mauro quickly demonstrates to us showing how it provides a holistic overview and monitoring of the entire message passing process and the instantiation and individual steps of long-running sagas.  It displays all of this data in a user interface that looks not dissimilar to a SSIS (SQL Server Integration Service) workflow diagram. 

Mauro states that handling asynchronous messages can be hard.  In a system built with many disparate messages, we cannot ever afford to lose a single message.  To prevent message loss, Mauro says that we should never use try/catch blocks within our business code.  He states that NServiceBus will automatically “add” this kind of error handling within the creation, generation and sending of messages.  We need to consider transient failures as well as business errors.  NServiceBus will perform its own retries for transient failures of messages, but business errors must be handled by our own code.  Eventually, messages that still fail to be delivered after the configured maximum number of retries are placed into a special error message queue by NServiceBus itself, and this allows us to handle these failed messages as special cases.  To this end, Particular Software also have a separate piece of software called "ServicePulse" which allows monitoring of the entire infrastructure.  This includes all message endpoints, to see if they’re up and available to send/receive messages, as well as full monitoring of the failed message queue.

IMG_20150425_155100image (3) After Mauro’s talk it was time for another break.  Unlike the earlier breaks throughout the day, this one was a bit special.  As well as the usual teas and coffees that were available all day long, this break treated all of the attendees to some lovely cream teas!  This was a very pleasant surprise and ensured that all conference attendees were incredibly well-fed throughout the entire conference.  Kudos to the organisers, and specifically the sponsors who allowed all this to be possible.

 

After our lovely break with the coffee and cream teas, it was on to the second session of the afternoon and indeed, the final session of the whole DDD event.  The final session was entitled “Monoliths to Microservices : A Journey”, presented by Sam Elamin.

IMG_20150425_160220 Sam works for Just Eat, and his talk is all about how he has gone on a journey within his work to move from large, monolithic applications to re-implementing the required functionality in a leaner, distributed system composed largely of micro-services.

Sam first mentions the motivation behind his talk: Failure.  He describes how we learn from our failures, and states that we need to talk about our failures more as it’s only from failure that we can actually really improve.

He asks, “Why do we build monoliths?”  As developers, we know it will become painful over time, but we build these monolithic systems because we need to build something quickly and ship it fast.  People then use our system and we, over time, add more and more features into it.  We very rarely, if ever, get the opportunity to go back and break things down into better structured code and implement a better architecture.  Wanting to spend time performing such work is often a very hard sell to the business as we’re talking to them about a pain that they don’t feel.  It’s only the developers who feel the pain during maintenance work.

Sam then states that it’s not all a bed of roses if we break down our systems into smaller parts.  Breaking down a monolithic application into smaller components reduces the complexity of each individual component but that complexity isn’t removed from the system.  It’s moved from within the individual components to the interactions between the components.

Sam shares a definition of "What is a microservice?"  He says that Greg Young once said, "It’s anything you can rewrite in a week".  He states that a micro service should be a "business context", i.e. a single business responsibility and discrete piece of business functionality.

But how do we start to move a monolithic application to a smaller, microservices-based application?  Well, Sam tells us that he himself started with DDD (Domain Driven Design) for the design of the application and to determine bounded contexts – which are the distinct areas of services or functionality within the system.  These boundaries would then communicate, as the rest of the system communicated, with messages in a pub/sub (Publisher/Subscriber) mechanism, and each conceptual part of the system was entirely encapsulated by an interface – all other parts of the system could only communicate through this interface.

Sam then talks about something that they hadn’t actually considered when they first started on the journey: Race Hazards.  Race Hazards, or Race Conditions as they can also be known, within a distributed message-based architecture are failures in the system due to messages being lost or being received out of order, and the inability of the system to deal with this.  Testing for these kinds of failures is hard as asynchronous messages can be difficult to test by their very nature.

Along the journey, Sam discovered that things weren’t proceeding as well as expected.  The boundaries within the system were unclear and there was no clear ownership of each bounded context within the business.  This is something that is really needed in order for each context to be accurately defined and developed.  It’s also really important to get a good ubiquitous language - which is a language and way of talking about the system that is structured around the domain model and used by all team members to connect all the activities of the team with the software - correct so that time and effort is not wasted trying to translate between code "language" and domain language.

Sam mentioned how the teams’ overly strict code review process actually slowed them down.  He says that Code Reviews are usually used to address the symptom rather than the underlying problem which is not having good enough tests of the software, services and components.  He says this also applies to ensuring the system has the necessary amount of monitoring, auditing and metrics implemented within to ensure speedy diagnosis of problems.

Sam talks about how, in a distributed system of many micro-services, there can be a lot of data duplication.  One area of the system can deal with its own definition of a “customer”, whilst another area of the system deals with its own definition of that same “customer” data.  He says that businesses fear things like data duplication, but that it really shouldn't matter in a distributed system and it's often actually a good thing – this is frequently seen in systems that implement CQRS patterns, eventual consistency and correct separation of concerns and contexts due to implementation of DDD.  Sam states that, for him, code decoupling is vastly preferable to code duplication – if you have to duplicate some code in two different areas of the system in order to correctly provide a decoupled environment, then that’s perfectly acceptable, and introducing coupling simply to avoid code duplication is bad.

He further states that business monitoring (in-production monitoring of the running application and infrastructure) is also preferable to acceptance tests.  Continual monitoring of the entire production system is the most useful set of metrics for the business, and with metrics comes freedom.  You can improve the system only when you know which bits you can easily replace, and only when you know which bits actually need to be replaced for the right reasons (i.e. replacing one component due to low performance where the business monitoring has identified that this low-performance component is a genuine system bottleneck).  Specifically, business monitoring can provide great insights not just into the system’s performance but also the business’s performance and trends.  For example, monitoring can surface data such as spikes in usage.  From here we can implement alerts based upon known metrics – i.e. we know we get around X number of orders between 6pm and 10pm on a Friday night; if this number drops by Y%, then send an alert.

Sam talks about "EventStorming" (a phrase coined by Alberto Brandolini) with the business/domain experts.  He says he would get them all together in a room and talk about the various “events” and the various “commands” that exist within the business domain, but whilst avoiding any vocabulary that is technical.  All language used is expressed within the context of the business domain (i.e. Order comes in, Product is shipped etc).  He states that using Event Storming really helped to move the development of the system forward and really helped to define both the correct boundaries of the domain contexts, and the functionality that each separate service within the system would provide.

Finally, Sam says the downside of moving to microservices is that it's a very time-consuming approach and can be very expensive (both in terms of financial cost and time cost) to define the system, the bounded contexts and the individual commands and events of the system.  Despite this, it’s a great approach and, using it within his own work, Sam has found that it’s provided the developers within his company with a reliable, scalable and maintainable system, and most importantly it’s provided the business with a system that supports their business needs both now and into the future.

IMG_20150425_170540

After Sam’s session was over, we all reconvened in the common room and communal hall for the final section of the day.  This was the prize-draw and final wrap-up.

The organisers first thanked the very generous sponsors of the event as without them the event simply would not have happened.  Moreover, we wouldn’t have been anywhere near as well fed as we were!

There were a number of prize draws, and the first batch was a prize from each of the in-house sponsors who had been exhibiting at the event.  Prizes here ranged from a ticket to the next NDC conference to a Raspberry Pi kit.

After the in-house sponsors had given away their individual prizes, there was a “main” prize draw based upon winners drawn randomly from the event feedback that was provided about each session by each conference attendee.  Amongst the prizes were iPad mini’s, a Nexus 9 tablet, technical books, laser pens and a myriad of software licenses.  I sat as the winners’ names were read out, watching as each person was called and the iPad’s and Nexus 9 were claimed by the first few people who were drawn as winners.  Eventually, my own name was read out!  I was very happy and went up to the desk to claim my prize.  Unfortunately, the iPad’s and Nexus 9 were already gone, but I managed to get myself a license for PostSharp Ultimate!IMG_20150504_143116

After this, the day’s event was over.  There was a customary geek dinner that was to take place at a local Tapas restaurant later in the evening, however, I had a long drive home from Bristol back to the North-West ahead of me so I was unable to attend the geek dinner after-event.

So, my first DDD South-West was over, and I must say it was an excellent event.  Very well run and organised by the organisers and the staff of the venue and of course, made possible by the fantastic sponsors.  I’d had a really great day and I can’t wait for next year’s DDDSW event!

DDD North 2014 In Review

Outside the Entrance This past Saturday, 18th October 2014, saw another DDD (Developer, Developer, Developer) event.  This one was the 4th annual DDD North event, this year held at the University Of Leeds.

Communal Area After arriving and signing in, I proceeded through the corridors to the communal area where we were all greeted with a cup of coffee (or tea) and a nice Danish pastry!  It’s always a nice surprise to get a nice cake with your morning coffee, so although I wasn’t really hungry as I’d recently eaten a large breakfast, I decided that a Danish Pastry covered in sweet, sweet icing was too much of a temptation to be able to refuse!Danish Pastries  After this delightful breakfast, I headed down the corridor for the first of the day’s sessions.

The first session of the day is Liam Westley’s “An Actor’s Life For Me”, which talks about parallel processing with multiple threads using the Task Parallel Library and utilising the Actor Model.  Liam introduces the Actor model and states it was first described by Carl Hewitt as early as 1973.  The dilemma we have for parallel processing is due to shared state, causing us to lock around areas of memory where multiple threads may try to access that state.  The Actor model solves this by not having shared state within the system, instead having each process take stateless data that is not shared and outputting stateless data to the next process in the processing pipeline.  Liam uses an analogy of making a cup of tea and the steps involved in that, whilst also getting an itch that needs scratching during the tea-making.  The itch (and thus the scratch) can happen during any of the tea-making steps, so the number of ways the tea-making and scratching can interleave grows exponentially.Liam Westley's Actor Pattern

Liam talks about how CPU’s have been multi-threaded and multi-core for many years now, first arriving around the same time as .NET v1.0, whilst in the same time frame, our developer tools haven’t really kept up.  .NET 1.0 pretty much gave us raw access to how Windows handles threads using the ThreadPool, which meant managing multiple threads and sharing state between them was very difficult.  .NET 2.0 gave us a SynchronizationContext, but multi-threaded programming was still very hard.  Eventually, we got the much simplified async & await keywords, and now we have the Task Parallel Library (and its Dataflow blocks), which provides us with the Actor pattern.  This basically allows us to write our code in individual “blocks” which are essentially black boxes sharing no state with any other block.  We can then chain these blocks together into a processing pipeline, giving us the ability to perform some computational process without sharing state.

Liam then shows us a demo of a console application which produces an MD5 hash for a number of large files in a folder.  The first iteration of the demo shows this happening without using the Task Parallel Library (TPL) and so performs no parallel processing, simply processing each file one at a time on a single thread and taking some time to complete.  The second iteration Liam shows us uses the TPL, but still only works in a single-threaded manner by wrapping the hash calculation function as a TPL ActionBlock.  This iteration does the same as the single-threaded version, as again, no parallel processing is occurring.  The final iteration runs in a multi-threaded manner by simply setting the MaxDegreeOfParallelism property on the block’s configuration (ExecutionDataflowBlockOptions).  What’s really amazing about these ActionBlocks is that they inherently and implicitly handle all input and output buffering and queuing by themselves.  This means we can add many items into the processing pipeline at a faster rate than they can be processed, and the TPL will handle the queuing for us.
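
Something along the lines of the following sketch illustrates the technique (this is not Liam's actual demo code; the folder path and the degree of parallelism of 4 are arbitrary choices of mine):

using System;
using System.IO;
using System.Security.Cryptography;
using System.Threading.Tasks.Dataflow;

class HashFilesDemo
{
    static void Main()
    {
        // An ActionBlock queues posted items internally and, with this option set,
        // processes them on up to 4 threads at once.
        var hashBlock = new ActionBlock<string>(path =>
        {
            using (var md5 = MD5.Create())
            using (var stream = File.OpenRead(path))
            {
                var hash = BitConverter.ToString(md5.ComputeHash(stream));
                Console.WriteLine("{0}: {1}", Path.GetFileName(path), hash);
            }
        },
        new ExecutionDataflowBlockOptions { MaxDegreeOfParallelism = 4 });

        foreach (var file in Directory.EnumerateFiles(@"C:\SomeFolder"))
        {
            hashBlock.Post(file);
        }

        hashBlock.Complete();
        hashBlock.Completion.Wait();
    }
}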

20141018_095624 Liam next talks about separating the processing and calculating of the file hashes by performing these in a TransformBlock rather than an ActionBlock, and only using ActionBlocks to print the hash value to the UI.  The output of the TransformBlock (the hash value and the filename) is passed to the ActionBlock in the processing pipeline.

Liam then introduces the BufferBlock.  This acts as a propagator between other blocks and a FIFO queue of data.  Liam talks about how, in our example, we can add a BufferBlock in front of all of the TransformBlocks which will effectively evenly distribute the “load” as we provide the files to be hashed between the TransformBlocks. 

Next, Liam shows how we can use the LinkTo method which allows us to filter the passing of blocks along the processing pipeline, as the LinkTo method allows us to pass a predicate to perform the filtering.  This could be used (for example) to hash files of different types by different TransformBlocks (i.e. an MP3 file is processed differently than an MP4 file etc.).  Liam also introduces the TransformManyBlock which takes an IEnumerable of things to process.  This means we no longer have to have our own loop through each of the files to be processed, instead, we can simply pass in the contents of the folder’s files as a complete IEnumerable collection.
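
A rough sketch of how LinkTo's predicate overload can route messages down different paths (again, hypothetical code of mine rather than Liam's demo; the MP3 check and the fake "hash" are just placeholders):

using System;
using System.Threading.Tasks;
using System.Threading.Tasks.Dataflow;

class RoutingPipelineDemo
{
    static void Main()
    {
        var linkOptions = new DataflowLinkOptions { PropagateCompletion = true };

        // The TransformBlock turns a file name into a (file name, "hash") pair.
        var hashFiles = new TransformBlock<string, Tuple<string, string>>(
            path => Tuple.Create(path, path.GetHashCode().ToString("X")));

        var handleMp3 = new ActionBlock<Tuple<string, string>>(
            r => Console.WriteLine("MP3   {0}: {1}", r.Item1, r.Item2));
        var handleOther = new ActionBlock<Tuple<string, string>>(
            r => Console.WriteLine("Other {0}: {1}", r.Item1, r.Item2));

        // The predicate decides which downstream block receives each message.
        hashFiles.LinkTo(handleMp3, linkOptions, r => r.Item1.EndsWith(".mp3", StringComparison.OrdinalIgnoreCase));
        hashFiles.LinkTo(handleOther, linkOptions);

        foreach (var file in new[] { "song.mp3", "video.mp4", "notes.txt" })
        {
            hashFiles.Post(file);
        }

        hashFiles.Complete();
        Task.WaitAll(handleMp3.Completion, handleOther.Completion);
    }
}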

Finally, Liam mentions both the BroadcastBlock and the BatchBlock.  The BroadcastBlock is effectively a pub/sub mechanism as used in Message Buses etc. which allows fanning-out of the messages and broadcasting to other blocks.  The BatchBlock allows batching of inputs before passing the messages along the processing pipeline.

All in all, Liam’s talk was very informative and shows just how far we’ve come in our ability to relatively easily and simply perform parallel processing in a multi-threaded manner, taking advantage of all of the cores available to us on a modern day machine.  Liam’s demo code has been made available on GitHub for those interested in learning more.

 

20141018_110411 The next talk is Ian Cooper’s “Not Just Layers! – What can pipelines and events do for you?”, which is a talk about Data Flow Architectures, and specifically Pipelines and Events.  Ian first talks about general software architecture and how processes evolve from basic application of a skill through to adoption of genuine craftsmanship and best-practices.  Software Architecture has many styles, but a single style can be explained as a series of components and connectors.  Components are the individual parts of an architecture that do something and the connectors are how multiple components talk to each other.

Ian states that Data Flow architectures are more driven by behaviour rather than state, and says that functional languages (such as F#) are better suited to behaviourally modelled architecture, whereas object oriented (OO) languages like C# are better suited to solve state driven processes and architectures.

Ian uses the KWIC (Keyword in context) algorithm, which is how Unix indexes text in its man pages, as the reference for the session.

Ian talks about pipes and filters, and states that it’s a flow of data processing along a pipeline of specific stages.  A push pipeline “pushes” tasks along the pipeline, the pipeline usually consisting of a pump at the front, which pushes data into the pipeline, with a series of filters which are the processing tasks, each preceding filter being responsible for pushing the data to the succeeding filter in the pipeline.  There’s also usually a sink at the end that provides the final end result.  There are also pull pipelines, of which .NET’s LINQ is an example, where each filter further along the pipeline pulls the data from the previous filter, rather than the previous filter pushing the data on.

20141018_113104 Ian mentions how a pipes and filters architecture is similar to a batch sequence architecture (see below for the subtle difference between them).  He talks about how errors that may happen in a long-running sequence, and that need the entire processing stream to be undone, are better suited to a batch sequence architecture than a pipes and filters architecture, due to the more disconnected nature of the pipes and filters architecture.

Ian talks about parallel execution and the potential pub/sub problem of consumers awaiting data and not knowing when the entire workload is completed.  If individual steps are either faster or slower than the preceding or succeeding steps in the chain, this can cause problems with either no data, or too much data to process.  The solution to this problem is to introduce a “buffer” in between steps within the chain.  Such things as Message Queues (i.e. MSMQ, RabbitMQ etc) or in-memory caching mechanisms (such as those provided by tools like Redis) can offer this.

20141018_113427 Ian then shows us an in-memory demo of a program using the pipes and filters architecture.  Ian states that, ideally, filters in a pipeline shouldn’t really know about other filters, but it’s okay for them to be aware of an abstraction of the next filter in the pipeline, just not the concrete instance of that filter.  Ian uses the KWIC algorithm for the demo code.  Ian shows the same demo using the manual pipeline and filters, and also a LINQ implementation.  The LINQ example has its filters implemented as fluent method calls simply chained together (i.e. TextLines.Shift(x=>x).RemoveNoise(x => x).Sort() etc.).  Ian then shows the same example written in F#.  Using F#’s pipeline operator “|>”, the pipeline is even easier to see in the code that implements it.
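
In C#, a pull pipeline in the same fluent spirit can be sketched with chained extension methods; the Shift and RemoveNoise filters below are hypothetical stand-ins for the KWIC steps, not Ian's actual demo code:

using System;
using System.Collections.Generic;
using System.Linq;

public static class KwicFilters
{
    // Each filter takes a sequence in and yields a sequence out, so the filters chain
    // fluently and each stage pulls from the previous one as results are enumerated.
    public static IEnumerable<string> Shift(this IEnumerable<string> lines)
    {
        return lines.SelectMany(line =>
        {
            var words = line.Split(' ');
            return Enumerable.Range(0, words.Length)
                             .Select(i => string.Join(" ", words.Skip(i).Concat(words.Take(i))));
        });
    }

    public static IEnumerable<string> RemoveNoise(this IEnumerable<string> lines, ISet<string> noiseWords)
    {
        return lines.Where(line => !noiseWords.Contains(line.Split(' ').First().ToLowerInvariant()));
    }
}

class KwicDemo
{
    static void Main()
    {
        var titles = new[] { "The Quick Brown Fox" };
        var noise = new HashSet<string> { "the", "a", "of" };

        foreach (var entry in titles.Shift().RemoveNoise(noise).OrderBy(x => x))
        {
            Console.WriteLine(entry);
        }
    }
}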

Ian shows us the demo code using a message queue (using MSMQ behind the scenes), this shows a pull based pipeline where each filter down the chain pulls messages from a message queue to which messages are posted by the preceding filter in the pipeline chain.  Ian also shows us the pipeline running in a parallel manner, using the Task Parallel Library.  Each filter has distinct Inputs and Outputs defined as BlockingCollection<T> allowing the data to flow in and out, but to be blocked on the individual thread if the next filter in the pipeline isn’t ready to receive that data.

Finally, Ian talks about Batch Sequences and how they differ slightly from a pipes and filters architecture.  He talks about how you did batch sequencing many years ago with magnetic tapes being passed from one reel-to-reel processing machine to the next!  The main difference between Batch Sequence and Pipes and Filters is that in a batch sequence, each filter has to complete the entire workload of data before passing everything as output to the next filter in the chain.  By contrast, pipes and filters has each filter process only one small piece of work or one individual piece of data before passing it down the processing chain.  This means that true pipes and filters is much better suited to being parallelized than a batch sequence architecture.

 

20141018_125418_LLS The next session is Richard Tasker’sBDD and why you should be doing it”.  Richard starts by introducing BDD (Behaviour Driven Development) and where it originated.  It was first proposed by Dan North as a “solution” to some of the failings of TDD such as: Where do you start with TDD? What to test and what not to test? and How much to test in one go?

Richard starts by talking about his first exposures to understanding BDD.  This started with writing expressive names for standard unit tests.  This helps us understand what the test is testing and thus what the code is doing, i.e. it’s an expression of a behaviour of the code.  It’s from here that we can make the mental leap from testing and exercising small methods of code to testing the more user-centric behaviour of the overall application.

Richard shows a series of Database Entity Relationship diagrams as the first mechanism he used to design an application used to model car parts and their relation to vehicles.  This had to go through a number of iterations to fully realise the entities involved and their relationships to each other and it wasn’t the most effective way to achieve the overall design.  Using a series of User Stories which could be turned into BDD tests was the way forward.

Richard next introduced the MoSCoW method as the way in which he started writing his BDD tests.  Using this method combined with the new style of user story templates emphasises the behaviour and business function.  Instead of writing “As a <type of user> I want <some functionality> so that <some benefit>”, we instead write, “In order to <achieve some value>, as a <type of user>, I should have <some functionality>”.  The last part of the user story gets the relevant must/should/could/won’t wording in order to help achieve effective prioritization with the customer.

Cynefin_as_of_1st_June_2014 Richard then introduces SpecFlow as his BDD tool of choice.  He shows a simple demo of a single SpecFlow acceptance test, backed by a number of standard unit tests.  Richard says that you probably don’t want to do this for every individual tiny part of your application as this can lead to an abundance of unit tests and further lead to a test maintenance burden.  To help solve this, Richard talks about Decision Frameworks, of which a popular one is called “Cynefin”.   It defines states of Obvious, Chaotic, Complex and Complicated.  Each area of the application and discrete pieces of functionality can be assessed to see which of the four Cynefin states they may fall into.  From here, we can decide how many or how few BDD Acceptance tests are best utilised for that feature to deliver the best return on investment.  Richard says that Acceptance tests are often best used in Complicated & Complex states, but are often less useful in Obvious & Chaotic states.
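
For a flavour of what the SpecFlow side of this looks like in C#, here is a minimal step binding sketch; the scenario wording, step names and the hard-coded lookup are all hypothetical rather than Richard's demo:

using System.Collections.Generic;
using NUnit.Framework;
using TechTalk.SpecFlow;

// Binds steps for a Gherkin scenario along the lines of:
//   Given I have selected a "Ford Focus"
//   When I search for compatible brake pads
//   Then at least one matching part should be listed
[Binding]
public class CompatiblePartsSteps
{
    private string _vehicle;
    private readonly List<string> _results = new List<string>();

    [Given(@"I have selected a ""(.*)""")]
    public void GivenIHaveSelectedA(string vehicle)
    {
        _vehicle = vehicle;
    }

    [When(@"I search for compatible brake pads")]
    public void WhenISearchForCompatibleBrakePads()
    {
        // In a real test this would call into the application; hard-coded for the sketch.
        _results.Add("Brake pads for " + _vehicle);
    }

    [Then(@"at least one matching part should be listed")]
    public void ThenAtLeastOneMatchingPartShouldBeListed()
    {
        Assert.IsNotEmpty(_results);
    }
}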

Richard closes his session with the “why” of BDD.  He talks about many of the benefits of adopting it, and says that it is a great helper for teams that are new to TDD.  BDD helps to reduce communication barriers between developers and other technical professionals on the one hand and the perhaps less technical business stakeholders on the other, and it helps with prioritizing which features should be implemented before others.  BDD also helps with naming things, with defining the specific behaviours of our application in a more user-oriented way, and with defining the meaning of “done”.

 

20141018_131051_LLS After Richard’s talk, it was lunchtime.  Lunch was served in the same communal area where we’d all gathered earlier at breakfast time and consisted of a rather nice sandwich, a bag of crisps and a drink.  It was nice that all three items could be chosen by each individual attendee from a selection available.

20141018_131444_LLS After enjoying this very nice lunch, I decided to skip the Grok talks (these are short, 10 minute talks that generally happen over lunchtime at the various DDD conferences) and get some fresh air outside.  That didn’t last too long, as I found the Pack Horse pub just down the road from the area of the university used for the conference.  This is a pub belonging to a small local microbrewery called The Burley Street Brewhouse.  I decided I had to go in and sneak a cheeky pint of bitter as a lunchtime treat.  It was indeed a lovely pint and afterwards, I headed back to the university and the DDD North conference.  I went back in via an entrance close to the communal area, which was still housing some conference attendees, and realised that a number of sandwiches and crisps were still available for any attendee who wanted second helpings!  Still a bit peckish after my liquid refreshment (and knowing that I wouldn’t be eating until quite late in the evening at the after-conference Geek Dinner), I decided to go for seconds!  After enjoying my second helpings, I headed off for the first session of the afternoon.

 

20141018_143120_LLS The first afternoon session is Andrew MacDonald’s “CQRS & Event Sourcing”.  Andrew first talks about the how and why of starting development on a brand new project.  Andrew has his own development project, treevue.com, for which he decided to try out CQRS and event sourcing as two interesting new techniques that he believed could help with the development of his software.  treevue.com is a web product which offers virtual data rooms.  Andrew talks about the benefits of CQRS and event sourcing, such as allowing a truly abstracted data storage model, providing domain driven design without noise, and how separating reads and writes to the data model via CQRS could open up new possibilities for the software.  Andrew states that it’s not appropriate for everything, and quotes Udi Dahan, who said that most people who have used CQRS shouldn’t have done so!

CQRS is Command Query Responsibility Segregation, and it keeps commands (processes that alter our data) entirely distinct from queries (processes that only read our data but don’t change it).  The models behind each of these can be entirely different, even when referring to the same domain entities, so the model used to read (for example) a Customer can have a quite different design from the model used to write one.
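
A minimal sketch of that separation might look like the following.  This is my own illustration rather than Andrew’s treevue code, and all of the type names are hypothetical:

using System;

// Write side: a command captures an intent to change data and is handled by
// something that validates it and updates the write store.
public class RenameCustomerCommand
{
    public Guid CustomerId { get; set; }
    public string NewName { get; set; }
}

public interface ICommandHandler<TCommand>
{
    void Handle(TCommand command);
}

// Read side: a flat, display-friendly model served by a query handler,
// potentially from a completely separate (eventually consistent) store.
public class CustomerSummary
{
    public Guid Id { get; set; }
    public string Name { get; set; }
    public int OpenOrders { get; set; }
}

public interface IQueryHandler<TQuery, TResult>
{
    TResult Handle(TQuery query);
}

Because the two sides share nothing but the domain’s language, each can be shaped, stored and scaled to suit its own workload.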

Architectures Compared_thumb Andrew talks about the overall architecture of a system that employs CQRS vs. one that doesn’t.  Without CQRS, reads and writes flow through the same layers of our application.  With CQRS, we can have entirely different architectures for reading vs. writing.  Usually the writing architecture is similar to the entire non-CQRS architecture, flowing through many layers including data access, validation layers etc., but often the reading architecture uses a much flatter set of layers to read the data, as concerns such as validation are generally not required in this context.  The two separate reading and writing stacks can often even connect to separate databases which provide “eventual consistency” with each other.  This also means reading and writing can scale independently of each other, and given that many apps read far more than they write, this can be invaluable.

image19 Andrew then introduces Event Sourcing which, whilst separate and different from CQRS, plays very well with it.  Andrew shows a typical relational model of a purchase order, with multiple purchase order line item types related to it and a separate shipping info type attached.  This model only allows us to see the state of the order and its data as it stands right now.  Event sourcing instead stores the timeline of events against the purchase order, with each alteration to the entity recorded separately in an event queue/database.  For example, a line item is added with an (incorrect) quantity of 4, and is later corrected by an event deducting 2 from that line item, leaving it with the correct quantity of 2.  This provides us with the ability not only to see how the data looks “right now”, but to recreate the entire state of the entity model at any given point in time.
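
That replay idea is easy to see in a small sketch.  The code below is my own illustration of the purchase order example rather than Andrew’s implementation, and the event and property names are hypothetical:

using System.Collections.Generic;

public abstract class OrderEvent { }
public class LineItemAdded : OrderEvent { public string Sku; public int Quantity; }
public class LineItemQuantityAdjusted : OrderEvent { public string Sku; public int Delta; }

public class PurchaseOrder
{
    public readonly Dictionary<string, int> Lines = new Dictionary<string, int>();

    // Fold a single event into the current state.
    public void Apply(OrderEvent e)
    {
        var added = e as LineItemAdded;
        if (added != null) { Lines[added.Sku] = added.Quantity; return; }   // e.g. quantity 4

        var adjusted = e as LineItemQuantityAdjusted;
        if (adjusted != null) { Lines[adjusted.Sku] += adjusted.Delta; }    // e.g. -2, leaving 2
    }

    // Rebuild the state at any point by replaying the stream up to that point.
    public static PurchaseOrder FromEvents(IEnumerable<OrderEvent> events)
    {
        var order = new PurchaseOrder();
        foreach (var e in events) order.Apply(e);
        return order;
    }
}

Replaying a LineItemAdded with quantity 4 followed by a LineItemQuantityAdjusted of -2 through FromEvents yields a line item with a quantity of 2, and truncating the stream at an earlier event gives the state as it was at that moment.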

Andrew then proceeds to talk about Azure’s role in his treevue app and how he’s utilised Azure’s Table Storage as a first class citizen.  He then shows us a quick demo and some code using EventProcessors and CommandProcessors which effectively implement the CQRS pattern. 

Finally, Andrew shows how he uses something called a “snapshot” when reading domain aggregates.  This is effectively a caching layer used to improve the performance of building the domain aggregate models from the various events that make up a specific state of the model at a certain point in time.  This is particularly important when running applications in the cloud and using technology such as Azure Table Storage, which will only serve back a maximum of 1000 rows per query before you, as the developer, have to make further requests (via a continuation token) for more data.  Andrew points out that the demo code is available on GitHub for those interested in diving deeper and learning more from his own implementation.
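
Continuing my hypothetical PurchaseOrder sketch from above (not the treevue code), a snapshot can be as simple as the folded state plus a count of how many events it already includes, so that only the newer events need replaying:

using System.Collections.Generic;
using System.Linq;

public class OrderSnapshot
{
    public int Version;              // number of events already folded into State
    public PurchaseOrder State;
}

public static class OrderLoader
{
    public static PurchaseOrder Load(OrderSnapshot snapshot, IList<OrderEvent> allEvents)
    {
        var order = snapshot != null ? snapshot.State : new PurchaseOrder();
        var alreadyApplied = snapshot != null ? snapshot.Version : 0;

        // Replay only the tail of the stream - far fewer round trips when the
        // full history has to be paged out of a store like Azure Table Storage.
        foreach (var e in allEvents.Skip(alreadyApplied))
            order.Apply(e);

        return order;
    }
}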

 

20141018_154117_LLS The final session of the day is David Whitney’s “Lessons Learnt running a public API”.  David is a freelance consultant who has worked for many companies writing large public APIs.  The reference project throughout David’s talk is the work he did with Just Giving.  David states that the project to build the Just Giving API grew so large that the API eventually became the company’s biggest revenue stream.

David’s talk is a fast-paced set of tips, tricks and lessons that he has personally learned over many years of working with clients developing their large public-facing APIs.

David starts by stating that your API is your public-facing contract with the world, and that it will live or die by the strength of its documentation.  If the documentation is bad, people will write bad implementations, and you can’t blame them when that happens.  Documentation for APIs can either be created first, which then drives the design of the API, or it can be done the other way around, where you write the API first and document it afterwards.  Either approach is viable, so long as documentation does indeed exist and is sufficiently comprehensive to allow your consumers to build quality implementations of your API.  David says it’s often best to host the docs with the API itself, so that if a human user hits the API endpoint with a web browser, you serve up the API documentation.

David states that the DTOs returned from API calls should provide “examples” of themselves.  This is a simple mechanism that lets users “discover” your API and helps them to understand just how they should use it.  Code such as this:

public interface IProvideAnExampleOf<TMyself>
{
    ExampleOf<TMyself>[] BuildExample();
}

public class ExampleOf<T>
{
    public string Description { get; set; }
    public T Example { get; set; }

    public ExampleOf(string description, T example)
    {
        Description = description;
        Example = example;
    }
}

will enable your API to provide examples of itself to your users.  David states that anything you can do to help your API consumers will greatly cut down the inevitable avalanche of help requests that will hit you.
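
By way of illustration, a DTO might implement the interface like this (DonationDto is my own hypothetical example, not one of Just Giving’s types):

public class DonationDto : IProvideAnExampleOf<DonationDto>
{
    public decimal Amount { get; set; }
    public string Currency { get; set; }

    // Returns a worked example of this resource for the documentation to render.
    public ExampleOf<DonationDto>[] BuildExample()
    {
        return new[]
        {
            new ExampleOf<DonationDto>(
                "A typical single donation",
                new DonationDto { Amount = 10.00m, Currency = "GBP" })
        };
    }
}

The documentation endpoint can then reflect over every type implementing IProvideAnExampleOf<T> and show a concrete example alongside each resource.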

Following on from individual examples, it’s good to have your API and its documentation provide “recipes” for how to use large sections of your API, and how to call discrete service endpoints in a coherent chain in order to achieve a specific outcome.  Recipes help your users to “fall into the pit of success”.  Providing something like a complete sample web application, ideally written in multiple languages, that exercises various parts of your API is even better.

David next talks about versioning of your API, and says that it’s something you have to have a policy on from day one.  Retrofitting versioning is very hard and often leads to broken or awkward implementations.  Adding version numbers to the URI is perhaps the easiest approach to implement, but it’s not really the best; it’s far better to put the API version in an HTTP header.
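
From the consumer’s side, header-based versioning can be as simple as the sketch below.  The “api-version” header name and the api.example.com endpoint are purely illustrative assumptions, not Just Giving’s actual scheme:

using System;
using System.Net.Http;

class VersionedApiClient
{
    static void Main()
    {
        var client = new HttpClient { BaseAddress = new Uri("https://api.example.com/") };

        // The URI never changes between versions; the header selects the contract,
        // so consumers still on version 1 are never broken by a new release.
        client.DefaultRequestHeaders.Add("api-version", "2");

        var response = client.GetAsync("donations/123").Result;
        Console.WriteLine(response.StatusCode);
    }
}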

He continues by talking about modifying existing API calls.  Don’t.  Just don’t do it, at any cost!  If you really must, you can add additional data to the return values of your API endpoints, but you must never change or remove anything that’s already there, and you must never rename anything.  If you need to do any of this, introduce a new version.  This leads into content types, and here David states that you’ll really need to provide all the different content types that people will realistically use.  Whilst many web developers today see JSON as the de-facto standard, many companies – especially large enterprises – are still using XML as theirs, so your API is going to have to support both.  David also mentions that JSONP is another format you may well have to support, but be careful if you do, as you’ll need to be mindful of possible errors relating to CORS (Cross-Origin Resource Sharing), which governs whether resources such as JavaScript can be called from domains other than the one where the resource is hosted.

David talks about the importance of making statistics for your API available and public.  You need to ensure you’re gathering performance and other statistics on every method call.  One possibility is to return some statistics directly to the consumer in the HTTP response headers of every request, such as the name of the server that serviced the request; this is especially useful if you’ve got a large server farm and need help debugging service call issues.  You should also publicly expose your statistics in a dashboard, via status updates, uptime pages and more.  For one, it’ll help you deflect any criticisms that your performance is broken, and it’ll give consumers confidence that your API is up, that it stays up, and that you’re on top of maintaining it.  (Unless, of course, your performance really is broken, in which case that same fancy dashboard will give you the visibility to diagnose and correct the issue!)  David next mentions the importance of a good staging server for user testing.  Don’t simply expose an internal “test” server that you may have cobbled together.  David relates first-hand experience of just how difficult it can be to get users to stop using your “test” server after you’ve allowed them access!
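
In an ASP.NET Web API stack, one way to surface that kind of per-request diagnostic information is a message handler along the lines of the sketch below.  This is a minimal illustration of the idea, and the header names are my own invention:

using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

public class DiagnosticsHandler : DelegatingHandler
{
    protected override async Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        var stopwatch = System.Diagnostics.Stopwatch.StartNew();
        var response = await base.SendAsync(request, cancellationToken);
        stopwatch.Stop();

        // Which machine served the call, and how long it took - invaluable when
        // chasing issues across a large server farm.
        response.Headers.Add("X-Served-By", System.Environment.MachineName);
        response.Headers.Add("X-Elapsed-Ms", stopwatch.ElapsedMilliseconds.ToString());
        return response;
    }
}

Registering it with config.MessageHandlers.Add(new DiagnosticsHandler()); means every response then carries the server name and timing information.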

20141018_162628_LLS The next part of the session focuses on the overall approach to the design of your API.  David stresses that it’s good to go back and read the original description of RESTful architecture, written by Roy Fielding as a doctoral dissertation back in the year 2000.  Further, it’s important to lean on existing conventions – always return canonical URIs rather than relative ones, and always supply IDs and URIs when returning data that refers to any domain or service entity.  As well as following existing standards, it’s also important to investigate new, emerging ones.  Standards such as HAL (Hypertext Application Language) and JSON API are worth tracking so that, should they quickly become mainstream, you can adapt your API to support them.

David continues his session by talking about the cardinal sins of API design.  The first thing you must never do is this:

{
    "PageType": 1,
    "SomeText": "This is some text"
}

What, exactly, is PageType 1?  We’re talking, of course, about magic numbers.  Don’t do it.  This forces your consumers to go off and look the value up in the documentation, and whilst that documentation should definitely exist, there’s no reason why you can’t provide a more meaningful value to your consumer.  You have to think like a consumer at all times and try to imagine the applications they’re going to build using your API.  Also, don’t ever ask a user for data that your API itself can’t supply – i.e. don’t ever request some specific identifier for a resource if you don’t provide that identifier when returning that resource in other requests.  Build your services RESTfully; don’t build XML-RPC with SOAP envelopes.  Be resource oriented, and always ensure you use the correct HTTP verbs for all of your services’ actions – especially understand the difference between POST & PUT.
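
One simple way to avoid the magic number, assuming a Json.NET-based stack (my assumption, not something David prescribed), is to model the value as an enum and serialise it by name.  The “Article” and “Gallery” page types here are hypothetical:

using Newtonsoft.Json;
using Newtonsoft.Json.Converters;

public enum PageType { Unknown = 0, Article = 1, Gallery = 2 }

public class PageDto
{
    // Serialise the enum by name rather than by number, so consumers see
    // "PageType": "Article" instead of the opaque value 1.
    [JsonConverter(typeof(StringEnumConverter))]
    public PageType PageType { get; set; }

    public string SomeText { get; set; }
}

Serialising new PageDto { PageType = PageType.Article, SomeText = "This is some text" } then produces a response a consumer can read without reaching for the documentation.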

Make sure you understand multi-tenancy and how that will impact the design and implementation of your API.  Good load balancers and proxies can balance based on request headers, so it’s really easy and useful to provide multi-tenancy in this manner.  Also ensure you use a good sandbox environment for testing and don’t forget to implement good rate limiting!   Users and consumers will make mistakes in their code and you don’t want them to take down your service when they do.

David talks about error handling and says you should validate everything you can when requests are made to your API.  Try to return errors in batches if possible, and always make sure that error messages are useful and readable.  Similar to the magic numbers above, don’t return only an arcane error code to your consumers and force them to cross-reference it from deep within your documentation.

20141018_163740_LLS David moves on to authentication for your API and states that this is an area that can get a bit painful.  Basic HTTP Auth will get you going, and can be sufficient if your API is (and will remain) fairly small scale.  However, if your API is large, or likely to grow to a larger scale – and especially if it will be used by users via third parties – you’ll quickly outgrow Basic Auth and need something more robust.  He says that OAuth is the “best worst” alternative: it provides good security but can be painful to implement.  Fortunately, there are many third-party providers out there to whom you can outsource your authorisation concerns.

David then discusses providing support for your API to your users.  He says the best approach is to simply put it all out there in the public domain.  This provides transparency which is a good thing, but can also encourage a “self-service” model where people within the community will start to help provide answers and solutions to other community members.  Something as simple as a Google Group or a tag on Stack Overflow can get you started.

David closes his session by stating that, as your API grows over time, always ensure that you’re never attempting to serve only a single customer.  Keep your API clean and generic and it will remain useful to all consumers, rather than compromising that usefulness for just a minority of users.  And finally, if your API is or will become a first-class product for your business, just as the Just Giving API became for them, make sure you have a full product team within your business to deal with its day to day operation and its ongoing maintenance and development.  It’s all too easy to think that the API isn’t strictly a “product” due to its highly technical and slightly opaque nature, however, doing so would be a mistake.

 

20141018_173357_LLS After David’s session, we all congregated in the main lecture theatre for the wrap up presentation from Andy Westgarth, one of the conference organisers.  This involved thanking the very generous sponsors of the event as without them there simply wouldn’t be a DDD conference, and it also involved a prize giving session – the prizes consisting of books, T-shirts, some Visual Studio headphones and a main prize of a Surface Pro 3!

After the excellent day, I headed to the pub which was very conveniently located immediately across the road from the venue entrance.  I had a few hours to kill until the Geek Dinner, which was to be held later that evening at Pizza Express in the Leeds Corn Exchange.  I enjoyed a couple of pints of Leeds Pale Ale before heading off to the Pizza Express venue for my dinner.

20141018_224309_LLS The Geek Dinner was attended by approximately 40 people and a fantastic time was had by all.  I was sat close to one of the day’s earlier speakers, Andrew MacDonald, and we had a good old chinwag about past projects, work, and life as a software developer in general.

Overall, the DDD North 2014 event and the Geek Dinner afterwards were a fantastic success.  Andy promised that there’d be another one in 2015, which will be held back up in the North-East of England due to the alternating location of DDD North, so here’s looking forward to another wonderful DDD North conference in 2015.