DDD South West 2017 In Review

This past Saturday, 6th May 2017, the seventh iteration of the DDD South West conference was held in Bristol, at the Redcliffe Sixth Form Centre.

This was the second DDD South West conference I’d attended, having been to the previous one in 2015 (the event took a year off in 2016), and on returning in 2017 I found that the DDD South West team had put on another fine DDD conference.

Since the conference is in Bristol, I’d stayed over the evening before in Gloucester in order to break up the long journey down to the south west.  After an early breakfast on the Saturday morning, I made the relatively short drive from Gloucester to Bristol in order to attend the conference.

Arriving at the Redcliffe centre, we signed in and made our way to the main hall where we were treated to teas, coffees and Danish pastries.  I remembered these from the previous DDDSW conference, and they were excellent and very much enjoyed by the arriving conference attendees.

After a short while, in which I had another cup of coffee and the remaining attendees arrived, the organisers gave a brief introduction, running through the list of sponsors who help to fund the event and some other housekeeping information.  Once this was done, it was time for us to head off to the relevant rooms for our first session.  DDDSW 7 had 4 separate tracks; for 2 of the event slots there was also a 5th track, the sponsor track, where one of the technical people from the primary sponsor gives a talk on some of the tech they’re using – these talks are often a great way to gain insight into how other people and companies do things.

I headed off down the narrow corridor to my first session.  This was Ian Cooper’s 12-Factor Apps In .NET.

Ian starts off by introducing the 12 factors: 12 common best practices for building web-based applications, especially in the modern cloud-first world.  There’s a website detailing the 12 factors.  The 12 factors and the website itself were first assembled by the team behind the Heroku cloud application platform, as the set of best practices shared by the most successful applications running on their platform.

Ian states how the 12 factors cover 3 broad categories: Design, Build & Release, and Management.  We first look at getting set up with a new project when you’re a new developer on a team.  Setup automation should be declarative.  It should ideally take no more than 10 minutes for a new developer to go from a clean development machine to a working build of the project.  This is frequently not the case for most teams, but we should work to reduce the friction and the time it takes to get developers up and running with the codebase.  Ian also states that a single application, although it may consist of many different individual projects and “moving parts”, should remain together in a single source code repository.  Although you’re keeping multiple distinct parts of the application together, you can still easily build and deploy the individual parts independently based upon individual .csproj files.

Next, we talk about application servers.  We should try to avoid application servers if possible.  What is an application server?  Well, it can be many things, but the most obvious one in the world of .NET web applications is Internet Information Server itself.  This is an application server and we often require it for our code to run within.  IIS has a lot of configuration of its own, and so our code becomes tied to and dependent upon that specifically configured application server.  An alternative is to have our code self-host, which is entirely possible with ASP.NET Web API and ASP.NET Core.  By self-hosting and running our endpoint on a specific port, to differentiate it from other services that may run on the same server, we ensure our code is far more portable and platform agnostic.
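As a rough illustration of the self-hosting Ian describes (my own minimal sketch rather than code from the talk), an ASP.NET Core application can run on Kestrel bound to an explicit port, with no dependency on IIS at all:

using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Http;

public class Program
{
    public static void Main(string[] args)
    {
        // Self-host on Kestrel, bound to an explicit port, with no IIS dependency.
        var host = new WebHostBuilder()
            .UseKestrel()
            .UseUrls("http://0.0.0.0:5001")
            .Configure(app => app.Run(context =>
                context.Response.WriteAsync("Hello from a self-hosted endpoint")))
            .Build();

        host.Run();
    }
}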

Ian then talks about backing services.  These are the concerns such as databases, file systems etc. that our application will inevitably need to talk to.  We need to treat them as the potentially ephemeral systems that they are, and therefore accept that such a system may not even have a fixed location.  Using a service discovery service, such as Consul.io, is a good way to remove our application’s dependence on a required service being in a specific place.

Ian mentions the ports and adapters architecture (aka hexagonal architecture) for organising the codebase itself.  He likes this architecture as it’s not only a very clean way to separate concerns and keep the domain at the heart of the application model, it also works well in the context of a “12-factor” compliant application, as the terminology (specifically around the use of “ports”) is similar and consistent.  We then look at the performance of our overall application.  Client requests to our front-end website should be responded to in around 200-300 milliseconds.  If, as part of that request, there’s some long-running process that needs to be performed, we should offload that processing to some background task or external service, which can update the web site “out-of-band” when the processing is complete, allowing the front-end website to render the initial page very quickly.
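As a minimal sketch of my own (not from Ian’s talk) of this “respond fast, process later” idea, the controller below simply queues the work and returns a 202 straight away, leaving a background worker to do the heavy lifting out-of-band:

using System.Collections.Concurrent;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

// A deliberately simple in-memory work queue; a real system would use a durable queue or broker.
public static class ReportQueue
{
    private static readonly BlockingCollection<int> Work = new BlockingCollection<int>();

    public static void Enqueue(int reportId) => Work.Add(reportId);

    // Started once at application startup; processes work outside the request/response cycle.
    public static Task StartWorker() => Task.Run(() =>
    {
        foreach (var reportId in Work.GetConsumingEnumerable())
        {
            // ... long-running processing happens here, then the site is updated out-of-band ...
        }
    });
}

public class ReportsController : Controller
{
    [HttpPost("reports/{id}")]
    public IActionResult Generate(int id)
    {
        ReportQueue.Enqueue(id); // hand the slow work off
        return StatusCode(202);  // respond to the client within milliseconds
    }
}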

This leads on to ensuring our services can start up very quickly, ideally in no more than a second or two, and can shut down gracefully too.  If we have slow startup, it’s usually because of the need to build complex state, so, like the web front-end itself, we should offload that state building to a background task.  If our service absolutely needs that state before it can process incoming requests, we can buffer or queue the early requests the service receives immediately after startup until our background initialization is complete.  As an aside, Ian mentions supervisord as a handy tool to keep our services and processes alive.  One great benefit of keeping our startup fast and lean and our shutdown graceful is that we essentially get elastic scaling for that service!
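A tiny sketch of my own (again, not from the talk) of deferring expensive state building: the service starts immediately, and any early requests simply await the warm-up task before being processed:

using System.Collections.Generic;
using System.Threading.Tasks;

public class ProductCatalogueService
{
    // Kick off the expensive initialisation in the background so the service itself
    // can report as started within a second or two.
    private readonly Task<Dictionary<string, decimal>> _warmUp = Task.Run(() => LoadPrices());

    public async Task<decimal> GetPriceAsync(string sku)
    {
        // Early requests are effectively buffered here until the warm-up completes.
        var prices = await _warmUp;
        return prices[sku];
    }

    private static Dictionary<string, decimal> LoadPrices()
    {
        // ... slow work: hydrate caches, load reference data, etc. ...
        return new Dictionary<string, decimal> { ["example-sku"] = 9.99m };
    }
}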

Ian now starts to show us some demo code that demonstrates many of the 12 best practices within the “12-factors”.  He uses the ToDoBackend website as a repository of sample code that frequently follows the 12-factor best practices.  Ian specifically points out the C# ASP.NET Core implementation of ToDoBackend as it was contributed by one of his colleagues.  The entire ToDoBackend website is a great place where many backend frameworks from many different languages and platforms can be compared and contrasted.

Ian talks about the ToDoBackend ASP.NET Core implementation and how it is built to use his own libraries, Brighter and Darker.  These are open-source libraries implementing the command dispatcher/processor pattern, which allows building applications along CQRS lines, giving a more decomposed and decoupled application.  MediatR is a similar library that can be used for the same purposes.  The Brighter library wraps the Polly library, which provides essential resilience and transient fault handling functionality for a highly decomposed application: retries, the circuit breaker pattern, timeouts and other patterns are implemented, allowing you to ensure your application, whilst decomposed, remains robust to transient faults and errors – such as services becoming unavailable.
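Brighter’s own configuration isn’t shown here, but as a hedged sketch of the underlying Polly patterns Ian mentions, a retry policy and a circuit breaker can be wrapped around a potentially flaky call along these lines:

using System;
using System.Net.Http;
using System.Threading.Tasks;
using Polly;

public static class ResilientCalls
{
    public static async Task<string> GetQuoteAsync(HttpClient client, string url)
    {
        // Retry transient failures with exponential back-off.
        var retry = Policy
            .Handle<HttpRequestException>()
            .WaitAndRetryAsync(3, attempt => TimeSpan.FromSeconds(Math.Pow(2, attempt)));

        // Stop hammering a service that keeps failing: break the circuit for 30 seconds
        // after 5 consecutive exceptions.
        var circuitBreaker = Policy
            .Handle<HttpRequestException>()
            .CircuitBreakerAsync(5, TimeSpan.FromSeconds(30));

        var policy = Policy.WrapAsync(retry, circuitBreaker);

        return await policy.ExecuteAsync(() => client.GetStringAsync(url));
    }
}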

Looking back at the source code, we should ensure that we explicitly isolate and declare our application’s dependencies.  This means not relying on pre-installed frameworks or other library code that already exists on a given machine.  For .NET applications, this primarily means depending only on locally installed NuGet packages and specifically avoiding referencing anything installed in the GAC.  Other dependencies, such as application configuration – and especially configuration that differs from environment to environment, i.e. database connection strings for development, QA and production databases – should be kept within the environment itself rather than existing within the source code.  We often do this with web.config transformations, although it’s better if our web.config merely references external environment variables, with those variables being defined and maintained elsewhere.  Ian mentions HashiCorp’s Vault project and the Spring Boot projects as ways in which you can maintain environment variables and other application “secrets”.  An added bonus of such a setup is that your application secrets are never added to your source code, meaning that, if your code is open source and available on something like GitHub, you won’t be exposing sensitive information to the world!
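As a small illustrative sketch (using the standard Microsoft.Extensions.Configuration packages rather than anything shown in the talk, and with a made-up configuration key), environment variables can be layered over a config file so that connection strings and other secrets never need to live in source control:

using Microsoft.Extensions.Configuration;

public static class AppConfiguration
{
    public static IConfigurationRoot Build()
    {
        // Values from environment variables override anything in appsettings.json, so each
        // environment (development, QA, production) supplies its own connection string.
        return new ConfigurationBuilder()
            .AddJsonFile("appsettings.json", optional: true)
            .AddEnvironmentVariables()
            .Build();
    }
}

// Usage (the key name is just an example):
// var config = AppConfiguration.Build();
// var connectionString = config["ConnectionStrings:Default"];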

Finally, we turn to application logging.  Ian states how we should treat all of our application’s logs as event streams.  This means we can view our logs as a stream of aggregated, time-ordered events from the application, which should flow continuously for as long as the application is running.  Logs from different environments should be kept as similar as possible.  If we do both of these things we can store our logs in a system such as Splunk, AWS CloudWatch Logs, LogEntries or some other log storage engine/database.  From here, we can easily manipulate and visualize our application’s behaviour from this log data using something like the ELK stack.
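A minimal sketch of the idea (my own example): the application simply writes structured, time-ordered events to stdout via ILogger and leaves collection and shipping of the stream to the environment:

using System;
using Microsoft.Extensions.Logging;

public class LoggingExample
{
    public static void Main()
    {
        // Write to the console (stdout); the hosting environment collects the stream
        // and forwards it to Splunk, CloudWatch Logs, the ELK stack etc.
        var loggerFactory = new LoggerFactory().AddConsole();
        var logger = loggerFactory.CreateLogger("Orders");

        logger.LogInformation("Order {OrderId} processed at {Timestamp}", 42, DateTime.UtcNow);
    }
}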

After a few quick Q&As, Ian’s session was over and it was time to head back to the main hall where refreshments awaited us.  After a nice break with some more coffee and a few nice biscuits, it was quickly time to head back through the corridors to the session rooms.  For this next session, I decided to attend the sponsor track, which was Ed Courtenay’s “Growing Configurable Software With Expression<T>”.

Ed’s session was a code-heavy session built around a single simple application that filters a set of geographic data against a number of filters.  The session looked at how both the configurability of the filtering and its performance could be improved, with each successive iteration refactoring the application towards an ever more “expressive” implementation.  Ed starts by asking what we define as configurable software.  We all agree that software that can change its behaviour at runtime is a good definition, and Ed says that we can also think of configurable software as a way to build software from individual building blocks.  We’re then introduced to the data that we’ll be using within the application, which is the geographic data that’s freely available from geonames.org.

From here, we dive straight into some code.  Ed shows us the IFilter interface, which exposes a single method that provides a filter function:

using System;

namespace ExpressionDemo.Common
{
    public interface IFilter
    {
        Func<IGeoDataLocation, bool> GetFilterFunction();
    }
}

The implementation of the IFilter interface is fairly simple and combines a number of actual filter functions that filter the geographic data by various properties: country, classification, minimum population etc.

public Func<IGeoDataLocation, bool> GetFilterFunction()
{
    return location => FilterByCountryCode(location)
        && FilterByClassification(location)
        && FilterByPopulation(location)
        && FilterByModificationDate(location);
}

Each of the actual filtering functions is initially implemented as a simple loop, iterating over the set of allowed data (i.e. the allowed country codes) and testing the currently processed geographic data row against the allowed values to determine whether the data should be kept (returning true from the filter function) or discarded (returning false):

private bool FilterByCountryCode(IGeoDataLocation location)
{
	if (!_configuration.CountryCodes.Any())
		return true;
	
	foreach (string countryCode in _configuration.CountryCodes) {
		if (location.CountryCode.Equals(countryCode, StringComparison.InvariantCultureIgnoreCase))
			return true;
	}
	return false;
}

Ed shows us his application with all of his filters implemented in this way, and we see the application running over a reduced geographic data set of approximately 10000 records.  We process all of the data in around 280ms; however, Ed tells us that this is a very naive implementation and that with only a small change to the filter implementations we can do better.  From here, we look at the same filters, this time implemented as a Func<T, bool>:

private Func<IGeoDataLocation, bool> CountryCodesExpression()
{
	IEnumerable<string> countryCodes = _configuration.CountryCodes;

	string[] enumerable = countryCodes as string[] ?? countryCodes.ToArray();

	if (enumerable.Any())
		return location => enumerable
			.Any(s => location.CountryCode.Equals(s, StringComparison.InvariantCultureIgnoreCase));

	return location => true;
}

We're doing the exact same logic, but instead of iterating over the allowed country codes list, we're being more declarative and simply returning a Func<> which performs the selection/filter for us.  All of the other filters are re-implemented this way, and Ed re-runs his application.  This time, we process the exact same data in around 250ms.  We're starting to see an improvement.  But we can do more.  Instead of returning Func<>s we can go even further and return an Expression.

We pause looking at the code to discuss Expressions for a moment.  These are really "code-as-data".  This means that we can decompose the algorithm that performs some specific type of filtering and express that algorithm as a data structure or tree.  This is a really powerful way of expressing our algorithm and allows our application to simply, functionally "apply" our input data to the expression, rather than having to iterate or loop over lists as we did in the very first implementation.  What this means for our algorithms is increased flexibility in the construction of the algorithm, but also increased performance.  Ed shows us the filter implemented using an Expression:

private Expression<Func<IGeoDataLocation, bool>> CountryCodesExpression()
{
	var countryCodes = _configuration.CountryCodes;

	var enumerable = countryCodes as string[] ?? countryCodes.ToArray();
	if (enumerable.Any())
		return enumerable.Select(CountryCodeExpression).OrElse();

	return location => true;
}

private static Expression<Func<IGeoDataLocation, bool>> CountryCodeExpression(string code)
{
	return location => location.CountryCode.Equals(code, StringComparison.InvariantCultureIgnoreCase);
}

Here, we've simply taken the Func<> implementation and effectively "wrapped" the Func<> in an Expression.  So long as our Func<> is simply a single expression statement, rather than a multi-statement function wrapped in curly brackets, we can trivially turn the Func<> into an Expression:

public class Foo
{
	public string Name
	{
		get { return this.GetType().Name; }
	}
}

// Func<> implementation
Func<Foo,string> foofunc = foo => foo.Name;
Console.WriteLine(foofunc(new Foo()));

// Same Func<> as an Expression
Expression<Func<Foo,string>> fooexpr = foo => foo.Name;
var func = fooexpr.Compile();
Console.WriteLine(func(new Foo()));

Once again, Ed refactors all of his filters to use Expression functions and we again run the application.  And once again, we see an improvement in performance, this time processing the data in around 240ms.  But, of course, we can do even better!

The next iteration of the application has us again using Expressions; however, instead of merely wrapping the previously defined Func<> in an Expression, this time we're going to write the Expressions from scratch.  At this point the code does admittedly become less readable, however, as an academic exercise in how to construct our filtering using ever lower-level implementations, it's very interesting.  Here's the country filter as a hand-rolled Expression:

private Expression CountryCodeExpression(ParameterExpression location, string code)
{
	return StringEqualsExpression(Expression.Property(location, "CountryCode"), Expression.Constant(code));
}

private Expression StringEqualsExpression(Expression expr1, Expression expr2)
{
	return Expression.Call(typeof (string).GetMethod("Equals",
		new[] { typeof (string), typeof (string), typeof (StringComparison) }), expr1, expr2,
		Expression.Constant(StringComparison.InvariantCultureIgnoreCase));
}

Here, we're calling into the Expression API and manually building up the expression tree, with our algorithm broken down into operators and operands.  It's far less readable code, and it's highly unlikely you'd actually implement your algorithm in this way, however if you're optimizing for performance you just might, as once again we run the application and observe the results.  It is indeed slightly faster than the previous Expression function implementation, taking around 235ms.  Not a huge improvement over the previous implementation, but an improvement nonetheless.

Finally, Ed shows us the final implementation of the filters: the Filter Implementation Bridge.  The code for this is quite complex and hairy so I've not reproduced it here.  It is, however, available in Ed's GitHub repository, which contains the entire code base for the session.

The Filter Implementation Bridge involves building up our filters as Expressions once again, but this time we go one step further and write the resulting code out to a pre-compiled assembly.  This is a separate DLL file, written to disk, which contains a library of all of our filter implementations.  Our application code is then refactored to load the filter functions from the external DLL rather than expecting to find them within the same project.  Because the assembly is pre-compiled and JIT'ed, when we invoke the assembly's functions we should see another performance improvement.  Sure enough, Ed runs the application after making the necessary changes and we do indeed see an improvement in performance, this time processing the data set in around 210ms.
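Ed’s actual implementation isn’t reproduced here, but as a rough sketch of the general mechanism (my own example on the full .NET Framework, using a simple string-based filter rather than the demo’s IGeoDataLocation), a compiled expression can be written out as a static method in a saved assembly something like this:

using System;
using System.Linq.Expressions;
using System.Reflection;
using System.Reflection.Emit;

public static class FilterAssemblyWriter
{
    // Compiles the given filter expression into a static method and saves it to FilterLibrary.dll.
    // Note: CompileToMethod and AssemblyBuilder.Save are full .NET Framework APIs.
    public static void Save(Expression<Func<string, bool>> filter)
    {
        var assemblyName = new AssemblyName("FilterLibrary");
        var assemblyBuilder = AppDomain.CurrentDomain.DefineDynamicAssembly(
            assemblyName, AssemblyBuilderAccess.RunAndSave);
        var moduleBuilder = assemblyBuilder.DefineDynamicModule("FilterLibrary", "FilterLibrary.dll");
        var typeBuilder = moduleBuilder.DefineType(
            "Filters", TypeAttributes.Public | TypeAttributes.Abstract | TypeAttributes.Sealed);

        var methodBuilder = typeBuilder.DefineMethod(
            "CountryCodeFilter", MethodAttributes.Public | MethodAttributes.Static);
        filter.CompileToMethod(methodBuilder);

        typeBuilder.CreateType();
        assemblyBuilder.Save("FilterLibrary.dll");
    }
}

The application can then load FilterLibrary.dll at run-time, find the Filters type and bind CountryCodeFilter back to a Func<string, bool> via Delegate.CreateDelegate.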

Ed says that although we've looked at performance, and the improvements in performance with each successive refactor, the same refactorings have also introduced flexibility in how our filtering is implemented and composed.  By using Expressions, we can easily change the implementation at run-time.  If we have these expressions exported into external libraries/DLLs, we could fairly trivially decide to load a different DLL containing a different implementation of the function.  Ed uses the example of needing to calculate VAT and the differing ways in which that calculation might need to be implemented depending upon country.  By adhering to a common interface and implementing the algorithm as a generic Expression, we gain both flexibility and performance in our application code.

After Ed’s session it was time for another refreshment break.  We headed back through the corridors to the main hall for more teas, coffees and biscuits.  Again, with only enough time for a quick cup of coffee, it was time to trek back to the session rooms for the 3rd and final session before lunch.  This one was Sandeep Singh’s “JavaScript Services: Building Single-Page Applications With ASP.NET Core”.

Sandeep starts his talk with an overview of where Single Page Applications (SPAs) currently are.  There’s been an explosion of frameworks in the 5+ years that SPAs have been around, so there’s an overwhelming choice of JavaScript libraries, both for the main framework (i.e. Angular, Aurelia, React, VueJS etc.) and for supporting JavaScript libraries and build tools.  Because of this, it can be difficult to get an initial project set up, and achieving good cohesion between the back-end server-side code and the front-end client-side code can be challenging.  JavaScriptServices is a part of the ASP.NET Core project and aims to simplify and harmonize development for ASP.NET Core developers building SPAs.  The project was started by Steve Sanderson, the original creator of the KnockoutJS framework.

Sandeep talks about some of the difficulties with current SPA applications.  There’s often a slow initial site load whilst a lot of JavaScript is sent to the client browser for processing and eventual display of content, and because the application is a single page that uses hash-based routing for each page’s URL, with each page rendered by running some JavaScript, SEO (Search Engine Optimization) can be difficult.  JavaScriptServices attempts to improve this by combining a set of SPA templates with some SPA services, which include WebPack middleware, server-side pre-rendering of client-side code, routing helpers and the ability to “hot swap” modules on the fly.

WebPack is the current major module bundler for JavaScript, allowing many disparate client-side asset files, such as JavaScript, images, CSS/LESS/Sass etc., to be pre-compiled/transpiled and combined into a bundle for efficient delivery to the client.  The WebPack middleware also contains tooling similar to dotnet watch, which is effectively a file system watcher that can rebuild, reload and re-render your web site at development time in response to real-time file changes.  The ability to “hot swap” entire sets of assets, which is effectively a whole part of your application, is a very handy development-time feature.  Sandeep quickly shows us a demo to explain the concept better.  We have a simple SPA page with a button that, when clicked, increments a counter by 1 and displays the incrementing count on the page.  If we edit the Angular code that is called in response to the button click, we can change the “increment by 1” code to “increment by 10” instead.  Normally, you’d have to reload your entire Angular application before this code change was available to the client browser and the built-in debugging tools there.  Now, however, using JavaScriptServices’ hot-swap functionality, we can edit the Angular code and see it in action immediately, with the incrementing counter even carrying on without resetting back to its initial value of 0!
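For reference, the hot module replacement behaviour Sandeep demos is switched on via the Webpack dev middleware in the SpaServices package; a rough sketch (with the rest of the pipeline abbreviated) looks something like this:

using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.SpaServices.Webpack;

public class Startup
{
    public void Configure(IApplicationBuilder app, IHostingEnvironment env)
    {
        if (env.IsDevelopment())
        {
            // Build client-side assets with Webpack on the fly and push changed modules
            // to the browser without a full page reload.
            app.UseWebpackDevMiddleware(new WebpackDevMiddlewareOptions
            {
                HotModuleReplacement = true
            });
        }

        app.UseStaticFiles();
        // ... the rest of the pipeline (MVC routes, SPA fallback route etc.) goes here ...
    }
}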

Sandeep continues by discussing another part of JavaScriptServices, “Universal JavaScript”, also known as “Isomorphic JavaScript”.  This is JavaScript which is shared between the client and the server.  How does JavaScriptServices deal with JavaScript on the server side?  It uses a .NET library called Microsoft.AspNetCore.NodeServices, which is a managed code wrapper around NodeJS.  Using this functionality, the server can effectively run the JavaScript that is part of the client-side code and send pre-rendered pages to the client.  This is a major factor in being able to improve the slow initial load time of traditional SPA applications.  Another amazing benefit of this functionality is the ability to run an SPA application in the browser even with JavaScript disabled!  A further part of the JavaScriptServices technology that enables this is RoutingServices, which provides handling for static files and a “fallback” route for when explicit routes aren’t declared.  This means that a request to something like http://myapp.com/orders/ would normally be rendered on the client side via the client JavaScript framework code (i.e. Angular) after an AJAX request to retrieve the mark-up/data from the server.  However, if client-side routing is unavailable to process such a route (for example because JavaScript is disabled in the browser), the request is sent to the server so that the server may render the same page (perhaps running the client-side JavaScript on the server!) before sending the result back to the client via a standard HTTP(S) request/response cycle.

I must admit, I didn’t quite believe this; however, Sandeep quickly set up some more demo code and showed us the application working fine with a JavaScript-enabled browser, whereby each page would indeed be rendered by the JavaScript of the client-side SPA.  He then disabled JavaScript in his browser, re-loaded the “home” page of the application, and demonstrated that he could still navigate around the pages of the demo application just as smoothly (the pages having been rendered on the server and sent to the client without needing further JavaScript processing).  This was truly mind-blowing stuff.  Since the server-side NodeServices library wraps NodeJS on the server, we can write server-side C# code that invokes functionality within any NodeJS package that might be installed.  This means that if we have a NodeJS package that, for example, renders some HTML markup and converts it to a PDF, we can generate that PDF with the Node package from C# code, sending the resulting PDF file to the client browser via a standard ASP.NET mechanism.  Sandeep wraps up his talk by sharing some links to find out more about JavaScriptServices, along with a link to his GitHub repository for the demo code he’s used in his session.
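As a hedged sketch of the NodeServices usage Sandeep describes (the renderPdf.js Node module here is entirely hypothetical, just to illustrate the shape of the call):

using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.NodeServices;

public class PdfController : Controller
{
    private readonly INodeServices _nodeServices;

    public PdfController(INodeServices nodeServices)
    {
        _nodeServices = nodeServices;
    }

    [HttpGet("invoice/{id}")]
    public async Task<IActionResult> Invoice(int id)
    {
        // Invoke a NodeJS module (the hypothetical renderPdf.js) from C#; the module returns
        // the generated PDF as a base64 string which we stream back to the browser.
        var base64Pdf = await _nodeServices.InvokeAsync<string>("./Node/renderPdf", id);
        return File(Convert.FromBase64String(base64Pdf), "application/pdf");
    }
}

// In Startup.ConfigureServices: services.AddNodeServices();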

After Sandeep’s talk, it was time to head back to the main conference area for lunch.  The food at DDDSW is usually something special, and this year was no exception.  Being in the South West of England, it’s obligatory to have some quintessentially South West food – pasties!  A choice of steak or cheese-and-onion pasty along with a packet of crisps, a chocolate bar and a piece of fruit made our lunches quite a sizable portion of food.  After the attendees queued for the pasties and other treats, we all found a place to sit in the large communal area and ate our delicious lunch.

The lunch break at DDDSW lasts an hour and a half and usually has some lightning or grok talks taking place in one of the session rooms.  The sixth form centre is not the largest of venues (some of the session rooms were very full during some sessions and could get quite hot), and the weather outside had turned into a rather nice day, keeping us all very warm indeed, so I decided it would be nice to take a walk outside to grab some fresh air and to help work off a bit of the ample lunch I’d just enjoyed.

I decided a brief walk around the grounds of the St. Mary Redcliffe church would be quite pleasant, and so off I went, capturing a few photos along the way.  After walking around the grounds I still had some time to kill, so I decided to pop down to the river front and wander along there.  After a few more minutes, I stumbled across a nice old-fashioned pub and decided that a cheeky half pint of ale would go down quite nicely right about now!  I ordered my ale and sat down in the beer garden overlooking the river, watching the world go by as I quaffed.  A lovely afternoon lunch break it was turning out to be.

After a little while longer it was time to head back to the Redcliffe sixth form centre for the afternoon’s sessions.  I made my way back up the hill, past the church and back to the sixth form centre.  I grabbed a quick coffee in the main conference hall area before heading off down the corridor to the session room for the first afternoon session.  This one was Joel Hammond-Turner’s “Service Discovery With Consul.io for the .NET Developer”.

Joel’s session is about Consul.io, software used for service discovery.  Consul.io is a distributed application that allows applications running on a given machine to query Consul for registered services and find out where those services are located, either on a local network or on the wider internet.  Consul is fully open source and cross-platform.  Joel tells us that Consul is a service registry – applications can call Consul’s API to register themselves as an available service at a known location – and it’s also service discovery – applications can run a local copy of Consul, which will synchronize with other instances of Consul on other machines in the same network to keep the registry up-to-date, and can query Consul for a given service’s location.  Consul also includes a built-in DNS server, a key/value store and a distributed event engine.  Since Consul requires all of these features itself in order to operate, they are exposed to the users of Consul for their own use.  Joel mentions how his own company, Landmark, is using Consul as part of a large migration project aimed at migrating a 30-year-old C++ legacy system to a new event-driven distributed system written in C# and using ASP.NET Core.

Joel talks about distributed systems and how they’re often architected from a hardware or infrastructure perspective.  Often, because each “layer” of the solution requires multiple servers for failover and redundancy, you’ll need a load balancer to direct requests to a given server within that layer.  If you’ve got web, application and perhaps other layers, this often means load balancers at every layer.  This is not only potentially expensive – especially if your load balancers are separate physical hardware – it also complicates your infrastructure.  Using Consul can help to replace the majority of these load balancers, since Consul itself acts as a way to discover a given service and can, when querying Consul’s built-in DNS server, randomize the order in which multiple instances of a given service are reported to a client.  In this way, requests can be routed and balanced between multiple instances of the required service.  Load balancing is thus reduced to a single load balancer at the “border” or entry point of the entire network infrastructure.

Joel proceeds by showing us some demo code.  He starts by downloading Consul.  Consul runs from the command line.  It has a handy built-in developer mode where nothing is persisted to disk, allowing for quick and easy evaluation of the software.  Joel says that Consul’s command line interface is very similar to that of the source control software Git, where the consul command acts as an introducer to other sub-commands that perform some specific function (i.e. consul agent -dev starts the Consul agent service in developer mode).

The great thing with Consul is that you’ll never need to know where on your network the Consul service itself is.  It’s always running on localhost!  The idea is that you have Consul running on every machine on localhost.  This way, any application can always access Consul by looking on localhost on the specific Consul port (the HTTP API defaults to port 8500).  Since Consul is distributed, every copy of Consul will synchronize with all others on the network, ensuring that the data is shared and registered services are known by each and every running instance of Consul.  The built-in DNS server provides your consuming app with the ability to get the dynamic IP of a service by using a DNS name of something like [myservicename].service.consul.  If you’ve got multiple servers for that service (i.e. 3 different IPs that can provide the service) registered, the built-in DNS service will automagically randomize the order in which those IP addresses are returned to DNS queries – automatic load-balancing for the service!

Consuming applications, perhaps written in C#, can use some simple code to both register as a service with Consul and also to query Consul in order to discover services:

public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory, IApplicationLifetime lifetime)
{
	loggerFactory.AddConsole(Configuration.GetSection("Logging"));
	loggerFactory.AddDebug();
	app.UseMvc();
	var serverAddressFeature = (IServerAddressesFeature)app.ServerFeatures.FirstOrDefault(f => f.Key == typeof(IServerAddressesFeature)).Value;
	var serverAddress = new Uri(serverAddressFeature.Addresses.First());
	// Register service with consul
	var registration = new AgentServiceRegistration()
	{
		ID = $"webapi-{serverAddress.Port}",
		Name = "webapi",
		Address = $"{serverAddress.Scheme}://{serverAddress.Host}",
		Port = serverAddress.Port,
		Tags = new[] { "Flibble", "Wotsit", "Aardvark" },
		Checks = new AgentServiceCheck[] {new AgentCheckRegistration()
		{
			HTTP = $"{serverAddress.Scheme}://{serverAddress.Host}:{serverAddress.Port}/api/health/status",
			Notes = "Checks /health/status on localhost",
			Timeout = TimeSpan.FromSeconds(3),
			Interval = TimeSpan.FromSeconds(10)
		}}
	};
	// Register this instance with the local Consul agent, removing any stale registration first
	var consulClient = app.ApplicationServices.GetRequiredService<IConsulClient>();
	consulClient.Agent.ServiceDeregister(registration.ID).Wait();
	consulClient.Agent.ServiceRegister(registration).Wait();
	// Deregister cleanly when the application shuts down
	lifetime.ApplicationStopping.Register(() =>
	{
		consulClient.Agent.ServiceDeregister(registration.ID).Wait();
	});
}
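And on the discovery side, a consuming application can ask the local Consul agent for healthy instances of a service.  A rough sketch using the same Consul client package (the exact API shape may differ slightly between versions):

using System;
using System.Linq;
using System.Threading.Tasks;
using Consul;

public static class ServiceDiscovery
{
    public static async Task<Uri> FindWebApiAsync()
    {
        // The client talks to the local agent on localhost by default.
        using (var consul = new ConsulClient())
        {
            // Ask for instances of "webapi" that are currently passing their health checks.
            var result = await consul.Health.Service("webapi", null, true);

            var instance = result.Response.First();
            return new Uri($"{instance.Service.Address}:{instance.Service.Port}");
        }
    }
}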

Consul’s services can be queried based upon an exact service name, or can be queried based upon a tag (services can assign themselves multiple arbitrary tags upon registration to aid discovery).

As well as service registration and discovery, Consul also provides service monitoring.  This means that Consul itself will monitor the health of your service so that should one of the instances of a given service become unhealthy or unavailable, Consul will prevent that service’s IP Address from being served up to consuming clients when queried.

Joel now shares with us some tips for using Consul from within a .NET application.  He says to be careful of finding Consul-registered services with .NET’s built-in DNS resolver.  The reason for this is that .NET’s DNS resolver is very heavily cached, and it may serve up stale DNS data that is not up-to-date with the data inside Consul’s own DNS service.  Another thing to be aware of is that Consul’s key/value store always stores values as byte arrays.  This can sometimes be slightly awkward if we’re mostly storing strings; however, it’s trivial to write a wrapper that converts to and from a byte array when using the key/value store.  Finally, Joel tells us about a more advanced feature of Consul: “watches”.  These are effectively events that Consul will fire when Consul’s own data changes.  A good use for this would be code that runs in response to one of these events and re-writes the border load balancer rules, giving you a fully automatic means of keeping your network infrastructure up-to-date and discoverable.
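A small sketch of the kind of string wrapper Joel mentions for the key/value store (again assuming the same Consul client package):

using System.Text;
using System.Threading.Tasks;
using Consul;

public class ConsulKeyValueStore
{
    private readonly IConsulClient _consul;

    public ConsulKeyValueStore(IConsulClient consul)
    {
        _consul = consul;
    }

    public Task PutStringAsync(string key, string value)
    {
        // Consul only stores byte arrays, so encode the string on the way in...
        return _consul.KV.Put(new KVPair(key) { Value = Encoding.UTF8.GetBytes(value) });
    }

    public async Task<string> GetStringAsync(string key)
    {
        // ...and decode it on the way out.
        var result = await _consul.KV.Get(key);
        return result.Response == null ? null : Encoding.UTF8.GetString(result.Response.Value);
    }
}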

In wrapping up, Joel shares a few links to his GitHub repositories for both his demo code used during his session and the slides.

After Joel’s talk it was time for another refreshment break.  This one, being the only one of the afternoon, was accompanied by another DDD South West tradition – cream teas!  These went down a storm with the conference attendees, and there were so many of them that many people were able to have seconds.  There was also some additional fruit, crisps and chocolate bars left over from lunch time, which made this particular break quite gourmand.

After a few cups of coffee which were required to wash down the excellent cream teas and other snack treats, it was time to head back down the corridor to the session rooms for the final session of the day.  This one was Naeem Sarfraz’s “Layers, Abstractions & Spaghetti Code: Revisiting the Onion Architecture”.

Naeem’s talk was a retrospective of the last 10+ years of .NET software development – not only a retrospective of his own career, but, he posits, quite probably a retrospective of the last 10 years of many people’s careers.  Naeem starts by talking about standards.  He asks us to cast our minds back to when we started new jobs earlier in our careers, when perhaps one of the first things we had to do was read page after page of “standards” documents for the new team we’d joined: coding standards, architecture standards etc.  How times have changed: nowadays the focus is less on onerous documentation and more on expressive code that is easier to understand, as well as practices such as pair programming, allowing new developers to get up to speed with a new codebase quickly without the need to read large volumes of documentation.

Naeem then moves on to show us some of the code that he himself wrote when he started his career around 10 years ago.  This is typical of a lot of code that many of us write when we first start developing, and has methods which are called in response to some UI event, with business logic, database access and web code (redirections etc.) all combined in the same method!

Naeem says he quickly realised that this code was not great and that he needed to start separating concerns.  At this time, it seemed that N-Tier became the new buzzword and so newer code written was compliant with an N-Tier architecture.  A UI layer that is decoupled from all other layers and only talks to the Business layer which in turn only talks to the Data layer.  However, this architecture was eventually found to be lacking too.

Naeem stops and asks us how we decompose our applications.  He says that most of the time, we end up with technical designs taking priority over everything else.  We also often find that we start to model our software at the persistence (database) layer first; this frequently ends up bleeding into the other layers and affecting their design.  This is great for us as techies, but it’s not the best approach for the business.  What we need to do is start to model our software on the business domain and leave technical designs for a later part of the modelling process.

Naeem shows us some more code from his past.  This one is slightly more separated, but still has user interface elements specifically designed so that rows of values from a grid of columns can be directly mapped to a DataTable type, allowing easy interaction with the database.  This is another example of a data-centric approach to the entire application design.

We then move on to look at another way of designing our application architecture.  Naeem tells us how he discovered the 4+1 approach, which has us examining the software we’re building from different perspectives.  This helps to provide a better, more balanced view of what we’re seeking to achieve with its development.

The next logical step along the architecture path is that of Domain-Driven Design.  This takes the approach of placing the core business domain, described with a ubiquitous language – which is always in the language of the business domain, not the language of any technical implementation – at the very heart of the entire software model.  One popular way of using domain-driven design (DDD) is to adopt an architecture called “Onion architecture”.  This is also known as “Hexagonal architecture” or “Ports and adapters architecture”. 

We look at some more code, this time code that is compliant with DDD.  However, on closer inspection, it really isn’t.  He says how this code adopts all the right names for a DDD compliant implementation, however, it’s only the DDD vernacular that’s been adopted and the code still has a very database-centric, table-per-class type of approach.  The danger here is that we can appear to be following DDD patterns, but we’re really just doing things like implementing excessive interfaces on all repository classes or implementing generic repositories without really separating and abstracting our code from the design of the underlying database.

Finally, we look at some more code, this time written by Greg Young and adopting a CQRS-based approach.  The demo code is called SimplestPossibleThing and is available at GitHub.  This code demonstrates a much cleaner and more abstracted approach to modelling the code.  By using a CQRS approach, reads and writes of data are separated, with the code implementing Commands (which are responsible for writing data) and Queries (which are responsible for reading data).  Finally, Naeem points us to a talk given by Jimmy Bogard about architecting applications in “slices not layers”: each feature is developed in isolation from the others, and the feature development includes a “full stack” (domain/business layer, persistence/database layer and user interface layers) isolated to that feature.
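This isn’t Greg’s actual demo code, but as a minimal sketch of the command/query split being described (the DeactivateInventoryItem command is just an illustrative example):

using System;
using System.Threading.Tasks;

// Commands change state and return nothing of interest...
public interface ICommandHandler<in TCommand>
{
    Task Handle(TCommand command);
}

// ...while queries read state and never change it.
public interface IQueryHandler<in TQuery, TResult>
{
    Task<TResult> Handle(TQuery query);
}

public class DeactivateInventoryItem
{
    public Guid ItemId { get; set; }
    public string Reason { get; set; }
}

public class DeactivateInventoryItemHandler : ICommandHandler<DeactivateInventoryItem>
{
    public Task Handle(DeactivateInventoryItem command)
    {
        // ... load the aggregate, apply the change, persist the resulting event(s) ...
        return Task.CompletedTask;
    }
}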

After Naeem’s session was over, it was time for all the attendees to gather back in the main conference hall for the final wrap-up by the organisers and the prize draw.  After thanking the various people involved in making the conference what it is (sponsors, volunteers, organisers etc.) it was time for the prize draw.  There were some good prizes up for grabs, but alas, I wasn’t to be a winner on this occasion.

Once again, the DDDSW conference proved to be a huge success.  I’d had a wonderful day and now just had the long drive home to look forward to.  I didn’t mind, however, as it had been well worth the trip.  Here’s hoping the organisers do it all over again next year!

DDD South West 6 In Review

This past Saturday, 25th April 2015, saw the 6th annual DDD South West event, this year being held at the Redcliffe Sixth Form Centre in Bristol.  This was my very first DDD South West event, having travelled south to the two DDD East Anglia events previously, but never to the south west for this one.

I’d travelled down south on the Friday evening before the event, staying in a Premier Inn in Gloucester.  This enabled me to only have a relatively short drive on the Saturday to get to Bristol and the DDD South West event.  After a restful night’s sleep in Gloucester, I started off on the journey to Bristol, arriving at one of the recommended car parks only a few minutes walk away from the DDDSW venue.

Upon arrival at the venue, I checked myself in and proceeded up the stairs to what is effectively the Sixth Form “common room”.  This was the main hall for the DDDSW event and where all the attendees would gather, have teas, coffees & snacks throughout the day.

Well, as is customary, the first order of business is breakfast!  Thanks to the generous sponsors of the event, we had ample amounts of tea, coffee and delicious Danish pastries for our breakfast!  (Surprisingly, these delicious pastries managed to last through from breakfast to the first (and possibly second) tea-break of the day!)

Well, after breakfast there was a brief introduction from the organisers as to the day’s proceedings.  All sessions would be held in rooms on the second floor of the building, and all breaks, lunch and the final gathering for the customary prize draw would be held in the communal common room.  This year’s DDDSW had 4 main tracks of sessions with a further 5th track, which was the main sponsor’s track.  This 5th track only had two sessions throughout the day, whilst the other 4 had 5 sessions each.

The first session of the day for me was “Why Service Oriented Architecture?” by Sean Farmar.

Sean starts his talk by mentioning how "small monoliths" of applications can, over time and after many tweaks to functionality, become large monoliths and a maintenance nightmare: a high risk to the business, where changes are difficult to make and can have unforeseen side-effects.  When we’ve created a large monolith of an application, we’re frequently left with a “big ball of mud”.

Sean talks about one of the first websites he created back in the early 1990s.  It had around 5000 users, which by the standards of the day was a large number.  Both the internet and the web have grown exponentially since then, so 5000 users is very small by today’s standards.  Sean states that we can take those numbers and “add two noughts to the end” to get a figure for a large number of users today.  Due to this scaling of the user base, our application needs to scale too, but if we start down the path of creating that big ball of mud, we’ll simply create it far quicker today than we’ve ever done in the past.

Sean continues by stating that after we learn from our mistakes with the monolithic big ball of mud, we usually move to web services.  We break a large monolith into much smaller monoliths; however, these web services then need to talk both to each other and to the consumers of the web service.  For example, the sales web service has to talk to the user web service, which then possibly has to talk to the credit web service in order to verify that a certain user can place an order of a specific size.  This creates dependencies between the various web services, and each service becomes coupled in some way to one or more other services.  This coupling is a bad thing, preventing the individual web services from being able to exist and operate without the other web services upon which they depend.

From here, we often look to move towards a Service Oriented Architecture (SOA).  SOA’s core tenets are geared around reducing this coupling between our services.

Sean mentions the issues with coupling:

Afferent (dependents) & Efferent (depends on) – These are the things that a given service depends upon and the other services that, in turn, depend upon the first service.
Temporal (time, RPC) – This is mostly seen in synchronous communications – like when a service performs a remote procedure call (RPC) to another service and has to wait for the response.  The time taken to deliver the response is temporal coupling of those services.
Spatial (deployment, endpoint address) – Sean explains this by talking about having numerous copies of (for example) a database connection string in many places.  A change to the database connection string can cause redeployments of complete parts of the system.

After looking at the problems with coupling, Sean moves on to some solutions.  If we use XML (or even JSON) over the wire, along with XSD (or JSON Schema), we can define our messages and their transport using industry standards, allowing full interoperability.  To overcome the temporal coupling problems, we should use a publisher/subscriber (pub/sub) communication mechanism.  Publishers do not need to know the exact receivers of a message; it’s the subscriber’s responsibility to listen and respond to messages that it is interested in when the publisher publishes them.  To overcome the spatial issues, we can most often use a central message queue or service bus.  This allows publishers and subscribers to communicate with each other without hard references to the location of the publisher or subscriber on the network; they both only need to communicate with the single message bus endpoint.  This frees our application code from ever knowing who (or where) we are “talking to” when sending a command or event message to some other service within the system, pushing these issues down to being an infrastructure rather than an application-level concern.  Usage of a message bus also gives us durability (persistence) of our messages, meaning that even if a service is down and unavailable when a particular event is raised, the service can still receive and respond to the event when it becomes available again at a later time.
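As an illustrative sketch of my own (not from Sean’s talk): instead of calling the credit service directly and blocking on the reply, the sales service just publishes an event to the bus and carries on, and whichever services care can subscribe:

using System;
using System.Threading.Tasks;

// A deliberately minimal bus abstraction; in practice this would be NServiceBus, MassTransit
// or another service bus sitting on top of a durable message queue.
public interface IBus
{
    Task Publish<TEvent>(TEvent message);
    void Subscribe<TEvent>(Func<TEvent, Task> handler);
}

public class OrderPlaced
{
    public Guid OrderId { get; set; }
    public Guid UserId { get; set; }
    public decimal Total { get; set; }
}

public class SalesService
{
    private readonly IBus _bus;
    public SalesService(IBus bus) { _bus = bus; }

    public Task PlaceOrder(Guid userId, decimal total)
    {
        // No RPC call to the credit service: just raise the event and move on.
        return _bus.Publish(new OrderPlaced { OrderId = Guid.NewGuid(), UserId = userId, Total = total });
    }
}

public class CreditService
{
    public CreditService(IBus bus)
    {
        // Subscribes independently; it can even be offline when the event is published,
        // thanks to the durability of the underlying queue.
        bus.Subscribe<OrderPlaced>(e => CheckCredit(e.UserId, e.Total));
    }

    private Task CheckCredit(Guid userId, decimal total) => Task.CompletedTask;
}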

Sean then shows us a diagram of a typical n-tier architecture system.  He mentions how “wide” the diagram is and how each “layer” of the application spans the full extent of that part of the system (i.e. the UI layer is a complete layer that contains all of the UI for the entire system).  All of these wide horizontal layers are dependent upon the layer above or beneath them.

Within a SOA architecture, we attempt to take this n-tier design and “slice” the diagram vertically.  Each of our smaller services then contains all of the layers – a service endpoint, business logic, data access layer and database – in a thin, focused vertical slice for a specific area of functionality.

Sean remarks that if we're going to build this kind of system, or modify an existing n-tier system into these vertical slices of services, we must start at the database layer and separate that out.  Databases have their own transactions, which in a large monolithic DB can lock the whole DB, locking up the entire system.  This must be avoided at all costs.

Sean continues to talk about how our services should be designed.  Our services should be very granular, i.e. we shouldn't have an "UpdateUser" method that performs creation and updates of all kinds of properties of a "User" entity; we should have separate "CreateUser", "UpdateUserPassword" and "UpdateUserPhoneNumber" methods instead.  The reason is that, during maintenance, constantly extending an "UpdateUser" method will force it to take more and more arguments and parameters, and it will grow extensively in lines of code as it tries to handle more and more properties of a "user" entity, thus becoming unwieldy.  A simpler "UpdateUserPassword" is sufficiently granular that it'll probably never need to change over its lifetime and will only ever require 1 or 2 arguments/parameters.

Sean then asks how many arguments our methods should take.  He says his own rule of thumb for the maximum number of arguments to any method is 2.  Once you find yourself needing 3 arguments, it's time to re-think, break up the method and create a new one.  By slicing the system vertically we do end up with many, many methods; however, each of these methods is very small, very simple and very specific, with its own individual concern.

Next we look at synchronous vs asynchronous calls.  Remote procedure calls (RPC) will usually block and wait as one service waits for a reply from another.  This won't scale in production to millions of users.  We should use the pub/sub mechanism, which allows for asynchronous messaging: services that require data from other services don't have to wait and block while the other service provides the data; they can subscribe to a message queue and be notified of the data when it's ready and available.

Sean goes on to indicate that things like a user’s address can be used by many services, however, it’s all about the context in which that piece of data is used by that service.  For this reason it’s ok for our system to have many different representations of, effectively, the same piece of data.  For example, to an accounting service, a user’s address is merely a string that gets printed onto a letter or an invoice and it has no further meaning beyond that.  However, to a shipping service, the user’s address can and probably will affect things like delivery timescales and shipping costs.

Sean ends his talk by explaining that, whilst a piece of data can be represented in different ways by different parts of the system, only one service ever has control to write that data whereas all other services that may need that data in their own representation will only ever be read-only.

 

The next session was Richard Dalton’s “Burnout”.  This was a fascinating session and is quite an unusual talk to have at these DDD events, albeit a very important talk to have, IMHO.  Richard’s session was not about a new technology or method of improving our software development techniques as many of the other sessions at the various DDD events are, but rather this session was about the “slump” that many programmers, especially programmers of a certain age, can often feel.  We call this “burnout”.

Richard started by pointing out that developer “burnout” isn’t a sudden “crash-and-burn” explosion that suddenly happens to us, but rather it’s more akin to a candle - a slow burn that gradually, but inevitably, burns away.  Richard wanted to talk about how burnout affected him and how it can affect all of us, and importantly, what can we do to overcome the burnout if and when it happens to us.  His talk is about “keeping the fire alive” – that motivation that gets you up in the morning and puts a spring in your step to get to work, get productive and achieve things.

Richard starts by briefly running through the agenda of his talk.  He says he’ll talk about the feelings of being a bad programmer, and the “slump” that you can feel within your career, he’ll talk about both the symptoms and causes of burnout, discuss our expectations versus the reality of being a software developer along with some anti-patterns and actions.

We’re shown a slide of some quite shocking statistics regarding the attrition rate of programmers.  Computer Science graduates were surveyed to see who was still working as a programmer after a certain length of time.  After 6 years, the proportion of CS graduates still working as programmers is 57%; after 20 years, this number is only 19%.  It’s clear that the realistic average lifespan of a programming career is perhaps only around 20-30 years.

Richard continues by stating that there’s really no such thing as a “computer programmer” anymore – there’s no longer a job titled as such.  We’re all “software developers” these days, and whilst that obviously entails programming computers, it also entails far more tasks and responsibilities.  Richard talks about how his own burnout started, and how he first felt it was at least partially caused by his job and his then current employer.  Although a good and generous employer, they were one of many companies who claimed to be agile but really only did enough to be able to use the term, without becoming truly agile.  He left this company to move to one that really did fully embrace the agile mantra; however, due to lots of long-standing technical debt issues, agile didn’t really seem to be working for them either.  Clearly, the first job was not the cause (or at least not the only cause) of Richard’s burnout.  He says how every day was a bad day, so much so that he could specifically remember the good days as they were so few and far between.

He felt his work had become both Dull and Overwhelming.  This is where the work you do is entirely unexciting, with very little sense of accomplishment once performed, yet also very overwhelming, which often manifested itself as relatively simple tasks taking far longer to accomplish than they really should have, often due to “artificial complexity”.  Artificial complexity is the complexity that is not inherent within the system itself, but rather the complexity added by taking shortcuts in the software design in the name of expediency.  This accrues technical debt, which, if not paid off quickly enough, leads to an unwieldy system which is difficult to maintain.  Richard also states how, from this, he felt that he simply couldn’t make a difference.  His work seemed almost irrelevant in the grand scheme of things, and this leads to frustration and procrastination.  This all eventually leads to feelings of self-doubt.

Richard continues talking about his own career; it was at this point he moved to Florida in the US, where he worked for 1.5 years.  This was a massive change, but it didn’t really address the burnout, and when Richard returned he felt as though the entire industry had moved on significantly in the 1.5 years he was away, whilst he himself had remained where he was before he went.  Richard wondered why he felt like this.  The industry had indeed changed in that time, and it’s important to know that our industry does change at a very rapid pace.  Can we handle that pace of change?  Many more developers were turning to the internet and producing blogs of their own, and the explosion of quality content for software developers to learn from was staggering.  Richard remarks that, in a way, we all feel cleverer after reading these blogs full of useful knowledge and information, but we also feel more stupid as we feel that others know far more than we do.  What we need to remember is that we’re reading the blogs showing the “best” of each developer, not the worst.

We move on to actually discuss “What is burnout?”  Richard states that it really all starts with stress.  This stress is often caused by the expectation vs. reality gap – what we feel we should know vs. what we feel we actually do know.  Stress then leads to a cognitive decline.  The cognitive decline leads to a decline in our work, which then causes further stress.  This becomes a vicious circle feeding upon itself, and it all starts long before we really consider that we may be becoming burnt out.  It can manifest itself as a feeling of being trapped, particularly within our jobs, and this leads on to feeling fatigued.  From here we can become irritable, start to feel self-doubt and become self-critical.  This can also lead to feeling overly negative and assuming that things just won’t work, even when trying to work at them.  Richard uses a phrase that summed up his own slump: “On good days he thought about changing jobs.  On bad days he thought about changing career”!  Richard continues by stating that often the number one symptom of not having burnout is thinking that you do indeed have it.  If you think you’re suffering from burnout, you probably aren’t, but when you do have it, you’ll know.

Now we move on to look at what actually leads to burnout.  This often starts with a set of unclear expectations, both in our work life and in our general life as a software developer.  It can also come from having too many responsibilities, sleep and relaxation issues and a feeling of sustained pressure.  This all occurs within the overarching weight of expectation versus the reality of what can be achieved.

Richard states that it was this raised expectation of the industry itself (witness the emergence of agile development practices, test-driven development and a general maturing of many companies’ development processes in a fairly short space of time) and its disconnect with a reality that simply didn’t live up to those expectations, that ultimately led to him feeling a great amount of stress.  For Richard, it was specifically a “bad” implementation of agile software development which actually created more pressure and artificial work stress.  The implementation of a new development practice that is supposed to improve productivity naturally raises expectations, but when it goes wrong, it can widen the gap between expectation and reality, causing ever more stress.  He does further mention that this trigger for his own feelings of stress may or may not be what causes stress in others.

Richard talks about some of the things that we do as software developers that can contribute to the feelings of burnout or of increasing stress.  He discusses how software frameworks – for example the recent explosion of JavaScript frameworks – can lead to an overwhelming amount of choice.  Too much choice often leads to paralysis, and Richard shares a link to an interesting TED talk that confirms this.  We then move on to discuss software side-projects.  They’re something that many developers have, but if you’re using a side-project as a means to gain the fulfilment that is lacking within your work or professional life, it’s often a false solution.  Using side-projects as a means to try out and learn a new technology is great, but they won’t fix underlying fulfilment issues within work.  Taking a break from software development is great, however, it’s often only a short-term fix.  Like a candle, if there’s plenty of wax left you can extinguish the candle and re-light it later; however, if the candle has burned to nothing, you can’t simply re-ignite the flame.  In that case, the short break won’t really help the underlying problem.

Richard proceeds to the final section of his talk and asks “what can we do to combat burnout?”  He suggests we must first “keep calm and lower our expectations!”.  This doesn’t mean giving up; it means continuing to desire professionalism within both ourselves and the industry around us, but acknowledging and accepting the gap that exists between expectation and reality.  He suggests we should do less and sleep more.  Taking more breaks away from the world of software development and simply “switching off” more often can help recharge those batteries, and we’ll come back feeling a lot better about ourselves and our work.  If you do have side-projects, keep it to just one.  Having many side-projects is often the result of starting many things but finishing none.  Starting only one thing and seeing it through to the finish is a far better proposition and provides a far greater sense of accomplishment.  Finally, we look at how we can deal with procrastination.  Richard suggests one of the best ways to overcome it at work is to pair program.

Finally, Richard states that there’s no shame in burnout.  Lots of people suffer from it even if they don’t call it burnout; whenever you have that “slump” in productivity, it can be a sign that it’s time to do something about it.  Ultimately, though, we each have to find our own way through it and do what works for us to overcome it.

 

image (19) The final talk before lunch was on the sponsor’s track, and was “Native Cross-Platform mobile apps with C# & Xamarin.Forms” by Peter Major.  Peter first states his agenda for the talk: it’s all about Xamarin and Xamarin.Forms, what they both can and can’t do, and when you should use one over the other.

Peter starts by indicating that building mobile apps today is usually split between taking a purely “native” approach – where we code specifically for the target platform and often need multiple teams of developers, one for each platform we’ll be supporting – versus a “hybrid” approach, which often involves using technologies like HTML5 and JavaScript to build a cross-platform application which is then deployed to each specific platform via the use of a “container” (i.e. using tools like PhoneGap or Apache Cordova).

Peter continues by looking at what Xamarin is and what it can do for us.  Xamarin allows us to build mobile applications targeting multiple platforms (iOS, Android, Windows Phone) using C# as the language.  We can leverage virtually all of the .NET or Mono framework to accomplish this.  Xamarin provides “compiled-to-native” code for our target platforms and also provides a native UI for each target platform, meaning that the user interface must be designed and implemented using the standard, native design paradigms for each target platform.

Peter then talks about what Xamarin isn’t.  It’s not a write-once, run-anywhere UI, and it’s not a replacement for learning how to design effective UIs for each of the various target platforms.  You’ll still need to know the intricacies of each platform that you’re developing for.

Peter looks at Xamarin.iOS.  He states that it’s AOT (Ahead-Of-Time) compiled to an ARM assembly.  Our C# source code is pre-compiled to IL which, in turn, is compiled to a native ARM assembly with the Mono runtime embedded within it.  This allows us as developers to use virtually the full extent of the .NET / Mono framework.  Peter then looks at Xamarin.Android.  This is slightly different from Xamarin.iOS: our code is still compiled to IL, but the IL is then JIT (Just-In-Time) compiled inside a Mono virtual machine within the Android application – it doesn’t run natively inside the Dalvik runtime on Android.  Finally, Peter looks at Xamarin.WindowsPhone.  This is perhaps the simplest to understand, as the C# code is compiled to IL and this IL runs (in a Just-In-Time manner) directly against Windows Phone’s own runtime.

Peter then looks at whether we can use our favourite SDKs and NuGet packages in our mobile apps.  Generally, the answer is yes.  SDKs such as Amazon’s Kinesis, for example, are fully usable, but NuGet packages need to target PCLs (Portable Class Libraries) if they’re to be used.

Peter asks whether applications built with Xamarin run slower than pure native apps, and the answer is that they generally run at around the same speed.  Peter shows some statistics around this; however, he does also state that the app will certainly be larger in size than a natively written app.  Peter indicates, though, that Xamarin has a linker, so it will build your app with a cut-down version of the Mono framework that only includes the parts of the framework you’re actually using.

We can use pretty much all C# code and target virtually all of the .NET framework’s classes when using Xamarin, with the exception of any dynamic code, so we can’t target the Dynamic Language Runtime or use the dynamic keyword within our code.  Because of this, usage of certain standard .NET frameworks, such as WCF (Windows Communication Foundation), should be approached very carefully, as dynamic types can often be used behind the scenes.

Peter then moves on to talk about the next evolution with Xamarin, Xamarin.Forms.  We’re told that Xamarin.Forms is effectively an abstraction layer over the disparate UIs of the various platforms (iOS, Android, Windows Phone).  Without Xamarin.Forms, the UI of our application needs to be designed and developed specifically for each platform that we’re targeting, even though the application code can be shared; with Xamarin.Forms, the amount of platform-specific UI code is massively reduced.  It’s important to note that the UI is not completely abstracted away – there’s still some platform-specific code, but it’s a lot less than when using “standard” Xamarin without Xamarin.Forms.

Developing with Xamarin.Forms is very similar to developing a WPF (Windows Presentation Foundation) application.  XAML is used for the UI mark-up, and the premise is that it allows the developer to develop by feature and not by platform.  Similarly to WPF, the UI can be built up using code as well as XAML mark-up, for example:

Content = new StackLayout { Children = { new Button { Text = "Normal" } } };  // the code equivalent of <StackLayout><Button Text="Normal" /></StackLayout> in XAML

Xamarin.Forms works by taking our mark-up, which defines the placement of Xamarin.Forms-specific “controls” and user interface elements, and converting it via a platform-specific “renderer” into native platform controls.  By default, using the standard built-in renderers means that our apps won’t necessarily “look” like the native apps you’d find on the platform.  You can customise specific UI elements (i.e. a button control) for all platforms, or you can make the customisation platform-specific.  This is achieved with a custom renderer class that inherits from the appropriate base renderer (for example, EntryRenderer for an Entry control) and adds the required customisations specific to the platform being targeted.
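
To give a flavour of this, here is a minimal sketch of what an Android custom renderer might look like, written against the Xamarin.Forms API of roughly that era – the namespace, class name and the particular styling tweak are my own assumptions, not Peter’s code:

using Xamarin.Forms;
using Xamarin.Forms.Platform.Android;
using MyApp.Droid.Renderers;

// Tell Xamarin.Forms to use this renderer for every Entry on Android.
[assembly: ExportRenderer(typeof(Entry), typeof(BorderlessEntryRenderer))]

namespace MyApp.Droid.Renderers
{
    // Hypothetical renderer: customises the native EditText that sits behind a Xamarin.Forms Entry.
    public class BorderlessEntryRenderer : EntryRenderer
    {
        protected override void OnElementChanged(ElementChangedEventArgs<Entry> e)
        {
            base.OnElementChanged(e);

            if (Control != null)
            {
                // Platform-specific tweak: remove Android's default underline on the EditText.
                Control.Background = null;
            }
        }
    }
}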

Peter continues to tell us that Xamarin.Forms apps are best developed using the MVVM pattern.  MVVM is Model-View-ViewModel and allows a good separation of concerns when developing applications, keeping the application code separate from the user interface code.  This mirrors the best practice for development of WPF applications.  Peter also highlights the fact that most of the built-in controls provide two-way data binding right out of the box.  Xamarin.Forms has “attached properties” and triggers.  You can “watch” a specific property on a UI element and, in response to changes to that property, alter other properties on other UI elements.  This provides a nice and clean way to achieve much the same functionality as the much older (and more verbose) INotifyPropertyChanged event pattern provides.
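
For illustration, a small view model of the kind that sits on the other side of that two-way binding might look like this – the class and property names are mine, not from the talk:

using System.ComponentModel;
using System.Runtime.CompilerServices;

// Illustrative view model: a Xamarin.Forms Entry bound to Username stays in sync with this property.
public class LoginViewModel : INotifyPropertyChanged
{
    private string username;

    public string Username
    {
        get { return username; }
        set
        {
            if (username == value) return;
            username = value;
            OnPropertyChanged();
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;

    protected void OnPropertyChanged([CallerMemberName] string propertyName = null)
    {
        PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(propertyName));
    }
}

On the XAML side, an Entry could then bind with Text="{Binding Username}" once the page’s BindingContext is set to an instance of this view model.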

Peter proceeds to talk about how he performs testing of his Xamarin and Xamarin.Forms apps.  He says he doesn’t do much unit testing, but performs extensive behavioural testing of the complete application instead.  For this, he recommends Xamarin’s own Calabash framework.

Peter continues by explaining how Xamarin.Forms mark-up contains built-in simple behaviours so, for example, you can check a textbox's input is numeric without needing to write your own code-behind methods to perform this functionality.  It can be as simple as using mark-up similar to this:

<Entry Placeholder="Sample">
  <Entry.Behaviors>
    <local:NumericTextboxBehaviour />
  </Entry.Behaviors>
</Entry>

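For context, a behaviour like the NumericTextboxBehaviour above (where local: is assumed to map to the app’s own namespace) would typically be a small class deriving from Behavior<Entry>.  A minimal sketch – the validation rule and the red-text feedback are my own illustration, not Peter’s implementation:

using Xamarin.Forms;

public class NumericTextboxBehaviour : Behavior<Entry>
{
    protected override void OnAttachedTo(Entry entry)
    {
        base.OnAttachedTo(entry);
        entry.TextChanged += OnTextChanged;
    }

    protected override void OnDetachingFrom(Entry entry)
    {
        entry.TextChanged -= OnTextChanged;
        base.OnDetachingFrom(entry);
    }

    private static void OnTextChanged(object sender, TextChangedEventArgs e)
    {
        // Flag non-numeric input by colouring the text red - no code-behind on the page required.
        double parsed;
        bool isValid = double.TryParse(e.NewTextValue, out parsed);
        ((Entry)sender).TextColor = isValid ? Color.Default : Color.Red;
    }
}
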
Peter remarks on the speed of apps developed with Xamarin.Forms and concludes that they are definitely slower than either native apps or even normal Xamarin-developed apps.  This is, unfortunately, the trade-off for the improved productivity in development.

Finally, Peter concludes his talk by summarising his views on Xamarin.Forms.  The good: one UI layout and very customisable, although this customisation does come with a fair amount of initial investment to get platform-specific customisations looking good.  The bad: Xamarin.Forms does still contain some bugs, which can be a development burden, and there’s no XAML “designer” like there is for WPF apps – it all has to be written in a basic mark-up editor.  Peter also states that the built-in Xamarin.Forms renderers can contain some internal code that is difficult to override, thus limiting the level of customisation in certain circumstances.  Finally, he states that Xamarin.Forms is not open source, which could be a deciding factor for adoption by some developers.

 

IMG_20150425_131838 After Peter’s talk it was time for lunch!  Lunch at DDDSW was very fitting for the location we were in, the South-West of England.  As a result, lunch consisted of a rather large pasty, with a choice of steak or cheese & onion varieties, along with a packet of crisps, a piece of fruit (a choice of apples, bananas or oranges) and more tea and coffee!  I must say, this was a very nice touch – especially having some substantial hot food – and it certainly made a difference from a lot of the food usually served for lunch at the various DDD events (which is generally a sandwich with no hot food options available).

IMG_20150425_131849 After scoffing my way through the large pasty, my crisps and the fruit – after which I was suitably satiated – I popped outside the building to make a quick phone call and enjoy some of the now pleasant and sunny weather that had arrived over Bristol.

IMG_20150425_131954 After a pleasant stroll around outdoors during which I was able to work off at least a few of the calories I’d just consumed, I headed back towards the Redcliffe Sixth Form Centre for the two remaining sessions of the afternoon.

I headed back inside and up the stairs to the session rooms to find my next session.  This one, similar to the first of the morning, was all about Service Oriented Architecture and designing distributed applications.

image (1) So the first of the afternoon’s sessions was “Introduction to NServiceBus and the Particular Platform” by Mauro Servienti.  Mauro’s talk was to be an introduction to designing and building distributed applications with a SOA (Service Oriented Architecture) and how we can use a platform like NServiceBus to easily enable that architecture.

Mauro first starts with his agenda for the talk.  He’ll explain what SOA is all about, then he’ll move on to discuss long-running workflows in a distributed system and how state can be used within them.  Finally, he’ll look at monitoring asynchronous processes, for those times when something goes wrong, allowing us to see where and when it did.

Mauro starts by explaining the core tenets of NServiceBus.  Within NServiceBus, all boundaries are explicit.  Services are autonomous and cannot share implementation between them: they can share schema and contract, but never classes.  Service compatibility is based solely upon policy.

NServiceBus is built around messages.  Messages are divided into two types: commands and events.  Each message is an atomic piece of information and is used to drive the system forward in some way.  Commands are imperative messages directed at a well-known receiver.  The receiver is expected (but not compelled) to act upon the command.  Events are messages that are an immutable representation of something that has already happened.  They are directed to anyone that is interested.  Commands and events are messages with a semantic meaning, and NServiceBus enforces those semantics – it prevents you from broadcasting a command message to many different, possibly unknown, subscribers and permits that kind of “fire-and-forget” publishing only for event messages.
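
As a rough illustration (not from Mauro’s slides), NServiceBus lets you mark message types with its ICommand and IEvent marker interfaces – an alternative to the convention-based approach shown later in the talk.  The type names below are my own:

using System;
using NServiceBus;

// A command: imperative, sent to one well-known receiver.
public class PlaceOrder : ICommand
{
    public Guid OrderId { get; set; }
}

// An event: an immutable statement that something has already happened, published to any interested subscribers.
public class OrderPlaced : IEvent
{
    public Guid OrderId { get; set; }
}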

We’re told about the two major messaging patterns.  The first is request and response.  Within the request/response pattern, a message is sent to a known destination – the sender knows the receiver perfectly, but the receiver doesn’t necessarily know the sender.  Here, there is coupling between the sender and the receiver.  The other major messaging pattern is publish and subscribe (commonly referred to as pub/sub).  In this pattern, the constituent parts of the system become “actors”, and each “actor” can act on some message that is received.  Command messages are created, and every command also raises an event message to indicate that the command was requested.  These event messages are published, and subscribers can receive them without having to be known to the command generator.  Events are broadcast to anyone interested, and subscribers can listen and act on the event, or not act on it at all.  Within a pub/sub system, there is much less coupling between the system’s constituent parts, and the little coupling that exists is inverted; that is, the subscriber knows where the publisher is, not the other way round.
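
To make that contrast concrete, here is a minimal sketch using the illustrative message types above, assuming an IBus injected by the endpoint’s container (the class and method names are mine):

using System;
using NServiceBus;

public class OrderMessagingExample
{
    private readonly IBus bus;

    public OrderMessagingExample(IBus bus)
    {
        this.bus = bus;
    }

    public void PlaceNewOrder(Guid orderId)
    {
        // Command: directed at one well-known receiver (the destination comes from endpoint mapping configuration).
        bus.Send(new PlaceOrder { OrderId = orderId });
    }

    public void AnnounceOrderPlaced(Guid orderId)
    {
        // Event: in practice the endpoint that handled PlaceOrder would do this -
        // the event is broadcast to every subscriber, none of which the publisher knows about.
        bus.Publish(new OrderPlaced { OrderId = orderId });
    }
}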

In a pub/sub pattern, versioning is the responsibility of the publisher.  The publisher can publish multiple versions of the same event each time an event is published.  This means that we can have numerous subscribers, each of which can be listening for, and acting upon, different versions of the same event message.  As a developer using NServiceBus, your job is primarily to write message handlers to handle the various messages passing around the system.  Handlers must be stateless; this helps scalability as well as concurrency.  Handlers live inside an “endpoint” and are hosted somewhere within the system.  Handlers are grouped into “services”, which are a logical concept within the business domain (i.e. shipping, accounting etc.).  Services are hosted within endpoints, and endpoint instances run on a Windows machine, usually as a Windows Service.

NServiceBus messages are simply classes.  They must be serializable to be sent over the wire.  NServiceBus messages are generally stored and processed within memory, but can be made durable so that if a subscriber fails and is unavailable (for example, the machine has crashed or gone down) these messages can be retrieved from persistent storage once the machine is back online.

NServiceBus message handlers are also simply classes, which implement the IHandleMessages generic interface like so:

public class MyMessageHandler : IHandleMessages<MyMessage>
{
    // NServiceBus invokes this for each MyMessage received (NServiceBus v5-style synchronous handler).
    public void Handle(MyMessage message) { /* process the message here */ }
}

So here we have a class defined to handle messages of the type MyMessage.

NServiceBus endpoints are defined within either the app.config or the web.config files within the solution:

<UnicastBusConfig>
  <MessageEndpointMappings>
    <add Assembly="MyMessages" Endpoint="MyMessagesEndpoint" />
  </MessageEndpointMappings>
</UnicastBusConfig>

Such configuration settings are only required on the Sender of the message.  There is no need to configure anything on the message receiver.

NServiceBus has a BusConfiguration class.  You use it to define which messages are defined as commands and which are defined as events.  This is easily performed with code such as the following:

var cfg = new BusConfiguration();

cfg.UsePersistence<InMemoryPersistence>();
cfg.Conventions()
    .DefiningCommandsAs( t => t.Namespace != null && t.Namespace.EndsWith( ".Commands" ) )
    .DefiningEventsAs( t => t.Namespace != null && t.Namespace.EndsWith( ".Events" ) );

using ( var bus = Bus.Create( cfg ).Start() )
{
    Console.Read();
}

Here, we’re declaring that the Bus will use in-memory persistence (rather than any disk-based persistence of messages), and we’re saying that all of our command messages are defined within a namespace that ends with the string “.Commands” and that all of our event messages are defined within a namespace ending with the string “.Events”.

Mauro then demonstrates all of this theory with some code samples.  He has an extensive set of samples that shows virtually all aspects of NServiceBus, and the solution is freely available on GitHub at the following URL:  https://github.com/mauroservienti/NServiceBus.Samples

Mauro goes on to state that when sending and receiving commands, the subscriber will usually work with concrete classes when handling messages for that specific command; however, when sending or receiving event messages, the subscriber will work with interfaces rather than concrete classes.  This is a best practice and helps greatly with versioning.
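
A hedged sketch of what that looks like in practice – the interface, property and class names here are mine: the publisher publishes against an event interface and lets NServiceBus supply the concrete implementation, so subscribers only ever depend on the contract.

using System;
using NServiceBus;

// The event contract shared with subscribers - an interface rather than a concrete class.
public interface IOrderShipped : IEvent
{
    Guid OrderId { get; set; }
}

public class ShippingService
{
    private readonly IBus bus;

    public ShippingService(IBus bus)
    {
        this.bus = bus;
    }

    public void ShipOrder(Guid orderId)
    {
        // NServiceBus generates an implementation of IOrderShipped behind the scenes.
        bus.Publish<IOrderShipped>(evt => evt.OrderId = orderId);
    }
}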

NServiceBus allows you to use your own persistence store for persisting messages.  A typical store used is RavenDB, but virtually anything can be used.  There are only two interfaces that need to be implemented by a storage provider, and many well-known databases and storage mechanisms (RavenDB, NHibernate/SQL Server etc.) have integrations with NServiceBus so they can be used as persistent storage.  NServiceBus can also use third-party message queues: MSMQ, RabbitMQ, SQL Server, Azure Service Bus etc. can all be used.  By default, NServiceBus uses the built-in Windows MSMQ for messaging.
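
As an illustration of how interchangeable these pieces are, swapping persistence and transport is a couple of lines on the BusConfiguration shown earlier – treat this as a sketch, since the exact extension methods and type names come from the relevant NServiceBus add-on packages:

var cfg = new BusConfiguration();

// Persist subscriptions, sagas and timeouts in RavenDB instead of in memory (NServiceBus.RavenDB package).
cfg.UsePersistence<RavenDBPersistence>();

// Use RabbitMQ rather than the default MSMQ transport (NServiceBus.RabbitMQ package).
cfg.UseTransport<RabbitMQTransport>();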

Mauro goes on to talk about state.  He asks, “What if you need state during a long-running workflow of message passing?”  He explains how NServiceBus accomplishes this using “Sagas”.  Sagas are durable, stateful and reliable, and they guarantee state persistence across message handling.  They can express message and state correlation, and they provide “timeouts” to enable decision-making in an asynchronous world – i.e. they allow a command publisher to be notified, after a specified timeout has elapsed, whether the command did what was expected or whether something went wrong.  Mauro demonstrates this using his NServiceBus sample code.
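
To give a feel for how sagas look in code, here is a minimal, illustrative sketch along the lines Mauro demonstrated – all the type names and the 30-minute timeout are my own assumptions, and it reuses the hypothetical PlaceOrder command from the earlier sketch:

using System;
using NServiceBus;
using NServiceBus.Saga;

// The durable state NServiceBus persists between messages for each saga instance.
public class OrderSagaData : ContainSagaData
{
    public Guid OrderId { get; set; }
}

// An empty marker class used to request and receive the timeout callback.
public class OrderNotCompletedTimeout
{
}

public class OrderSaga : Saga<OrderSagaData>,
                         IAmStartedByMessages<PlaceOrder>,
                         IHandleTimeouts<OrderNotCompletedTimeout>
{
    protected override void ConfigureHowToFindSaga(SagaPropertyMapper<OrderSagaData> mapper)
    {
        // Correlate incoming messages with the right saga instance via the OrderId.
        mapper.ConfigureMapping<PlaceOrder>(msg => msg.OrderId).ToSaga(saga => saga.OrderId);
    }

    public void Handle(PlaceOrder message)
    {
        Data.OrderId = message.OrderId;

        // Ask NServiceBus to call us back if nothing further has happened within 30 minutes.
        RequestTimeout<OrderNotCompletedTimeout>(TimeSpan.FromMinutes(30));
    }

    public void Timeout(OrderNotCompletedTimeout state)
    {
        // Decide what to do when the workflow hasn't completed in time, then end the saga.
        MarkAsComplete();
    }
}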

Mauro explains how the business endpoints are responsible for storing the business state used at each stage (or step) of a saga.  The original message that kicks off a saga only stores the “orchestration” state of the saga (for example, an order management service could start a saga that uses an order creation service, a warehouse service and a shipping service to create an order, pick the items to pack and finally ship them).

The final part of Mauro’s talk is about monitoring, and how we can track the flow of all of the messages passing around an inherently asynchronous system.  He states that auditing is a key feature, and that this is required when we have many asynchronous messages floating around a system in a disparate fashion.  NServiceBus provides a “behind-the-scenes” piece of software called ServiceControl.  ServiceControl sits in the background of all components within a system that are publishing or subscribing to NServiceBus messages and keeps its own copy of all messages sent and received within that entire system.  It therefore gives us a single place from which to get a complete overview of all of the messages from the entire system, along with their current state.

serviceinsight-sagaflow The company behind NServiceBus also provides a separate piece of software called “ServiceInsight”, which Mauro quickly demonstrates, showing how it provides a holistic overview and monitoring of the entire message-passing process and of the instantiation and individual steps of long-running sagas.  It displays all of this data in a user interface that looks not dissimilar to an SSIS (SQL Server Integration Services) workflow diagram.

Mauro states that handling asynchronous messages can be hard.  In a system built with many disparate messages, we cannot ever afford to lose a single message.  To prevent message loss, Mauro says that we should never use try/catch blocks within our business code; NServiceBus will automatically add this kind of error handling around the creation, generation and sending of messages.  We need to consider transient failures as well as business errors.  NServiceBus will perform its own retries for transient failures of messages, but business errors must be handled by our own code.  Eventually, messages that fail to be delivered after the configured maximum number of retries are placed into a special error queue by NServiceBus itself, allowing us to handle these failed messages as special cases.  To this end, Particular Software also have a separate piece of software called “ServicePulse”, which allows monitoring of the entire infrastructure.  This includes all message endpoints, to see whether they’re up and available to send and receive messages, as well as full monitoring of the failed message queue.

IMG_20150425_155100image (3) After Mauro’s talk it was time for another break.  Unlike the earlier breaks throughout the day, this one was a bit special.  As well as the usual teas and coffees that were available all day long, this break treated all of the attendees to some lovely cream teas!  This was a very pleasant surprise and ensured that all conference attendees were incredibly well-fed throughout the entire conference.  Kudos to the organisers, and specifically the sponsors who allowed all this to be possible.

 

After our lovely break with the coffee and cream teas, it was on to the second session of the afternoon and indeed, the final session of the whole DDD event.  The final session was entitled “Monoliths to Microservices : A Journey”, presented by Sam Elamin.

IMG_20150425_160220 Sam works for Just Eat, and his talk is all about how he has been on a journey within his work to move from large, monolithic applications to re-implementing the required functionality in a leaner, distributed system composed largely of microservices.

Sam first mentions the motivation behind his talk: failure.  He describes how we learn from our failures, and states that we need to talk about our failures more, as it’s only from failure that we can really improve.

He asks, “Why do we build monoliths?”  As developers, we know it will become painful over time, but we build these monolithic systems because we need to ship something quickly.  People then use our system and, over time, we add more and more features to it.  We very rarely, if ever, get the opportunity to go back and break things down into better-structured code and implement a better architecture.  Wanting to spend time performing such work is often a very hard sell to the business, as we’re talking to them about a pain that they don’t feel – it’s only the developers who feel the pain during maintenance work.

Sam then states that it’s not all a bed of roses if we break down our systems into smaller parts.  Breaking down a monolithic application into smaller components reduces the complexity of each individual component, but that complexity isn’t removed from the system.  It’s moved from within the individual components to the interactions between the components.

Sam shares a definition of “What is a microservice?”  He says that Greg Young once said, “It’s anything you can rewrite in a week”.  He states that a microservice should be a “business context”, i.e. a single business responsibility and a discrete piece of business functionality.

But how do we start to move a monolithic application to a smaller, microservices-based application?  Well, Sam tells us that he himself started with DDD (Domain Driven Design) for the design of the application and to determine bounded contexts – the distinct areas of services or functionality within the system.  These bounded contexts would then communicate, as the rest of the system did, with messages in a pub/sub (publisher/subscriber) mechanism, and each conceptual part of the system was entirely encapsulated by an interface – all other parts of the system could only communicate through that interface.

Sam then talks about something that they hadn’t actually considered when they first started on the journey: race hazards.  Race hazards, or race conditions as they’re also known, within a distributed message-based architecture occur when there are failures in the system due to messages being lost or received out of order, and the system is unable to deal with this.  Testing for these kinds of failures is hard, as asynchronous messages can be difficult to test by their very nature.

Along the journey, Sam discovered that things weren’t proceeding as well as expected.  The boundaries within the system were unclear and there was no clear ownership of each bounded context within the business, which is something that is really needed in order for each context to be accurately defined and developed.  It’s also really important to get a good ubiquitous language – a language and way of talking about the system that is structured around the domain model and used by all team members to connect the team’s activities with the software – so that time and effort is not wasted trying to translate between code “language” and domain language.

Sam mentions how the team’s overly strict code review process actually slowed them down.  He says that code reviews are usually used to address the symptom rather than the underlying problem, which is not having good enough tests for the software, services and components.  He says this also applies to ensuring the system has the necessary amount of monitoring, auditing and metrics implemented within it to ensure speedy diagnosis of problems.

Sam talks about how, in a distributed system of many microservices, there can be a lot of data duplication.  One area of the system can deal with its own definition of a “customer”, whilst another area of the system deals with its own definition of that same “customer” data.  He says that businesses fear things like data duplication, but that it really shouldn’t matter in a distributed system and is often actually a good thing – this is frequently seen in systems that implement CQRS patterns, eventual consistency and a correct separation of concerns and contexts through DDD.  Sam states that, for him, decoupling is vastly preferable to avoiding duplication – if you have to duplicate some code in two different areas of the system in order to keep them properly decoupled, that’s perfectly acceptable, whereas introducing coupling simply to avoid code duplication is bad.

He further states that business monitoring (in-production monitoring of the running application and infrastructure) is also preferable to acceptance tests.  Continual monitoring of the entire production system provides the most useful set of metrics for the business, and with metrics comes freedom.  You can improve the system only when you know which bits you can easily replace, and only when you know which bits actually need to be replaced for the right reasons (i.e. replacing one component due to low performance where the business monitoring has identified that this low-performance component is a genuine system bottleneck).  Specifically, business monitoring can provide great insights not just into the system’s performance but also into the business’s performance and trends.  For example, monitoring can surface data such as spikes in usage.  From here we can implement alerts based upon known metrics – i.e. we know we get around X orders between 6pm and 10pm on a Friday night, so if this number drops by Y%, send an alert.
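
As a rough illustration of the kind of alert rule Sam describes (the class name, parameters and thresholds here are entirely made up):

// Alert when Friday-evening order volume drops more than a chosen percentage below the known baseline.
public static class OrderVolumeAlert
{
    public static bool ShouldAlert(int ordersTonight, int typicalOrders, double allowedDropPercent)
    {
        double dropPercent = 100.0 * (typicalOrders - ordersTonight) / typicalOrders;
        return dropPercent >= allowedDropPercent;
    }
}

For example, ShouldAlert(ordersTonight: 720, typicalOrders: 1000, allowedDropPercent: 20) returns true, because a 28% drop exceeds the 20% threshold.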

Sam talks about “EventStorming” (a phrase coined by Alberto Brandolini) with the business and domain experts.  He says he would get them all together in a room and talk about the various “events” and “commands” that exist within the business domain, whilst avoiding any technical vocabulary – all language used is expressed within the context of the business domain (i.e. an order comes in, a product is shipped, etc.).  He states that using Event Storming really helped to move the development of the system forward, and helped to define both the correct boundaries of the domain contexts and the functionality that each separate service within the system would provide.

Finally, Sam says the downside of moving to microservices is that it’s a very time-consuming approach and can be very expensive (in terms of both financial cost and time) to define the system, the bounded contexts and the individual commands and events.  Despite this, it’s a great approach and, using it within his own work, Sam has found that it’s provided the developers within his company with a reliable, scalable and maintainable system, and most importantly it’s provided the business with a system that supports their needs both now and into the future.

IMG_20150425_170540

After Sam’s session was over, we all reconvened in the common room and communal hall for the final section of the day.  This was the prize-draw and final wrap-up.

The organisers first thanked the very generous sponsors of the event, as without them the event simply would not have happened.  Moreover, we wouldn’t have been anywhere near as well fed as we were!

There were a number of prize draws, and the first batch was a prize from each of the in-house sponsors who had been exhibiting at the event.  Prizes here ranged from a ticket to the next NDC conference to a Raspberry Pi kit.

After the in-house sponsors had given away their individual prizes, there was a “main” prize draw, with winners drawn randomly from the session feedback provided by each conference attendee.  Amongst the prizes were iPad minis, a Nexus 9 tablet, technical books, laser pens and a myriad of software licenses.  I sat as the winners’ names were read out, watching as each person was called and the iPads and Nexus 9 were claimed by the first few people drawn.  Eventually, my own name was read out!  I was very happy and went up to the desk to claim my prize.  Unfortunately, the iPads and Nexus 9 were already gone, but I managed to get myself a license for PostSharp Ultimate!IMG_20150504_143116

After this, the day’s event was over.  There was a customary geek dinner that was to take place at a local Tapas restaurant later in the evening, however, I had a long drive home from Bristol back to the North-West ahead of me so I was unable to attend the geek dinner after-event.

So, another DDD South West was over, and I must say it was an excellent event – very well run and organised by the organisers and the staff of the venue and, of course, made possible by the fantastic sponsors.  I’d had a really great day and I can’t wait for next year’s DDDSW event!