DDD 12 In Review

On Saturday 10th June 2017 in sunny Reading, the 12th DeveloperDeveloperDeveloper event was held at Microsoft’s UK headquarters.  The DDD events started at the Microsoft HQ back in 2005 and, after some years away and a successful revival in 2016, this year’s DDD event was another great occasion.

I’d travelled down to Reading the evening before, staying over in a B&B in order to be able to get to the Thames Valley Park venue bright and early on the Saturday morning.  I arrived at the venue, parked my car in the ample parking spaces available on Microsoft’s campus and headed into Building 3 for the conference.  I collected my badge at reception and was guided through to the main lounge area where an excellent breakfast selection awaited the attendees.  Having already had a lovely big breakfast at the B&B where I was staying, I simply decided on another cup of coffee, but the breakfast selection was very well appreciated by the other attendees of the event.


I found a corner in which to drink my coffee and double-check the agenda sheets that we’d been provided with as we entered the building.  There’d been no changes since I’d printed out my own copy of the agenda a few nights earlier, so I was happy with the selection of talks that I’d chosen to attend.  It’s always tricky at the DDD events as there are usually at least 3 parallel tracks of talks and invariably there will be at least one or two timeslots where multiple talks that you’d really like to attend will overlap.

Occasionally, these talks will resurface at another DDD event (some of the talks and speakers at this DDD in Reading had given the same or similar talks in the recently held DDD South West in Bristol back in May 2017) and so, if you’re really good at scheduling, you can often catch a talk at a subsequent event that you’d missed at an earlier one!

As I finished off my coffee, more attendees turned up and the lounge area was now quite full.  It wasn’t too much longer before we were directed by the venue staff to make our way to the relevant conference rooms for our first session of the day.

Wanting to try something a little different, I’d picked Andy Pike’s Introducing Elixir: Self-Healing Applications at ZOMG scale as my first session.

Andy started his session by talking about where Elixir, the language, came from.  It’s a new language built on top of the Erlang virtual machine (the BEAM VM) and so has its roots in Erlang.  Like Erlang, Elixir compiles down to the same bytecode that the VM ultimately runs.  Erlang was originally created at Ericsson in order to run their telecommunications systems.  As such, Erlang is built on top of a collection of middleware and libraries known as OTP, or Open Telecom Platform.  Due to this very nature, Erlang, and thus Elixir, has very strong concurrency, fault tolerance and distributed computing at its very core.  Although Elixir is a “new” language in terms of its adoption, it’s actually been around for approximately 5 years now and is currently on version 1.4, with version 1.5 not too far off in the future.

Andy tells of how Erlang is used by WhatsApp to run its global messaging system, and he provides some numbers around this.  WhatsApp has 1.2 billion users, processes around 42 billion messages per day and can handle around 2 million connections on each server!  Those are some impressive performance and concurrency figures, and Andy is certainly right when he states that very few other platforms and languages, if any, can boast such statistics.

Elixir is a functional language and its syntax is heavily inspired by Ruby.  The Elixir language was first designed by José Valim, who was a core contributor in the Ruby community.  Andy talks about the Elixir REPL that ships in the Elixir installation, which is called “iex”.  He shows us some slides of simple REPL commands showing that Elixir supports all the basic intrinsic types that you’d expect of any modern language: integers, strings, tuples and maps.  At this point, things look very similar to most other high-level functional (and perhaps some not-quite-so-functional) languages, such as F# or even Python.

Andy then shows us something that appears to be an assignment operator, but is actually a match operator:

a = 1

Andy explains that this is not assigning 1 to the variable ‘a’, but is “matching” a against the constant value 1.  If ‘a’ has no value, Elixir will first bind the right-hand side operand to the variable and then perform the match.  An alternative pattern is:

^a = 1

which performs the matching without the initial binding.  Andy goes on to show how this pattern matching can work to bind variables to values in a list.  For example, given the code:

success = { :ok, 42 }
{ :ok, result } = success

the first line binds the tuple to the variable ‘success’.  The second line then matches the pattern on the left against that tuple, binding the value 42 to the variable ‘result’ since the :ok values on both sides match.  We’re told how the colon in front of ok makes it an “atom”.  This is similar to a constant, where the variable’s name is its own value.

Andy shows how Elixir’s code is grouped into functions, and functions can be contained within modules.  This is influenced by Ruby and how it also groups its code.  We then move on to look at lists.  These are handled in a very similar way to most other functional languages in that a list is merely seen as a “head” and a “tail”.  The “head” is the first value in the list and the “tail” is the entire rest of the list.  When processing the items in a list in Elixir, you process the head and then, perhaps recursively, call the same function passing the list “tail”.  This allows a gradual shortening of the list as the “head” is effectively removed with each pass through the list.  In order for such recursive processing to be performant, Elixir includes tail-call optimisation, which allows the compiler to reuse the current stack frame rather than creating a new one for each successive call to the function.  This is possible when the last line of code in the function is the recursive call.
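
The head/tail recursion described above can be sketched as follows (in Python rather than Elixir, purely for illustration; the function names are my own).  The second version has the tail-call shape – the recursive call is the very last operation – that Elixir’s compiler can optimise, although Python itself performs no such optimisation:

```python
def sum_list(items):
    # Head/tail processing: handle the head, recurse on the tail.
    # The addition happens *after* the recursive call returns, so this
    # version is NOT tail-recursive and needs a stack frame per element.
    if not items:
        return 0
    head, tail = items[0], items[1:]
    return head + sum_list(tail)

def sum_list_tail(items, acc=0):
    # Tail-recursive shape: the recursive call is the final operation,
    # so a compiler with tail-call optimisation (like Elixir's) can
    # reuse the current stack frame instead of growing the stack.
    if not items:
        return acc
    head, tail = items[0], items[1:]
    return sum_list_tail(tail, acc + head)

print(sum_list([1, 2, 3, 4]))       # prints 10
print(sum_list_tail([1, 2, 3, 4]))  # prints 10
```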

Elixir also has guard clauses built right into the language.  Code such as:

def what_is(x) when is_number(x) and x > 0 do…

helps to ensure that code is more robust by only being invoked when ‘x’ is not only a number but is also positive.  Andy states that, between the usage of such guard clauses and pattern matching, you can probably eliminate around 90-95% of all conditionals within your code (i.e. if x then y).

Elixir is very permissive in the characters it allows within function names, so functions can (and often do) have things like question marks in their name.  It’s a convention of the language that functions returning a Boolean value should end in a question mark, something shared with Ruby, i.e. String.contains? "elixir of life", "of".  And of course Elixir, like most other functional languages, has a pipe operator (|>) which allows the piping of the result of one function call into the input of another function call, so instead of writing:

text = "THIS IS MY STRING"
text = String.downcase(text)
text = String.reverse(text)
IO.puts text

which forces us to continually repeat the “text” variable, we can instead write the same code like this:

"THIS IS MY STRING"
|> String.downcase
|> String.reverse
|> IO.puts
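
For readers unfamiliar with the pipe operator, its effect can be mimicked in most languages.  Here’s a tiny sketch in Python (a hypothetical pipe helper of my own, not a library function) that threads a value through a sequence of functions in the same way:

```python
from functools import reduce

def pipe(value, *funcs):
    # Feed `value` into the first function, its result into the next,
    # and so on, mimicking Elixir's |> operator.
    return reduce(lambda acc, f: f(acc), funcs, value)

print(pipe("THIS IS MY STRING", str.lower, lambda s: s[::-1]))
# prints "gnirts ym si siht"
```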

Andy then moves on to show us an element of the Elixir language that I found particularly intriguing: doctests.  Elixir makes function documentation a first-class citizen within the language, and not only does the doctest code provide documentation for the function – Elixir has a function, h, that when passed another function name as a parameter will display the help for that function – but it also serves as a unit test for the function, too!  Here’s a sample of an Elixir function containing some doctest code:

defmodule MyString do
  @doc ~S"""
  Converts a string to uppercase.

  ## Examples
      iex> MyString.upcase("andy")
      "ANDY"
  """
  def upcase(string) do
    String.upcase(string)
  end
end

The doctest code not only provides the textual help that is shown if the user invokes the help function for the module (i.e. “h MyString”), but the examples contained within the help text can also be executed as part of a doctest unit test for the MyString module:

defmodule MyStringTest do
   use ExUnit.Case, async: true
   doctest MyString
end

The above code uses the doctest code inside the MyString module to invoke each of the provided “example” calls and assert that the output is the same as that defined within the doctest!
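
Python’s standard-library doctest module works on exactly the same principle, which may help if Elixir’s version looks unfamiliar: examples embedded in a function’s docstring are executed and their output is compared against the documented result.  A minimal sketch:

```python
import doctest

def upcase(string):
    """Converts a string to uppercase.

    Examples:
        >>> upcase("andy")
        'ANDY'
    """
    return string.upper()

# Find and run the docstring examples as tests, just as ExUnit's doctest does.
runner = doctest.DocTestRunner()
for test in doctest.DocTestFinder().find(upcase):
    runner.run(test)
print(runner.failures)  # prints 0 - the documented example behaves as promised
```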

After taking a look into the various language features, Andy moves on to talk about the real power of Elixir, which it inherits from its Erlang heritage – processes.  It’s processes that provide Erlang, and thus Elixir, with the ability to scale massively, along with its fault-tolerance and highly distributed features.

When Elixir functions are invoked, they can effectively be “wrapped” within a process.  This involves spawning a process that contains the function.  Processes are not the same as operating system processes; they are much more lightweight, being effectively only a small C struct that contains a pointer to the function to call, some memory and a mailbox (which will hold messages sent to the process).  Processes have a Process ID (PID) and will, once spawned, continue to run until the function contained within terminates or some error or exception occurs.  Processes can communicate with other processes by passing messages to them.  Here’s an example of a very simple module containing a single function and how that function can be called by spawning a separate process:

defmodule HelloWorld do
  def greet do
    IO.puts "Hello World"
  end
end

HelloWorld.greet                     # This is a normal function call.
pid = spawn(HelloWorld, :greet, [])  # This spawns a process containing the function

Messages are sent to processes by invoking the “send” function, providing the PID and the parameters to send to the function:

send pid, { :greet, "Andy" }

This means that invoking functions in processes is almost as simple as invoking a local function.
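
The mailbox idea can be mimicked in most languages with a queue and a worker.  This Python sketch (threads and a Queue standing in for BEAM’s far lighter-weight processes and mailboxes; all names are my own) shows the shape of it:

```python
import queue
import threading

def greeter(mailbox, out):
    # The "process": block on the mailbox and react to each
    # message until told to stop.
    while True:
        message = mailbox.get()
        if message == "stop":
            break
        tag, name = message
        if tag == "greet":
            out.append(f"Hello {name}")

mailbox = queue.Queue()
greetings = []
worker = threading.Thread(target=greeter, args=(mailbox, greetings))
worker.start()  # roughly analogous to: pid = spawn(...)

mailbox.put(("greet", "Andy"))  # roughly: send pid, { :greet, "Andy" }
mailbox.put("stop")
worker.join()
print(greetings)  # prints ['Hello Andy']
```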

Elixir uses the concept of schedulers to actually execute processes.  The BEAM VM will supply one scheduler per CPU core available, giving the ability to run highly concurrently.  Elixir also uses supervisors as part of the BEAM VM, which can monitor processes (or even monitor other supervisors) and can kill processes if they misbehave in unexpected ways.  Supervisors can be configured with a “strategy”, which allows them to deal with errant processes in specific ways.  One common strategy is one_for_one, which means that if a given process dies, a single new one is restarted in its place.

Andy then talks about the OTP heritage of Elixir and Erlang, from which comes the concept of a “GenServer”.  A GenServer is a module within Elixir that provides a consistent way to implement processes.  The Elixir documentation states:

A GenServer is a process like any other Elixir process and it can be used to keep state, execute code asynchronously and so on. The advantage of using a generic server process (GenServer) implemented using this module is that it will have a standard set of interface functions and include functionality for tracing and error reporting. It will also fit into a supervision tree.

The GenServer provides a common set of interfaces and APIs that all processes can adhere to, allowing common conventions such as the ability to stop a process, which is frequently implemented like so:

GenServer.cast(pid, :stop)

Andy then talks about “nodes”.  Nodes are separate actual machines that can run Elixir and the BEAM VM, and these nodes can be clustered together.  Once clustered, a node can start a process not only on its own node, but on another node entirely.  Communication between processes, irrespective of the node that a process is running on, is handled seamlessly by the BEAM VM itself.  This provides Elixir solutions with great scalability, robustness and fault-tolerance.

Andy mentions how Elixir has its own package manager called “hex”, which gives access to a large range of packages providing lots of great functionality.  There’s “Mix”, which is a build tool for Elixir, the OTP Observer for inspection of I/O, memory and CPU usage by a node, along with “ETS”, which is an in-memory key-value store similar to Redis, just to name a few.

Andy shares some book information for those of us who may wish to know more.  He suggests “Programming Elixir” and “Programming Phoenix”, both part of the Pragmatic Programmers series and also “The Little Elixir & OTP Guidebook” from Manning.

Finally, Andy wraps up by sharing a story of a US-based sports news website called “Bleacher Report”.  Bleacher Report serves up around 1.5 billion pages per day and is a very popular website both in the USA and internationally.  Their entire web application was originally built on Ruby on Rails and they required approximately 150 servers in order to meet the demand.  Eventually, they re-wrote their application using Elixir, and they now serve up the same load using only 5 servers.  Not only have they reduced their servers by an enormous factor, but they believe that with 5 servers they’re actually over-provisioned, as those 5 servers are able to handle the load very easily.  High praise indeed for Elixir and the BEAM VM.  Andy has blogged about his talk here.

After Andy’s talk, it was time for the first coffee break.  Amazingly, there were still some breakfast sandwiches left over from earlier in the morning, which many attendees enjoyed.  Since I was still quite full from my own breakfast, I decided a quick cup of coffee was in order before heading back to the conference rooms for the next session.  This one was Sandeep Singh’s Goodbye REST; Hello GraphQL.


Sandeep’s session is all about the relatively new technology of GraphQL.  It’s a query language for your API, comprising a server-side runtime for processing queries along with a client-side framework providing an in-browser IDE, called GraphiQL.  One thing that Sandeep is quick to point out is that GraphQL has nothing to do with graph databases.  It can certainly act as a query layer over the top of a graph database, but it can just as easily query any kind of underlying data, from RDBMSs through to even flat-file data.

Sandeep first talks about where we are today with API technologies.  There are many of them – XML-RPC, REST, OData etc. – but they all have their pros and cons.  We explore REST in a bit more detail, as that’s a very popular modern-day architectural style of API.  REST is all about resources and working with those resources via nouns (the name of the resource) and verbs (HTTP verbs such as POST, GET etc.), and there’s also HATEOAS if your API is “fully” compliant with REST.

Sandeep talks about some of the potential drawbacks of a REST API.  There’s the problem of under-fetching.  This is seen when a given application’s page is required to show multiple resources at once, perhaps a customer along with recently purchased products and a list of recent orders.  Since these are three different resources, we would usually have to perform three distinct API calls in order to retrieve all of the data required for the page, which is not the most performant way of retrieving the data.  There’s also the problem of over-fetching.  This is where REST APIs can take a parameter to instruct them to include additional data in the response (i.e. /api/customers/1/?include=products,orders), however, this often results in more data being returned than is actually required.  We’re also exposing our REST endpoint to potential abuse, as people can add arbitrary inclusions to the endpoint call.  One way to get around the problems of under- or over-fetching is to create ad-hoc custom endpoints that retrieve the exact set of data required, however, this can quickly become unmaintainable over time as the sheer number of these ad-hoc endpoints grows.

GraphQL isn’t an architectural style, but a query language that sits on top of your existing API.  This means that the client of your API now has access to a far more expressive way of querying your API’s data.  GraphQL responses are, by default, JSON but can be configured to return XML instead if required, and the input queries themselves are structured similarly to JSON.  Here’s a quick example of a GraphQL query and the kind of data the query might return:

{
	customer {
		name,
		address
	}
}
{
	"data" : {
		"customer" : {
			"name" : "Acme Ltd.",
			"address" : ["123 High Street", "Anytown", "Anywhere"]
		}
	}
}

GraphQL works by sending the query to the server where it’s translated by the GraphQL server-side library.  From here, the query details are passed on to code that you have written on the server in order that the query can be executed.  You’ll write your own types and resolvers for this purpose.  Types provide the GraphQL queries with the types/classes that are available for querying – this means that all GraphQL queries are strongly-typed.  Resolvers tell the GraphQL framework how to turn the values provided by the query into calls against your underlying API.  GraphQL has a complete type system within it, so it supports all of the standard intrinsic types that you’d expect such as strings, integers, floats etc. but also enums, lists and unions.
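
To make the resolver idea concrete, here’s a toy sketch in Python – not any real GraphQL library, just an illustration of my own – where each requested field maps to a resolver function and only the requested fields are ever fetched:

```python
# Resolvers: one function per queryable field, keyed by type then field.
RESOLVERS = {
    "customer": {
        "name": lambda ctx: ctx["db"]["customer"]["name"],
        "address": lambda ctx: ctx["db"]["customer"]["address"],
    }
}

def execute(query, ctx):
    # `query` mirrors a GraphQL selection set, e.g. {"customer": ["name"]}.
    # Only the fields actually requested have their resolvers invoked.
    data = {}
    for type_name, fields in query.items():
        data[type_name] = {
            field: RESOLVERS[type_name][field](ctx) for field in fields
        }
    return {"data": data}

db = {"customer": {"name": "Acme Ltd.",
                   "address": ["123 High Street", "Anytown", "Anywhere"]}}
print(execute({"customer": ["name"]}, {"db": db}))
# prints {'data': {'customer': {'name': 'Acme Ltd.'}}}
```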

Sandeep looks at the implementations of GraphQL and explains that it started life as a NodeJS package.  Its heritage is therefore in JavaScript, however, he also states that there are many implementations in numerous other languages and technologies such as Python, Ruby, PHP, .NET and many, many more.  Sandeep says that GraphQL can be very efficient; you’re no longer under- or over-fetching data, and you’re retrieving the exact fields you want from the types that are available.  GraphQL also allows you to version your queries and types by adding deprecated attributes to fields and types that are no longer available.

We take a brief look at the GraphiQL GUI client, which is part of the GraphQL client-side library.  It displays 3 panes within your browser showing the schemas, available types and fields, and a pane allowing you to type and perform ad-hoc queries.  Sandeep explains that the schema and sample types and fields are populated within the GUI by performing introspection over the types configured in your server-side code, so changes and additions there are instantly reflected in the client.  Unlike REST, which has obvious verbs around data access, GraphQL doesn’t really have these; you need introspection over the data to know how you can use that data, and this is a very good thing.  Sandeep states that introspection is at the very heart of GraphQL – it is effectively reflection over your API – and it’s this that leads to the ability to provide strongly-typed queries.

We’re reminded that GraphQL itself has no concept of authorisation or any other business logic as it “sits on top of” the existing API.  Such authorisation and business logic concerns should be embedded within the API or a lower layer of code.  Sandeep says that the best way to think of GraphQL is like a “thin wrapper around the business logic, not the data layer”.

GraphQL is not just for reading data!  It has full capabilities to write data too, and these writes are known as mutations rather than queries.  The general principle remains the same: a mutation is constructed using JSON-like syntax and sent to the server for resolving to a custom method that will validate the data and persist it to the data store by invoking the relevant API endpoint.  Sandeep explains how read queries can be nested, so you can actually send one query to the server that contains syntax to perform two queries against two different resources.  GraphQL also has a concept of “loaders”.  These can batch up the actual queries to the database to prevent issues when asking for something like all Customers including their Orders.  Doing something like this normally results in an N+1 issue, whereby Orders are retrieved by issuing a separate query for each Customer, resulting in degraded performance.  GraphQL loaders work by enabling the rewriting of the underlying SQL generated for retrieving the data so that all Orders are retrieved for all of the required Customers in a single SQL statement, i.e. instead of sending queries like the following to the database:

SELECT CustomerID, CustomerName FROM Customer
SELECT OrderID, OrderNumber, OrderDate FROM Order WHERE CustomerID = 1
SELECT OrderID, OrderNumber, OrderDate FROM Order WHERE CustomerID = 2
SELECT OrderID, OrderNumber, OrderDate FROM Order WHERE CustomerID = 3

We will instead send queries such as:

SELECT CustomerID, CustomerName FROM Customer
SELECT OrderID, OrderNumber, OrderDate FROM Order WHERE CustomerID IN (1,2,3)
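
The difference between the two shapes of query can be sketched like this (in Python, with a stand-in run_sql function of my own invention that just records what would be sent to the database):

```python
queries_issued = []

def run_sql(sql):
    # Stand-in for a real database call: just record the statement.
    queries_issued.append(sql)

def naive_orders(customer_ids):
    # The N+1 shape: one query per customer.
    for cid in customer_ids:
        run_sql(f"SELECT OrderID FROM [Order] WHERE CustomerID = {cid}")

def batched_orders(customer_ids):
    # The loader shape: one query for all customers at once.
    ids = ",".join(str(cid) for cid in customer_ids)
    run_sql(f"SELECT OrderID FROM [Order] WHERE CustomerID IN ({ids})")

naive_orders([1, 2, 3])     # issues 3 queries
batched_orders([1, 2, 3])   # issues 1 query
print(len(queries_issued))  # prints 4
```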

Sandeep then looks at some of the potential downsides of using GraphQL.  Caching is somewhat problematic, as you can no longer perform any caching at the network layer since each query is now completely dynamic.  Despite the benefits, there are also performance considerations, as your underlying data needs to be correctly structured in order to work with GraphQL in the most efficient manner, and GraphQL loaders should be used to ensure N+1 problems don’t become an issue.  There are security considerations too.  You shouldn’t expose anything that you don’t want to be public, since everything exposed through the API is available to GraphQL, and you’ll need to be aware of potentially malicious queries that attempt to retrieve too much data.  One simple solution to such queries is to use a timeout: if a given query takes longer than some arbitrarily defined timeout value, you simply kill the query, however this may not be the best approach.  Another approach taken by big websites currently using such functionality is to whitelist all the acceptable queries; if a query is received that isn’t in the whitelist, it doesn’t get run.  You also can’t use HTTP status codes to indicate status or contextual information to the client.  Errors that occur when processing the GraphQL query are contained within the GraphQL response text itself, which is returned to the client with a 200 HTTP success code, so you’ll need your own strategy for exposing such errors to the user in a friendly way.  Finally, Sandeep explains that, unlike other querying technologies such as OData, GraphQL has no intrinsic ability to paginate data.  All data pagination must be built into your underlying API and business layer; GraphQL will merely pass the paging data – such as page number and size – onto the API and expect the API to correctly deal with limiting the data returned.

After Sandeep’s session, it was time for another coffee break.  I quickly grabbed another coffee from the main lounge area of the conference, this time accompanied by some rather delicious biscuits, before consulting my agenda sheet to see which room I needed to be in for my next session.  It turned out that the next room I needed to be in was the same one I’d just left!  After finishing my coffee and biscuits I headed back to Chicago 1 for the next session, the last one before the lunch break.  This one was Dave Mateer’s Fun With Twitter Stream API, ELK Stack, RabbitMQ, Redis and High Performance SQL.

Dave’s talk is really all about performance and how to process large sets of data in the most efficient and performant manner.  Dave’s talk is going to be very demo-heavy, and so to give us a large data set to work with, Dave starts by looking at Twitter and, specifically, its Stream API.  Dave explains that the full Twitter firehose, which is reserved only for Twitter’s own use, currently has 1.3 million tweets per second flowing through it.  As a consumer, you can get access to a deca-firehose which contains 1/10th of the full firehose (i.e. 130,000 tweets per second), but this costs money.  Twitter does, however, expose a freely available Stream API, although it’s limited to 50 tweets per second.  This is still quite a sizable amount of data to process.

Dave starts by showing us a demo of a simple C# console application that uses the Twitter Stream API to gather real-time tweets for the hashtag #scotland, which are then echoed out to the console.  Our goal is to get the tweet data into a SQL Server database as quickly and as efficiently as possible.  Dave says that, to simulate a larger quantity of data, he’s going to read in a number of pre-saved text files containing tweet data that he’s previously collected.  These files represent around 6GB of raw tweet data, containing approximately 1.9 million tweets.  He then extends the demo to start saving the tweets into SQL Server.  Dave mentions that he’s using Dapper to access SQL Server and that he previously tried such things using Entity Framework, which was admittedly some time in the past, but that it was a painful experience and not very performant.  Dave likes Dapper as it’s a much simpler abstraction over the database and therefore much more performant.  It’s also a lot easier to optimize your queries when using Dapper, as the abstraction is much thinner and you’re not hiding the implementation details too much.

Next, Dave shows us a Kibana interface.  As well as writing to SQL Server, he’s also saving the tweets to a log file using Serilog and then using Logstash to send those logs to Elasticsearch, allowing the raw log data to be viewed with Kibana (the combination known as the ELK stack).  Dave then shows us how easy it is to really leverage the power of tools like Kibana by creating a dashboard for all of the data.

From here, Dave begins to talk about performance and just how fast we can process the large quantity of tweet data.  The initial run of the application, which simply read in each tweet from the file and performed an INSERT via Dapper to add the data to the SQL Server database, was able to process approximately 420 tweets per second.  This isn’t bad, but it’s not a great level of performance.  Dave digs out SQL Server Profiler to see where the bottlenecks are within the data access code, and this shows that there are some expensive reads – the data is normalized so that the user that a tweet belongs to is stored in a separate table and looked up when needed.  It’s decided that adding indexes on the relevant columns used for the lookup data might speed up the import process.  Sure enough, after adding the indexes and re-running the tweet import, we improve from 420 to 1600 tweets per second.  A good improvement, but not an order-of-magnitude improvement.  Dave wants to know if we can change the architecture of our application and go even faster.  What if we want to try to achieve a 10x increase in performance?

Dave states that, since his laptop has multiple cores, we should start by changing the application architecture to make better use of parallel processing across all of the available cores in the machine.  We set up an instance of the RabbitMQ message queue, allowing us to read the tweet data in from the files and send it to a RabbitMQ queue.  The queue is explicitly set to durable and each message is set to persistent in order to ensure we have the ability to continue where we left off in the event of a crash or server failure.  From here, we can have multiple instances of another application that pull messages off the queue, leveraging RabbitMQ’s ability to effectively isolate each of the client consumers, ensuring that the same message is not sent to more than one client.  Dave then sets up Redis.  This will be used for the “lookups” that are required when adding tweet data.  So users (and other data) are added to the database first, then all of that data is cached in Redis, which is an in-memory key/value store often used for caching scenarios.  As tweets are read from the RabbitMQ message queue, the required ID/key lookups for users and other data are done using the Redis cache rather than performing a SQL Server query, thus improving the performance.  Once processed and the relevant values looked up from Redis, Dave uses SQL Server Bulk Copy to get the data into SQL Server itself.  SQL Server Bulk Copy provides a significant performance benefit over using standard INSERT T-SQL statements.  For Dave’s purposes, he bulk-copies the data into temporary tables within SQL Server and then, at the end of the import, runs a single T-SQL statement to copy the data from the temporary tables to the real tables.
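
The cache-aside lookup at the heart of this pipeline can be sketched as follows (in Python, with a plain dict standing in for Redis and a counter standing in for SQL Server; all names are my own):

```python
db_hits = []

def query_db(username):
    # Stand-in for a SQL Server lookup: record the hit, return a fake ID.
    db_hits.append(username)
    return len(db_hits)

def lookup_user_id(username, cache):
    # Cache-aside: try the in-memory cache first, fall back to the
    # database on a miss, and populate the cache for next time.
    user_id = cache.get(username)
    if user_id is None:
        user_id = query_db(username)
        cache[username] = user_id
    return user_id

cache = {}
first = lookup_user_id("andy", cache)   # miss: goes to the database
second = lookup_user_id("andy", cache)  # hit: served from the cache
print(first == second, len(db_hits))    # prints True 1
```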

Having re-architected the solution in this manner, Dave then runs his import of the 6GB of tweet data again.  As he’s now able to take advantage of the multiple CPU cores available, he runs 3 console applications in parallel to process all of the data.  Each console application completes its job within around 30 seconds, and whilst they’re running, Dave shows us the Redis dashboard, which indicates that Redis is receiving around 800,000 hits to the cache per second!  The result, ultimately, is that the application’s performance has increased from processing around 1600 tweets per second to around 20,000!  An impressive improvement indeed!

Dave then looks at some potential downsides of such re-architecture and performance gains.  He shows how he’s actually getting duplicated data imported into his SQL Server, and this is likely due to race conditions and concurrency issues at one or more points within the processing pipeline.  Dave then quickly shows us how he’s worked around this problem with some rather ugly-looking C# code within the application (using temporary generic List and Dictionary structures to pre-process the data).  Due to the added complexity that this performance improvement brings, Dave argues that sometimes slower is actually faster: if you don’t absolutely need so much raw speed, you can remove things like Redis from the architecture, slowing down the processing (albeit to a still acceptable level) but allowing a lot of simplification of the code.

After Dave’s talk was over, it was time for lunch.  The attendees made their way to the main conference lounge area where we could choose between lunch bags of sandwiches (meat, fish and vegetarian options available) or a salad option (again, meat and vegetarian options).

Being a recently converted vegetarian, I opted for a lovely Greek Salad and then made my way to the outside area along with, it would seem, most of the other conference attendees.

It had turned into a glorious summer’s day in Reading by now and the attendees, some speakers and other conference staff were enjoying the lovely sunshine outdoors while we ate our lunch.  We didn’t have too long to eat, though, as there were a number of Grok talks taking place inside one of the major conference rooms over lunchtime.  After finishing my lunch (food and drink were not allowed in the session rooms themselves) I headed back towards Chicago 1, where the Grok talks were to be held.

I’d managed to get there early enough in order to get a seat (these talks are usually standing room only) and after settling myself in, we waited for the first speaker.  This was to be Gary Short with a talk about Markov, Trump and Countering Radicalisation.

Gary starts by asking, “What is radicalisation?”.  It’s really just persuasion – being able to persuade people to hold an extreme view of some information.  Counter-radicalisation is being able to persuade people to hold a much less extreme belief.  This is hard to achieve, and Gary says that it’s due to such things as cognitive dissonance and the “backfire” effect (closely related to confirmation bias) – two subjects that he doesn’t have time to go into in a 10 minute grok talk, but he suggests we Google them later!  So, how do we process text to look for signs of radicalisation?  Well, we start with focus groups and corpora of known good text, as well as data from previously de-radicalised people (answers to questions like “What helped you change?” etc.).  Gary says that Markov chains are used to process the text.  They’re a way of “flowing” through a group of words in such a way that the next word is determined based upon statistical data known about the current word and what’s likely to follow it.  Finally, Gary shows us a demo of some Markov chains in action with a C# console application that generates random tweet-length sentences based upon analysis of a corpus of text from Donald Trump.  His application is called TweetLikeTrump and is available online.
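
A minimal Markov chain text generator of the kind Gary describes can be sketched in a few lines of Python (my own illustration, not Gary’s code): learn which words follow each word in a corpus, then walk the chain by repeatedly sampling a follower of the current word:

```python
import random

def build_chain(text):
    # For each word in the corpus, record the words observed to follow it.
    words = text.split()
    chain = {}
    for current, following in zip(words, words[1:]):
        chain.setdefault(current, []).append(following)
    return chain

def generate(chain, start, length=10, seed=None):
    # Walk the chain: repeatedly pick a random follower of the last word.
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

chain = build_chain("we will make this great and we will win big")
print(generate(chain, "we", length=6, seed=1))
```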

The next Grok talk is Ian Johnson’s Sketch Notes.  Ian starts by defining Sketch Notes.  They’re visual notes that you can create for capturing the information from things like conferences, events etc.  He states that he’s not an artist, so don’t think that you can’t get started doing Sketch Noting yourself.  Ian talks about how to get started with Sketch Notes.  He says it’s best to start by simply practising your handwriting!  Find a clear way of quickly and clearly writing down text using pen & paper and then practise this over and over.  Ian shares his own structure for a sketch note: he places the talk title and the date in the top left and right corners, and the event title and the speaker name / twitter handle in the bottom corners.  He goes on to talk more about the component parts of a sketch note.  Use arrows to create a flow between the ideas and text that capture the individual concepts from the talk.  Use colour and decoration to emphasise specific and important points – but try to do this when there’s a “lull” in the talk and you don’t necessarily have to be concentrating on the talk 100%, as it’s important to remember that the decoration is not as important as the content itself.  Ian shares a specific example: if the speaker mentions a book, Ian will draw a little book icon and simply write the title of the book next to the icon.  He says drawing people is easy and can be done with a simple circle for the head and no more than 4 or 5 lines for the rest of the body.  Placing the person’s arms in different positions helps to indicate expression.  Finally, Ian says that if you make a mistake in the drawing (and you almost certainly will, at least when starting out), make a feature of it!  Draw over the mistake to create some icon that might be meaningful in the context of the talk, or create a fancy stylised bullet point and repeat that “mistake” elsewhere to make it look intentional!  Ian has blogged about his talk here.

Next up is Christos Matskas’s grok talk on Becoming an awesome OSS contributor.  Christos starts by asking the audience who is using open source software?  After a little thought, virtually everyone in the room raises their hand, as we’re all using open source software in one way or another.  This is usually because we’ll be working on projects that use at least one NuGet package, and NuGet packages are mostly open source.  Christos shares the start of his own journey into becoming an OSS contributor, which started with him messing up a NuGet package restore on his first day at a new contract.  This led to him documenting the fix he applied, which was eventually seen by the NuGet maintainers, and he was invited to write some documentation for them.  Christos talks about major organisations using open source software, including Apple, Google and Microsoft, as well as the US Department of Defense and the City of Munich, to name but a few.  Getting involved in open source software yourself is a great way to give back to the community that we’re all invariably benefiting from.  It’s a great way to improve your own code and to improve your network of peers and colleagues.  It’s also great for your CV/resume.  In the US, it’s almost mandatory that you have code on a GitHub profile.  Even in the UK and Europe, having such a profile is not mandatory, but it is a very big plus to your application when you’re looking for a new job.  You can also get free software tools to help you with your coding: companies such as JetBrains, RedGate and many, many others will frequently offer their software products for free, or at a heavily discounted price, to open source projects.  Finally, Christos shares a few websites that you can use to get started in contributing to open source, such as up-for-grabs.net, www.firsttimersonly.com and the twitter account @yourfirstPR.

The final grok talk is Rik Hepworth’s Lability.  Lability is a PowerShell module, available via GitHub, that uses PowerShell’s DSC (Desired State Configuration) feature to facilitate the automated provisioning of complete development and testing environments using Windows Hyper-V.  The tool extends the PowerShell DSC commands to add metadata that can be understood by the Lability tool to configure not only the virtual machines themselves (i.e. the host machine, networking etc.) but also the software that is deployed and configured on the virtual machines.  Lability can be used to automate such provisioning not only on Azure itself, but also on a local development machine.

IMG_20170610_141738After the grok talks were over, I had a little more time available in the lunch break to grab another quick drink before once again examining the agenda to see where to go for the first session of the afternoon, and penultimate session of the day.  This one was Ian Russell’s Strategic Domain-Driven Design.

Ian starts his talk by mentioning the “bible” for Domain Driven Design and that’s the book by Eric Evans of the same name.  Ian asks the audience who has read the book to which quite a few people raise their hands.  He then asks who has read past Chapter 14, to which a lot of people put their hands down.  Ian states that, if you’ve never read the book, the best way to get the biggest value from the book is to read the last 1/3rd of the book first and only then return to read the first 2/3rds!

So what is Domain-Driven Design?  It’s an abstraction of reality, attempting to model the real-world we see and experience before us in a given business domain.  Domain-Driven Design (DDD) tries to break down the domain into what is the “core” business domain – these are the business functions that are the very reason for a business’s being – and other domains that exist that are there to support the “core” domain.  One example could be a business that sells fidget spinners.  The actual domain logic involved with selling the spinners to customers would be the core domain, however, the same company may need to provide a dispute resolution service for unhappy customers.  The dispute resolution service, whilst required by the overall business, would not be the core domain but would be a supporting or generic domain.

DDD has a ubiquitous language.  This is the language of the business domain and not the technical domain.  Great care should be taken not to use technical terms when discussing domains with business stakeholders, and the terms within the language should be ubiquitous across all domains and across the entire business.  Having this ubiquitous language reduces the chance of ambiguity and ensures that everyone can relate to specific component parts of the domain using the same terminology.  DDD has sub-domains, too.  These are the core domain – the main business functionality; the supporting domains – which exist solely to support the core; and generic domains – such as the dispute resolution domain, which supports the core domain but is generic enough that it could apply to other businesses too.  DDD has bounded contexts.  These are like sub-domains but don’t necessarily map directly to sub-domains.  They are explicit boundaries separating one area of the business from the others.  Most importantly, bounded contexts can be developed in software independently from each other.  They could be written by different development teams and could even use entirely different architectures and technologies in their construction.

Ian talks about driving out the core concepts by creating a “shared kernel”.  These are the concepts that exist in the overlaps between bounded contexts.  These concepts don’t have to be modelled identically in each bounded context, and often they differ – the concept of an “account” might mean something different within the finance bounded context than the concept of an “account” within the shipping bounded context, for example.  Ian talks about the concept of an “anti-corruption layer” as part of each bounded context.  This allows bounded contexts to communicate with items from the shared kernel but, where those concepts do differ between contexts, the anti-corruption layer prevents incorrect implementations of a concept from being passed into the context and corrupting it.  Ian mentions domain events next.  He says that these are something that is not within Eric Evans’ book but is often documented in other DDD literature.  Domain events are essentially just “things that occur” within the domain.  For example, a new customer registering on the company’s website is a domain event.  Events can be created by users, by the passage of time, by external systems, or even by other domain events.
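As an entirely hypothetical illustration of the “account” example, an anti-corruption layer might be sketched like this (the Finance/Shipping models, the translation rule and all names are my own invention, not from Ian’s talk):

```csharp
using System;

namespace Finance
{
    // Finance's model of an account: money owed.
    public class Account
    {
        public string AccountNumber { get; set; }
        public decimal Balance { get; set; }
    }
}

namespace Shipping
{
    // Shipping's model of an "account" is a different concept entirely.
    public class Account
    {
        public string CustomerReference { get; set; }
        public bool CanReceiveDeliveries { get; set; }
    }

    // The anti-corruption layer: the only place that knows both models,
    // so finance's representation never leaks into the shipping context.
    public static class FinanceAccountTranslator
    {
        public static Account Translate(Finance.Account financeAccount)
        {
            return new Account
            {
                CustomerReference = financeAccount.AccountNumber,
                // A shipping-context rule, expressed here rather than
                // leaked from finance: customers in arrears can't
                // receive deliveries.
                CanReceiveDeliveries = financeAccount.Balance >= 0
            };
        }
    }
}
```

The key point is that each context keeps its own model, and the translator is the single, explicit place where the two meet.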

All of this is strategic domain-driven design.  It’s the modelling and understanding of the business domain without ever letting any technological considerations interfere with that modelling and understanding.  It’s simply good architectural practice, and the understanding of how different parts of the business domain interact and communicate with other parts of the business domain.

Ian suggests that there are three main ways in which to achieve strategic domain-driven design.  These are Behaviour-Driven Development (BDD), Prototyping and Event Storming.  BDD involves writing acceptance tests for our software application using a domain-specific language within our application code that allows the tests to use the ubiquitous language of the domain, rather than the explicitly technical language of the software solution.  BDD facilitates engagement with business stakeholders in the development of the acceptance tests that form part of the software solution.  One common way to do this is to run “three amigos” sessions which allow developers, QA/testers and domain experts to write the BDD tests together, usually in the standard Given-When-Then style.  Prototyping consists of producing images and “wireframes” that give an impression of how a completed software application could look and feel.  Prototypes can be low-fidelity – just simple static images and mock-ups – but it’s better if you can produce high-fidelity prototypes, which allow varying levels of interaction with the prototype.  Tools such as Balsamiq and InVision, amongst others, can help with the production of high-fidelity prototypes.  Event storming is a particular format of meeting or workshop in which developers and domain experts collaborate on the production of a large paper artefact that contains numerous sticky notes of varying colours.  The sticky notes’ colours represent various artefacts within the domain, such as events, commands, users, external systems and others.  Sticky notes are added, amended, removed and moved around the paper on the wall by all meeting attendees.  The resulting sticky notes tend to cluster naturally into the various bounded contexts of the domain, allowing the overall domain design to emerge.
If you run your own Event Storming session, Ian suggests starting by trying to drive out the domain events first and, for each event, attempting first to work backwards to find the cause or causes of that event, then to work forwards, investigating what the event itself causes – perhaps further events, or the requirement for user intervention etc.
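A Given-When-Then acceptance test of the kind described above might be sketched in plain C# as follows (no particular BDD framework is assumed here, and the dispute-resolution scenario and all names are hypothetical, chosen only to echo the ubiquitous-language idea):

```csharp
using System;

// A tiny slice of hypothetical domain model, named in the
// ubiquitous language of the (invented) dispute-resolution domain.
public class Dispute
{
    public string Status { get; private set; } = "Open";
    public void Resolve() => Status = "Resolved";
}

public static class DisputeResolutionScenario
{
    // Given a customer has raised a dispute,
    // When the dispute is resolved,
    // Then the dispute's status should be "Resolved".
    public static bool Run()
    {
        var dispute = new Dispute();          // Given
        dispute.Resolve();                    // When
        return dispute.Status == "Resolved";  // Then
    }
}
```

In a real project a framework such as SpecFlow would bind each Given/When/Then step to a sentence the three amigos agreed on, but the structure of the test is the same.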

IMG_20170610_145159Ian rounds off his talk by sharing some of his own DDD best practices.  We should strive for creative collaboration between developers and domain experts at all stages of the project, and foster an environment that allows exploration and experimentation in order to find the best model for the domain – which may not be the first model that is found.  Ian states that determining the explicit context boundaries is far more important than finding a perfect model, and that the focus should always be primarily on the core domain, as this is the area that will bring the greatest value to the business.

After Ian’s talk was over, it was time for another coffee break, the last of the day.  I grabbed a coffee and once again checked my agenda sheet to see where to head for the final session of the day.   This last session was to be Stuart Lang’s Async in C#, the Good, the Bad and the Ugly.

IMG_20170610_153023Stuart starts his session by asking the audience to wonder why he’s doing a session on Async in C# today.  It’s 2017 and Async/Await has been around for over 6 years!  Simply put, Stuart says that, whilst Async/Await code is fairly ubiquitous these days, there are still many mistakes made in some implementations, and the finer points of asynchronous code are not as well understood.  We learn how Async is really an abstraction, much like an ORM tool.  If you use it in a naïve way, it’s easy to get things wrong.

Stuart mentions Async’s good parts.  We can essentially perform non-blocking waiting for background I/O operations, thus allowing our user interfaces to remain responsive.  But then come the bad parts.  It’s not always entirely clear what code is actually asynchronous.  If we call an Async method in someone else’s library, there’s no guarantee that what we’re really executing is asynchronous code, even if the method returns a Task<T>.  Async can sometimes lead to the need to duplicate code, for example when your code has to provide both asynchronous and synchronous versions of each method.  Also, Async can’t be used everywhere: it can’t be used in a class constructor or inside a lock statement.

IMG_20170610_155205One of the worst side-effects of poor Async implementation is deadlocks.  Stuart states that, when it comes to Async, doing it wrong is worse than not doing it at all!  Stuart shows us a simple Async method that uses some Task.Delay methods to simulate background work.  We then look at what the Roslyn compiler actually translates the code to, which is a large amount of code that effectively implements a complete state machine around the actual method’s code. 
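A trivial async method of the kind Stuart showed might look like the following (this is my own reconstruction, not his actual demo code).  The Roslyn compiler rewrites a method like this into a type implementing IAsyncStateMachine, with one state per await point:

```csharp
using System;
using System.Threading.Tasks;

public static class AsyncDemo
{
    // Uses Task.Delay to simulate background work, as in the demo.
    // The compiler-generated state machine suspends the method at each
    // await and resumes it (possibly on another thread) when the awaited
    // task completes.
    public static async Task<int> DoWorkAsync()
    {
        await Task.Delay(10);   // state 0: suspend here, resume later
        var partial = 21;
        await Task.Delay(10);   // state 1: suspend again
        return partial * 2;
    }
}
```

Decompiling the release build of such a method (with a tool like ILSpy) makes the size of the generated state machine very apparent compared with the four-line original.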

IMG_20170610_160038Stuart then talks about the Synchronization Context.  This is an often misunderstood part of Async that allows awaited code, which can be resumed on a different thread, to synchronize with other threads.  For example, if awaited code needs to update some element on the user interface in a WinForms or WPF application, it will need to synchronize that change to the UI thread; it can’t be performed by the thread-pool thread that the awaited code would otherwise be running on.  Stuart talks about blocking on Async code, for example:

var result = MyAsyncMethod().Result;

We should try to never do this!  Doing so can cause the code to deadlock on the synchronization context, as awaited code within the Async method may be trying to resume on the very same synchronization context that is already blocked by the “outer” code performing the .Result on the main Async method.
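One common library-side mitigation, not covered above but closely related, is ConfigureAwait(false), which tells the continuation after an await not to resume on the captured synchronization context.  A minimal sketch (my own example; note that a console app has no synchronization context, so neither version deadlocks there):

```csharp
using System;
using System.Threading.Tasks;

public static class ConfigureAwaitDemo
{
    // Under a UI or classic ASP.NET SynchronizationContext, blocking on
    // this method with .Result can deadlock: the continuation after the
    // await needs the context that the blocking call is occupying.
    public static async Task<string> GetValueAsync()
    {
        await Task.Delay(10);
        return "done";
    }

    // ConfigureAwait(false) lets the continuation run on a thread-pool
    // thread instead of the captured context, avoiding that deadlock.
    public static async Task<string> GetValueNoContextAsync()
    {
        await Task.Delay(10).ConfigureAwait(false);
        return "done";
    }
}
```

This only helps inside code you control; it does nothing for third-party async methods that don’t use it, which is why the blocking patterns Stuart discusses next are sometimes needed.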

Stuart then shows us some sample code that runs as an ASP.NET page with the Async code being called from within the MVC controller.  He outputs the threads used by the Async code to the browser, and we then examine variations of the code to see how each awaited piece of code uses either the same or different threads.  One way of overcoming the blocking issue when using .Result at the end of an Async method call is to write code similar to:

var result = Task.Run(() => MyAsyncMethod().Result).Result;

It's messy code, but it works.  However, code like this should be heavily commented, because if someone removes that outer Task.Run(), the code will start blocking again and will fail miserably.

Stuart then talks about the vs-threading library which provides a JoinableTaskFactory.   Using code such as:

jtf.Run(() => MyAsyncMethod())

ensures that awaited code resumes on the same thread that's blocked.  Stuart shows the output from his same ASP.NET demo when using the JoinableTaskFactory, and this time all of the various awaited blocks of code can be seen to always run and resume on the same thread.

IMG_20170610_163019Finally, Stuart shares some of the best practices around deadlock prevention.  He asserts that the ultimate goal for prevention has to be an application that can provide Async code from the very “bottom” level of the code (i.e. that which is closest to I/O) all the way up to the “top” level, where it’s exposed to other client code or applications.

IMG_20170610_164756After Stuart’s talk is over, it’s time for the traditional prize/swag giving ceremony.  All of the attendees start to gather in the main conference lounge area and await the attendees from the other sessions which are finishing shortly afterwards.  Once all sessions have finished and the full set of attendees are gathered, the main organisers of the conference take time to thank the event sponsors.  There are quite a few of them and, without them, there simply wouldn’t be a DDD conference.  For this, the sponsors get a rapturous round of applause from the attendees.

After the thanks, there’s a small prize giving ceremony, with many of the prizes being given away by the sponsors themselves.  As usual, I didn’t win a prize – but given that there were only around half a dozen prizes, I was far from alone!

It only remained for the organisers to announce the next conference in the DDD calendar which, although it doesn’t have a specific date at the moment, will take place in October 2017.  This one is DDD North.  There’s also a surprise announcement of a DDD in Dublin to be held in November 2017, which will be a “double capacity” event, catering for around 600 – 800 conference delegates.  Now there’s something to look forward to!

So, another wonderful DDD conference was wrapped up, and there’s not long to wait until it’s time to do it all over again!

UPDATE (20th June 2017):

As with last year, Kevin O'Shaughnessy was also in attendance at DDD12 and he has blogged his own review and write-up of the various sessions that he attended.  Many of the sessions Kevin attended were different from the ones I attended, so check out his blog for a fuller picture of the day's many great sessions.


DDD South West 2017 In Review

IMG_20170506_090757This past Saturday 6th May 2017, the seventh iteration of the DDD South West conference was held in Bristol, at the Redcliffe Sixth Form Centre.

This was the second DDD South West conference I’d attended, the previous one being in 2015 (they’d had a year off in 2016), and on returning in 2017 I found the DDD South West team had put on another fine DDD conference.

Since the conference is in Bristol, I’d stayed over the evening before in Gloucester in order to break up the long journey down to the south west.  After an early breakfast on the Saturday morning, I made the relatively short drive from Gloucester to Bristol in order to attend the conference.

IMG_20170506_085557Arriving at the Redcliffe centre, we were signed in and made our way to the main hall where we were treated to teas, coffees and Danish pastries.  I’d remembered these from the previous DDDSW conference; they were excellent and very much enjoyed by the arriving conference attendees.

After a short while, in which I had another cup of coffee and the remaining attendees arrived at the conference, we had a brief introduction from the organisers, running through the list of sponsors who help to fund the event and some other housekeeping information.  Once this was done, it was time for us to head off to the relevant room to attend our first session.  DDDSW 7 had 4 separate tracks; for 2 of the event slots there was also a 5th track, the Sponsor track – this is where one of the technical people from the event’s primary sponsor gives a talk on some of the tech that they’re using, and these are often a great way to gain insight into how other people and companies do things.

I headed off down the narrow corridor to my first session.  This was Ian Cooper’s 12-Factor Apps In .NET.

IMG_20170506_092930Ian starts off by introducing the 12 factors: 12 common best practices for building web based applications, especially in the modern cloud-first world.  There’s a website for the 12 factors that details them.  The 12 factors and the website itself were first assembled by the team behind the Heroku cloud application platform, as the set of best practices shared by most of the most successful applications running on their platform.

Ian states how the 12 factors cover 3 broad categories: Design, Build & Release, and Management.  We first look at getting set up with a new project when you’re a new developer on a team.  Setup automation should be declarative.  It should ideally take no more than 10 minutes for a new developer to go from a clean development machine to a working build of the project.  This is frequently not the case for most teams, but we should work to reduce the friction and the time it takes to get developers up and running with the codebase.  Ian also states that a single application, although it may consist of many different individual projects and “moving parts”, should remain together in a single source code repository.  Although you’re keeping multiple distinct parts of the application together, you can still easily build and deploy the individual parts independently, based upon individual .csproj files.

Next, we talk about application servers.  We should try to avoid application servers if possible.  What is an application server?  Well, it can be many things, but the most obvious one in the world of .NET web applications is Internet Information Services itself.  IIS is an application server, and we often require it for our code to run within.  IIS has a lot of configuration of its own, and this means that our code becomes tied to, and dependent upon, that specifically configured application server.  An alternative to this is to have our code self-host, which is entirely possible with ASP.NET Web API and ASP.NET Core.  By self-hosting, and running our endpoint on a specific port to differentiate it from other services that may also run on the same server, we ensure our code is far more portable and platform agnostic.

Ian then talks about backing services.  These are the concerns such as databases, file systems etc. that our application will inevitably need to talk to.  But we need to ensure that we treat them as the potentially ephemeral systems that they are, and therefore accept that such a system may not even have a fixed location.  Using a service discovery service, such as Consul.io, is a good way to remove our application’s dependence on a required service being in a specific place.

Ian mentions the ports and adapters architecture (aka hexagonal architecture) for organising our codebase itself.  He likes this architecture as it’s not only a very clean way to separate concerns and keep the domain at the heart of the application model, but it also works well in the context of a “12-factor” compliant application, as the terminology (specifically around the use of “ports”) is similar and consistent.  We then look at the performance of our overall application.  Client requests to our front-end website should be responded to in around 200-300 milliseconds.  If, as part of that request, there’s some long-running process that needs to be performed, we should offload that processing to some background task or external service, which can update the web site “out-of-band” when the processing is complete, allowing the front-end website to render the initial page very quickly.

This leads on to talking about ensuring our services can start up very quickly, ideally in no more than a second or two, and shut down gracefully too.  If we have slow startup, it’s usually because of the need to build complex state, so, like the web front-end itself, we should offload that state building to a background task.  If our service absolutely needs that state before it can process incoming requests, we can buffer or queue the early requests that the service receives immediately after startup until our background initialisation is complete.  As an aside, Ian mentions supervisord as a handy tool to keep our services and processes alive.  One great thing about keeping our startup fast and lean, and our shutdown graceful, is that we essentially have elastic scaling with that service!
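That idea of buffering early requests until background-built state is ready might be sketched with a TaskCompletionSource along these lines (a hypothetical sketch of mine, not code from Ian’s talk):

```csharp
using System;
using System.Threading.Tasks;

// Startup stays fast because the expensive state-building happens in the
// background; requests that arrive before it finishes simply await it.
public class Service
{
    private readonly TaskCompletionSource<string> _state =
        new TaskCompletionSource<string>();

    public Service()
    {
        // Kick off the slow initialisation without blocking construction.
        Task.Run(async () =>
        {
            await Task.Delay(50);                 // simulate slow state building
            _state.SetResult("warmed-up state");
        });
    }

    // Early requests are implicitly queued on the awaited task; once the
    // state is ready, they all proceed, and later requests never wait.
    public async Task<string> HandleRequestAsync(string request)
    {
        var state = await _state.Task;
        return request + " handled with " + state;
    }
}
```

Because construction is near-instant and no request is lost during warm-up, instances of a service like this can be started and stopped freely, which is exactly what makes elastic scaling practical.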

Ian now starts to show us some demo code that demonstrates many of the 12 best practices within the “12-factors”.  He uses the ToDoBackend website as a repository of sample code that frequently follows the 12-factor best practices.  Ian specifically points out the C# ASP.NET Core implementation of ToDoBackend as it was contributed by one of his colleagues.  The entire ToDoBackend website is a great place where many backend frameworks from many different languages and platforms can be compared and contrasted.

Ian talks about the ToDoBackend ASP.NET Core implementation and how it is built to use his own libraries Brighter and Darker.  These are open-source libraries implementing the command dispatcher/processor pattern which allow building applications compliant with the CQRS principle, effectively allowing a more decomposed and decoupled application.  MediatR is a similar library that can be used for the same purposes.  The Brighter library wraps the Polly library which provides essential functionality within a highly decomposed application around improved resilience and transient fault handling, so such things as retries, the circuit breaker pattern, timeouts and other patterns are implemented allowing you to ensure your application, whilst decomposed, remains robust to transient faults and errors – such as services becoming unavailable.
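As a rough illustration of the kind of transient-fault handling that Polly provides, a hand-rolled retry sketch might look like this (this is emphatically not Polly’s actual API, just the underlying pattern):

```csharp
using System;
using System.Threading.Tasks;

// A hand-rolled sketch of the retry pattern; a real policy library like
// Polly adds exception filtering, back-off strategies, circuit breakers etc.
public static class Retry
{
    public static async Task<T> ExecuteAsync<T>(
        Func<Task<T>> action, int maxAttempts, TimeSpan delay)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                return await action();
            }
            catch (Exception) when (attempt < maxAttempts)
            {
                // Treat the failure as transient and wait before retrying.
                // On the final attempt the filter is false, so the
                // exception propagates to the caller.
                await Task.Delay(delay);
            }
        }
    }
}
```

Wrapping calls to a flaky backing service this way keeps a decomposed application robust when one of its collaborators is briefly unavailable.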

Looking back at the source code, we should ensure that we explicitly isolate and declare our application’s dependencies.  This means not relying on pre-installed frameworks or other library code that already exists on a given machine.  For .NET applications, this primarily means depending only on locally installed NuGet packages and specifically avoiding referencing anything installed in the GAC.  Other dependencies, such as application configuration – and especially configuration that differs from environment to environment, i.e. database connection strings for development, QA and production databases – should be kept within the environment itself rather than existing within the source code.  We often do this with web.config transformations, although it’s better if our web.config merely references external environment variables, with those variables being defined and maintained elsewhere.  Ian mentions HashiCorp’s Vault project and the Spring Boot projects as ways in which you can maintain environment variables and other application “secrets”.  An added bonus of using such a setup is that your application secrets are never added to your source code, meaning that, if your code is open source and available on something like GitHub, you won’t be exposing sensitive information to the world!
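Reading such configuration from the environment rather than from source code might look like this (the APP_DB_CONNECTION variable name is my own invention for the example):

```csharp
using System;

// Configuration lives in the environment, not in the source code; each
// deployment environment (dev, QA, production) sets its own value.
public static class AppConfig
{
    public static string GetConnectionString()
    {
        var value = Environment.GetEnvironmentVariable("APP_DB_CONNECTION");
        if (string.IsNullOrEmpty(value))
            throw new InvalidOperationException(
                "APP_DB_CONNECTION is not set for this environment.");
        return value;
    }
}
```

Failing fast with a clear message when the variable is missing is deliberate: a misconfigured environment should be obvious at startup, not at the first database call.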

Finally, we turn to application logging.  Ian states how we should treat all of our application’s logs as event streams.  This means we can view our logs as a stream of aggregated, time-ordered events from the application and should flow continuously so long as the application is running.  Logs from different environments should be kept as similar as possible. If we do both of these things we can store our logs in a system such as Splunk, AWS CloudWatch Logs, LogEntries or some other log storage engine/database.  From here, we can easily manipulate and visualize our application’s behaviour from this log data using something like the ELK stack.

After a few quick Q&A’s, Ian’s session was over and it was time to head back to the main hall where refreshments awaited us.  After a nice break with some more coffee and a few nice biscuits, it was quickly time to head back through the corridors to the rooms where the next sessions were to be held.  For this next session, I decided to attend the sponsors track, which was Ed Courtenay’s “Growing Configurable Software With Expression<T>”.

IMG_20170506_105002Ed’s session was a code-heavy session, based around a single simple application that filters a set of geographic data against a number of filters.  The session looked at how the configurability of the filtering, as well as its performance, could be improved, with each successive iteration refactoring the application towards an ever more “expressive” implementation.  Ed starts by asking what we define as configurable software.  We all agree that software that can change its behaviour at runtime is a good definition, and Ed says that we can also think of configurable software as a way to build software from individual building blocks.  We’re then introduced to the data that we’ll be using within the application, which is the geographic data that’s freely available from geonames.org.

From here, we dive straight into some code.  Ed shows us the IFilter interface, which exposes a single method that provides a filter function:

using System;

namespace ExpressionDemo.Common
{
    public interface IFilter
    {
        Func<IGeoDataLocation, bool> GetFilterFunction();
    }
}

The implementation of the IFilter interface is fairly simple and combines a number of actual filter functions that filter the geographic data by various properties, country, classification, minimum population etc.

public Func<IGeoDataLocation, bool> GetFilterFunction()
{
    return location => FilterByCountryCode(location) && FilterByClassification(location)
    && FilterByPopulation(location) && FilterByModificationDate(location);
}

Each of the actual filtering functions is initially implemented as a simple loop, iterating over the set of allowed data (i.e. the allowed country codes) and testing the currently processed geographic data row against the allowed values to determine if the data should be kept (returning true from the filter function) or discarded (returning false):

private bool FilterByCountryCode(IGeoDataLocation location)
{
	if (!_configuration.CountryCodes.Any())
		return true;
	
	foreach (string countryCode in _configuration.CountryCodes) {
		if (location.CountryCode.Equals(countryCode, StringComparison.InvariantCultureIgnoreCase))
			return true;
	}
	return false;
}

Ed shows us his application with all of his filters implemented in this way, and we see the application running over a reduced geographic data set of approximately 10,000 records.  We process all of the data in around 280ms; however, Ed tells us that this is really a very naive implementation and that, with only a small improvement in the filter implementations, we can do better.  From here, we look at the same filters, this time implemented as a Func<T, bool>:

private Func<IGeoDataLocation, bool> CountryCodesExpression()
{
	IEnumerable<string> countryCodes = _configuration.CountryCodes;

	string[] enumerable = countryCodes as string[] ?? countryCodes.ToArray();

	if (enumerable.Any())
		return location => enumerable
			.Any(s => location.CountryCode.Equals(s, StringComparison.InvariantCultureIgnoreCase));

	return location => true;
}

We're doing the exact same logic but, instead of iterating over the allowed country codes list, we're being more declarative and simply returning a Func<> which performs the selection/filter for us.  All of the other filters are re-implemented this way, and Ed re-runs his application.  This time, we process the exact same data in around 250ms.  We're starting to see an improvement.  But we can do more.  Instead of returning Func<>'s, we can go even further and return an Expression.

We pause looking at the code to discuss Expressions for a moment.  These are really "code-as-data".  This means that we can decompose the algorithm that performs some specific type of filtering and express that algorithm as a data structure or tree.  This is a really powerful way of expressing our algorithm and allows our application to simply "apply" our input data to the expression functionally, rather than having to iterate or loop over lists as we did in the very first implementation.  What this means for our algorithms is increased flexibility in the construction of the algorithm, but also increased performance.  Ed shows us the filter implemented using an Expression:

private Expression<Func<IGeoDataLocation, bool>> CountryCodesExpression()
{
	var countryCodes = _configuration.CountryCodes;

	var enumerable = countryCodes as string[] ?? countryCodes.ToArray();
	if (enumerable.Any())
		return enumerable.Select(CountryCodeExpression).OrElse(); // OrElse(): custom extension in Ed's code that OR-combines the expressions

	return location => true;
}

private static Expression<Func<IGeoDataLocation, bool>> CountryCodeExpression(string code)
{
	return location => location.CountryCode.Equals(code, StringComparison.InvariantCultureIgnoreCase);
}

Here, we've simply taken the Func<> implementation and effectively "wrapped" it in an Expression.  So long as our Func<> is a single expression statement, rather than a multi-statement function wrapped in curly brackets, we can trivially turn the Func<> into an Expression:

public class Foo
{
	public string Name
	{
		get { return this.GetType().Name; }
	}
}

// Func<> implementation
Func<Foo,string> foofunc = foo => foo.Name;
Console.WriteLine(foofunc(new Foo()));

// Same Func<> as an Expression
Expression<Func<Foo,string>> fooexpr = foo => foo.Name;
var func = fooexpr.Compile();
Console.WriteLine(func(new Foo()));
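The "code-as-data" nature of an Expression is easy to see if we inspect the tree before compiling it. A small sketch (re-declaring the Foo class so the snippet stands alone):

```csharp
using System;
using System.Linq.Expressions;

public class Foo
{
    public string Name => GetType().Name;
}

public static class ExpressionAsData
{
    public static void Demo()
    {
        Expression<Func<Foo, string>> fooexpr = foo => foo.Name;

        // The lambda is a tree of nodes we can walk and rewrite as data...
        Console.WriteLine(fooexpr.Body.NodeType);                        // MemberAccess
        Console.WriteLine(((MemberExpression)fooexpr.Body).Member.Name); // Name

        // ...and it only becomes executable code when we ask for it
        var func = fooexpr.Compile();
        Console.WriteLine(func(new Foo()));                              // Foo
    }
}
```

It's this ability to examine and recombine the tree before compilation that gives the Expression-based filters their flexibility.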

Once again, Ed refactors all of his filters to use Expression functions and we again run the application.  And once again, we see an improvement in performance, this time processing the data in around 240ms.  But, of course, we can do even better!

The next iteration of the application has us again using Expressions, however, instead of merely wrapping the previously defined Func<> in an Expression, this time we're going to write the Expressions from scratch.  At this point, the code does admittedly become less readable as a result of this, however, as an academic exercise in how to construct our filtering using ever more lower-level algorithm implementations, it's very interesting.  Here's the country filter as a hand-rolled Expression:

private Expression CountryCodeExpression(ParameterExpression location, string code)
{
	return StringEqualsExpression(Expression.Property(location, "CountryCode"), Expression.Constant(code));
}

private Expression StringEqualsExpression(Expression expr1, Expression expr2)
{
	return Expression.Call(typeof (string).GetMethod("Equals",
		new[] { typeof (string), typeof (string), typeof (StringComparison) }), expr1, expr2,
		Expression.Constant(StringComparison.InvariantCultureIgnoreCase));
}
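To actually execute a hand-rolled tree like this, the fragments need to hang off a ParameterExpression and be wrapped in a lambda. Here's a self-contained sketch of that final step; the GeoLocation class is my stand-in for the talk's IGeoDataLocation:

```csharp
using System;
using System.Linq.Expressions;

// Stand-in for the talk's IGeoDataLocation (assumed shape)
public class GeoLocation
{
    public string CountryCode { get; set; } = "";
}

public static class HandRolled
{
    private static Expression StringEqualsExpression(Expression expr1, Expression expr2)
    {
        // string.Equals(string, string, StringComparison) as an expression-tree call
        return Expression.Call(typeof(string).GetMethod("Equals",
            new[] { typeof(string), typeof(string), typeof(StringComparison) }),
            expr1, expr2,
            Expression.Constant(StringComparison.InvariantCultureIgnoreCase));
    }

    public static Func<GeoLocation, bool> BuildCountryCodePredicate(string code)
    {
        // The ParameterExpression is the "location" everything else hangs off
        var location = Expression.Parameter(typeof(GeoLocation), "location");

        var body = StringEqualsExpression(
            Expression.Property(location, "CountryCode"),
            Expression.Constant(code));

        // Wrap the tree in a lambda and compile it down to a real delegate
        return Expression.Lambda<Func<GeoLocation, bool>>(body, location).Compile();
    }
}
```

The compiled delegate behaves exactly like the earlier Func<>-based filter, but we built every node of it by hand.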

Here, we're calling into the Expression API and manually building up the expression tree, with our algorithm broken down into operators and operands.  It's far less readable code, and it's highly unlikely you'd actually implement your algorithm in this way; however, if you're optimizing for performance, you just might, as once again we run the application and observe the results.  It is indeed slightly faster than the previous Expression function implementation, taking around 235ms.  Not a huge improvement over the previous implementation, but an improvement nonetheless.

Finally, Ed shows us the final implementation of the filters.  This is the Filter Implementation Bridge.  The code for this is quite complex and hairy so I've not reproduced it here.  It is, however, available in Ed's GitHub repository, which contains the entire code base for the session.

The Filter Implementation Bridge involves building up our filters as Expressions once again, but this time we go one step further and write the resulting code out to a pre-compiled assembly.  This is a separate DLL file, written to disk, which contains a library of all of our filter implementations.  Our application code is then refactored to load the filter functions from the external DLL rather than expecting to find them within the same project.  Because the assembly is pre-compiled and JIT'ed, when we invoke the assembly's functions we should see another performance improvement.  Sure enough, Ed runs the application after making the necessary changes and we do indeed see an improvement in performance, this time processing the data set in around 210ms.

Ed says that although we've looked at performance and the improvements in performance with each successive refactor, the same refactorings have also introduced flexibility in how our filtering is implemented and composed.  By using Expressions, we can easily change the implementation at run-time.  If we have these expressions exported into external libraries/DLLs, we could fairly trivially decide to load a different DLL containing a different function implementation.  Ed uses the example of needing to calculate VAT and the differing ways in which that calculation might need to be implemented depending upon country.  By adhering to a common interface and implementing the algorithm as a generic Expression, we can gain both flexibility and performance in our application code.

After Ed’s session it was time for another refreshment break.  We headed back through the corridors to the main hall for more teas, coffees and biscuits.  Again, with only enough time for a quick cup of coffee, it was soon time to trek back to the session rooms for the 3rd and final session before lunch.  This one was Sandeep Singh’s “JavaScript Services: Building Single-Page Applications With ASP.NET Core”.

IMG_20170506_120854Sandeep starts his talk with an overview of where Single Page Applications (SPA’s) are currently.  There’s been an explosion of frameworks in the last 5+ years that SPA’s have been around, so there’s an overwhelming choice of JavaScript libraries to choose from for both the main framework (i.e. Angular, Aurelia, React, VueJS etc.) and for supporting JavaScript libraries and build tools.  Due to this, it can be difficult to get an initial project setup and having a good cohesion between the back-end server side code and the front-end client-side code can be challenging.  JavaScriptServices is a part of the ASP.NET Core project and aims to simplify and harmonize development for ASP.NET Core developers building SPA’s.  The project was started by Steve Sanderson, who was the initial creator of the KnockoutJS framework.

IMG_20170506_121721Sandeep talks about some of the difficulties with current SPA applications.  There’s often the slow initial site load whilst a lot of JavaScript is sent to the client browser for processing and eventual display of content, and due to the nature of the application being a single page and using hash-based routing for each page’s URL, and each page being loaded because of running some JavaScript, SEO (Search Engine Optimization) can be difficult.  JavaScriptServices attempts to improve this by combining a set of SPA templates along with some SPA Services, which contains WebPack middleware, server-side pre-rendering of client-side code as well as routing helpers and the ability to “hot swap” modules on the fly.

WebPack is the current major module bundler for JavaScript, allowing many disparate client-side asset files, such as JavaScript, images, CSS/LESS/Sass etc., to be pre-compiled/transpiled and combined into a bundle for efficient delivery to the client.  The WebPack middleware also contains tooling similar to dotnet watch, which is effectively a file system watcher that can rebuild, reload and re-render your web site at development time in response to real-time file changes.  The ability to “hot swap” entire sets of assets, which is effectively a whole part of your application, is a very handy development-time feature.  Sandeep quickly shows us a demo to explain the concept better.  We have a simple SPA page with a button that, when clicked, increments a counter by 1 and displays that count on the page.  If we edit the Angular code that runs in response to the button click, we can change the “increment by 1” code to “increment by 10” instead.  Normally, you’d have to reload your entire Angular application before this code change was available to the client browser and the built-in debugging tools there.  Now, however, using JavaScriptServices’ hot-swap functionality, we can edit the Angular code and see it in action immediately, with the incrementing counter even continuing without resetting back to its initial value of 0!
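As a rough sketch of how this is wired up, the JavaScriptServices SPA templates of the time configured the WebPack middleware and fallback routing in Startup.cs along these lines (this is a configuration fragment based on the Microsoft.AspNetCore.SpaServices package, not code from Sandeep's demo; the route names are my own):

```csharp
// Startup.cs – ASP.NET Core 1.x/2.x era configuration sketch;
// requires the Microsoft.AspNetCore.SpaServices package
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.SpaServices.Webpack;

public class Startup
{
    public void Configure(IApplicationBuilder app, IHostingEnvironment env)
    {
        if (env.IsDevelopment())
        {
            // Builds webpack bundles on the fly and pushes "hot" module
            // replacements to the browser as source files change
            app.UseWebpackDevMiddleware(new WebpackDevMiddlewareOptions
            {
                HotModuleReplacement = true
            });
        }

        app.UseStaticFiles();
        app.UseMvc(routes =>
        {
            routes.MapRoute("default", "{controller=Home}/{action=Index}/{id?}");

            // "Fallback" route: URLs not matched above are handed to the SPA's
            // entry page, so client-side (or server-side pre-) rendering can take over
            routes.MapSpaFallbackRoute("spa-fallback",
                new { controller = "Home", action = "Index" });
        });
    }
}
```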

Sandeep continues by discussing another part of JavaScriptServices, “Universal JavaScript”, also known as “Isomorphic JavaScript”.  This is JavaScript which is shared between the client and the server.  How does JavaScriptServices deal with JavaScript on the server-side?  It uses a .NET library called Microsoft.AspNetCore.NodeServices, which is a managed code wrapper around NodeJS.  Using this functionality, the server can effectively run the JavaScript that is part of the client-side code and send pre-rendered pages to the client.  This is a major factor in being able to improve the initial slow load time of traditional SPA applications.  Another amazing benefit of this functionality is the ability to run an SPA application in the browser even with JavaScript disabled!  A further part of the JavaScriptServices technology that enables this is RoutingServices, which provides handling for static files and a “fallback” route for when explicit routes aren’t declared.  This means that a request to something like http://myapp.com/orders/ would normally be rendered on the client-side via the client JavaScript framework code (i.e. Angular) after an AJAX request to retrieve the mark-up/data from the server.  However, if client-side routing is unavailable to process such a route (because JavaScript is disabled in the browser, for example) then the request is sent to the server, which renders the same page (perhaps running the client-side JavaScript on the server!) before sending the result back to the client via a standard HTTP(S) request/response cycle.

I must admit, I didn’t quite believe this, however, Sandeep quickly setup some more demo code and showed us the application working fine with a JavaScript enabled browser, whereby each page would indeed be rendered by the JavaScript of the client-side SPA.  He then disabled JavaScript in his browser and re-loaded the “home” page of the application and was able to demonstrate still being able to navigate around the pages of the demo application just as smoothly (due to the pages having been rendered on the server and sent to the client without needing further JavaScript processing to render them).  This was truly mind blowing stuff.  Since the server-side NodeServices library wraps NodeJS on the server, we can write server-side C# code and call out to invoke functionality within any NodeJS package that might be installed.  This means that if we have a NodeJS package that, for example, renders some HTML markup and converts it to a PDF, we can generate that PDF with the Node package from C# code, sending the resulting PDF file to the client browser via a standard ASP.NET mechanism.  Sandeep wraps up his talk by sharing some links to find out more about JavaScriptServices and he also provides us with a link to his GitHub repository for the demo code he’s used in his session.
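Invoking a Node module from C# via NodeServices might look something like the following sketch. The controller shape and the htmlToPdf module are my own illustrative assumptions; only the INodeServices interface and InvokeAsync call come from the Microsoft.AspNetCore.NodeServices package:

```csharp
// Sketch of calling into a NodeJS module from C# via
// Microsoft.AspNetCore.NodeServices (package required; module name assumed)
using System.Threading.Tasks;
using Microsoft.AspNetCore.NodeServices;

public class PdfRenderer
{
    private readonly INodeServices _nodeServices;

    // INodeServices is registered in DI via services.AddNodeServices()
    public PdfRenderer(INodeServices nodeServices)
    {
        _nodeServices = nodeServices;
    }

    public async Task<byte[]> RenderPdfAsync(string html)
    {
        // Invokes a hypothetical ./Node/htmlToPdf.js module whose export
        // takes the HTML string and calls back with the PDF bytes
        return await _nodeServices.InvokeAsync<byte[]>("./Node/htmlToPdf", html);
    }
}
```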

IMG_20170506_130734After Sandeep’s talk, it was time to head back to the main conference area as it was time for lunch.  The food at DDDSW is usually something special, and this year was no exception.  Being in the South West of England, it’s obligatory to have some quintessentially South West food – Pasties!  A choice of Steak or Cheese and Onion pasty along with a packet of crisps, a chocolate bar and a piece of fruit and our lunches were quite a sizable portion of food.  After the attendees queued for the pasties and other treats, we all found a place to sit in the large communal area and ate our delicious lunch.

IMG_20170506_130906The lunch break at DDDSW was an hour and a half and usually has some lightning or grok talks that take place within one of the session rooms over the lunch break.  With the sixth form centre not being the largest of venues (some of the session rooms were very full during some sessions and could get quite hot) and the weather outside turning out to be a rather nice day and keeping us all very warm indeed, I decided that it would be nice to take a walk around outside to grab some fresh air and also to help work off a bit of the ample lunch I’d just enjoyed.

I decided a brief walk around the grounds of the St. Mary Redcliffe church would be quite pleasant and so off I went, capturing a few photos along the way.  After walking around the grounds, I still had some time to kill so decided to pop down to the river front to wander along there.  After a few more minutes, I stumbled across a nice old fashioned pub and so decided that a cheeky half pint of ale would go down quite nicely right about now!  I ordered my ale and sat down in the beer garden overlooking the river, watching the world go by as I quaffed!  A lovely afternoon lunch break it was turning out to be.

IMG_20170506_134245-EFFECTSAfter a little while longer it was time to head back to the Redcliffe sixth form centre for the afternoon’s sessions.  I made my way back up the hill, past the church and back to the sixth form centre.  I grabbed a quick coffee in the main conference hall area before heading off down the corridor to the session room for the first afternoon session.  This one was Joel Hammond-Turner’s “Service Discovery With Consul.io for the .NET Developer”.

IMG_20170506_143139Joel’s session is about the Consul.io software used for service discovery.  Consul.io is a distributed application that allows applications running on a given machine to query Consul for registered services and find out where those services are located, either on a local network or on the wider internet.  Consul is fully open source and cross-platform.  Joel tells us that Consul is a service registry – applications can call Consul’s API to register themselves as an available service at a known location – and it’s also service discovery – applications can run a local copy of Consul, which will synchronize with other instances of Consul on other machines in the same network to ensure the registry is up-to-date, and can query Consul for a given service’s location.  Consul also includes a built-in DNS server, a Key/Value store and a distributed event engine.  Since Consul requires all of these features itself in order to operate, they are exposed to the users of Consul for their own use.  Joel mentions how his own company, Landmark, is using Consul as part of a large migration project aimed at migrating a 30 year old C++ legacy system to a new event-driven distributed system written in C# and using ASP.NET Core.

Joel talks about distributed systems and how they’re often architected from a hardware or infrastructure perspective.  Often, due to each “layer” of the solution requiring multiple servers for failover and redundancy, you’ll need a load balancer to direct requests to this layer to a given server.  If you’ve got web, application and perhaps other layers, this often means load balancers at every layer.  This is not only potentially expensive – especially if your load balancers are separate physical hardware – it also complicates your infrastructure.  Using Consul can help to replace the majority of these load balancers, since Consul itself acts as a way to discover a given service and can, when querying Consul’s built-in DNS server, randomize the order in which multiple instances of a given service are reported to a client.  In this way, requests can be routed and balanced between multiple instances of the required service.  This means that load balancing is reduced to a single load balancer at the “border” or entry point of the entire network infrastructure.

Joel proceeds by showing us some demo code.  He starts by downloading Consul.  Consul runs from the command line.  It has a handy built-in developer mode where nothing is persisted to disk, allowing for quick and easy evaluation of the software.  Joel says how Consul's command line API is very similar to that of the source control software Git, where the consul command acts as an introducer to sub-commands that perform some specific function (i.e. consul agent -dev starts the consul agent service running in developer mode).

The great thing with Consul is that you’ll never need to know where on your network the Consul service itself is.  It’s always running on localhost!  The idea is that you have Consul running on every machine on localhost.  This way, any application can always access Consul by looking on localhost on the specific Consul port (8500 by default for the HTTP API).  Since Consul is distributed, every copy of Consul will synchronize with all others on the network, ensuring that the data is shared and registered services are known by each and every running instance of Consul.  The built-in DNS server will provide your consuming app with the ability to get the dynamic IP of a service by using a DNS name of something like [myservicename].service.consul.  If you've got multiple servers registered for that service (i.e. 3 different IPs that can provide the service), the built-in DNS service will automagically randomize the order in which those IP addresses are returned to DNS queries - automatic load-balancing for the service!

Consuming applications, perhaps written in C#, can use some simple code to both register as a service with Consul and also to query Consul in order to discover services:

public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory, IApplicationLifetime lifetime)
{
	loggerFactory.AddConsole(Configuration.GetSection("Logging"));
	loggerFactory.AddDebug();
	app.UseMvc();
	var serverAddressFeature = (IServerAddressesFeature)app.ServerFeatures.FirstOrDefault(f => f.Key == typeof(IServerAddressesFeature)).Value;
	var serverAddress = new Uri(serverAddressFeature.Addresses.First());
	// Register service with consul
	var registration = new AgentServiceRegistration()
	{
		ID = $"webapi-{serverAddress.Port}",
		Name = "webapi",
		Address = $"{serverAddress.Scheme}://{serverAddress.Host}",
		Port = serverAddress.Port,
		Tags = new[] { "Flibble", "Wotsit", "Aardvark" },
		Checks = new AgentServiceCheck[] {new AgentCheckRegistration()
		{
			HTTP = $"{serverAddress.Scheme}://{serverAddress.Host}:{serverAddress.Port}/api/health/status",
			Notes = "Checks /health/status on localhost",
			Timeout = TimeSpan.FromSeconds(3),
			Interval = TimeSpan.FromSeconds(10)
		}}
	};
	var consulClient = app.ApplicationServices.GetRequiredService<IConsulClient>();
	consulClient.Agent.ServiceDeregister(registration.ID).Wait();
	consulClient.Agent.ServiceRegister(registration).Wait();
	lifetime.ApplicationStopping.Register(() =>
	{
		consulClient.Agent.ServiceDeregister(registration.ID).Wait();
	});
}

Consul’s services can be queried based upon an exact service name or upon a tag (registered services can assign themselves multiple arbitrary tags upon registration to aid discovery).

As well as service registration and discovery, Consul also provides service monitoring.  This means that Consul itself will monitor the health of your service so that, should one of the instances of a given service become unhealthy or unavailable, Consul will prevent that instance’s IP address from being served up to consuming clients when queried.
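Querying for healthy instances from C# might look like the following sketch, using the same Consul NuGet client as the registration code above (the "webapi" name matches that registration; this needs a running local agent, so it's illustrative only):

```csharp
// Sketch of discovering healthy instances of the "webapi" service with the
// Consul NuGet package (assumes a local agent on the default HTTP port)
using System;
using System.Threading.Tasks;
using Consul;

public static class ServiceDiscovery
{
    public static async Task<Uri> FindWebApiAsync()
    {
        using var consulClient = new ConsulClient(); // defaults to http://localhost:8500

        // passingOnly: true filters out instances whose health checks are failing
        var result = await consulClient.Health.Service("webapi", tag: null, passingOnly: true);

        // Take the first healthy instance (Consul has already excluded unhealthy ones)
        var service = result.Response[0].Service;
        return new Uri($"{service.Address}:{service.Port}");
    }
}
```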

Joel now shares with us some tips for using Consul from within a .NET application.  He says to be careful of finding Consul-registered services with .NET’s built-in DNS resolver.  The reason for this is that .NET’s DNS resolver caches results very heavily, and it may serve up stale DNS data that is not up-to-date with the data inside Consul’s own DNS service.  Another thing to be aware of is that Consul’s Key/Value store will always store values as byte arrays.  This can sometimes be slightly awkward if we’re mostly storing strings, however, it’s trivial to write a wrapper to always convert from a byte array when querying the Key/Value store.  Finally, Joel tells us about a more advanced feature of Consul: “watches”.  These are effectively events that Consul will fire when Consul’s own data changes.  A good use for this would be to have some code that runs in response to one of these events and re-writes the border load balancer rules, providing a fully automatic means of keeping your network infrastructure up-to-date and discoverable.
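One possible shape for the byte-array wrapper Joel suggests, again using the Consul NuGet client (method names are my own; a running local agent is assumed):

```csharp
// A string-friendly wrapper over Consul's byte[]-based Key/Value store
// (Consul NuGet package; requires a running local agent)
using System.Text;
using System.Threading.Tasks;
using Consul;

public static class ConsulKvStrings
{
    public static async Task PutStringAsync(IConsulClient client, string key, string value)
    {
        // Consul stores values only as byte[]; encode on the way in...
        await client.KV.Put(new KVPair(key) { Value = Encoding.UTF8.GetBytes(value) });
    }

    public static async Task<string> GetStringAsync(IConsulClient client, string key)
    {
        // ...and decode on the way out (Response is null if the key is absent)
        var result = await client.KV.Get(key);
        return result.Response == null ? null : Encoding.UTF8.GetString(result.Response.Value);
    }
}
```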

In wrapping up, Joel shares a few links to his GitHub repositories for both his demo code used during his session and the slides.

IMG_20170506_154655After Joel’s talk it was time for another refreshment break.  This one, being the only one of the afternoon, was accompanied by another DDD South West tradition – cream teas!  These went down a storm with the conference attendees, and there were so many of them that many people were able to have seconds.  There was also some additional fruit, crisps and chocolate bars left over from lunch time, which made this particular break quite gourmand.

After a few cups of coffee which were required to wash down the excellent cream teas and other snack treats, it was time to head back down the corridor to the session rooms for the final session of the day.  This one was Naeem Sarfraz’s “Layers, Abstractions & Spaghetti Code: Revisiting the Onion Architecture”.

Naeem’s talk was a retrospective of the last 10+ years of .NET software development.  Not only a retrospective of his own career, but he posits that it’s quite probably a retrospective of the last 10 years of many people’s careers.  Naeem starts by talking about standards.  He asks us to cast our minds back to when we started new jobs earlier in our careers, when perhaps one of the first things we had to do was read page after page of “standards” documents for the team we’d joined: coding standards, architecture standards etc.  How times have changed – nowadays the focus is less on onerous documentation and more on expressive code that is easier to understand, as well as practices such as pair programming that allow new developers to get up to speed with a new codebase quickly without the need to read large volumes of documentation.

IMG_20170506_160127Naeem then moves on to show us some of the code that he himself wrote when he started his career around 10 years ago.  This is typical of a lot of code that many of us write when we first start developing, and has methods, called in response to some UI event, with business logic, database access and web code (redirections etc.) all combined in the same method!

Naeem says he quickly realised that this code was not great and that he needed to start separating concerns.  At this time, it seemed that N-Tier became the new buzzword and so newer code written was compliant with an N-Tier architecture.  A UI layer that is decoupled from all other layers and only talks to the Business layer which in turn only talks to the Data layer.  However, this architecture was eventually found to be lacking too.

Naeem stops and asks us about how we decompose our applications.  He says that most of the time we end up with technical designs taking priority over everything else.  We also often find that we start to model our software at the persistence (database) layer first; this frequently ends up bleeding into the other layers and affects their design.  This is great for us as techies, but it’s not the best approach for the business.  What we need to do is start to model our software on the business domain and leave technical designs for a later part of the modelling process.

Naeem shows us some more code from his past.  This one is slightly more separated, but still has user interface elements specifically designed so that rows of values from a grid of columns can be directly mapped to a DataTable type, allowing easy interaction with the database.  This is another example of a data-centric approach to the entire application design.

IMG_20170506_162337We then move on to look at another way of designing our application architecture.  Naeem tells us how he discovered the 4+1 approach, which has us examining the software we’re building by looking at it from different perspectives.  This helps to provide a better, more balanced view of what we’re seeking to achieve with its development.

The next logical step along the architecture path is that of Domain-Driven Design.  This takes the approach of placing the core business domain, described with a ubiquitous language – which is always in the language of the business domain, not the language of any technical implementation – at the very heart of the entire software model.  One popular way of using domain-driven design (DDD) is to adopt an architecture called “Onion architecture”.  This is also known as “Hexagonal architecture” or “Ports and adapters architecture”. 

We look at some more code, this time code that appears compliant with DDD.  However, on closer inspection, it really isn’t.  Naeem says how this code adopts all the right names for a DDD-compliant implementation; however, it’s only the DDD vernacular that’s been adopted and the code still has a very database-centric, table-per-class type of approach.  The danger here is that we can appear to be following DDD patterns, but we’re really just doing things like implementing excessive interfaces on all repository classes or implementing generic repositories without really separating and abstracting our code from the design of the underlying database.

Finally, we look at some more code, this time written by Greg Young and adopting a CQRS-based approach.  The demo code is called SimplestPossibleThing and is available at GitHub.  This code demonstrates a much cleaner and more abstracted approach to modelling the code.  By using a CQRS approach, reads and writes of data are separated, with the code implementing Commands – which are responsible for writing data – and Queries – which are responsible for reading data.  Finally, Naeem points us to a talk given by Jimmy Bogard which talks about architecting applications in “slices not layers”.  Each feature is developed in isolation from the others, and the feature development includes a “full stack” of development (domain/business layer, persistence/database layer and user interface layer) isolated to that feature.

IMG_20170506_170157After Naeem’s session was over, it was time for all the attendees to gather back in the main conference hall for the final wrap-up by the organisers and the prize draw.  After thanking the various people involved in making the conference what it is (sponsors, volunteers, organisers etc.) it was time for the prize draw.  There were some good prizes up for grabs, but alas, I wasn’t to be a winner on this occasion.

Once again, the DDDSW conference proved to be a huge success.  I’d had a wonderful day and now just had the long drive home to look forward to.  I didn’t mind, however, as it had been well worth the trip.  Here’s hoping the organisers do it all over again next year!

DDD North 2016 In Review

IMG_20161001_083505

On Saturday, 1st October 2016 at the University of Leeds, the 6th annual DDD North event was held.  After a great event last year, at the University of Sunderland in the North East, this year’s event was held in Leeds as is now customary for the event to alternate between the two locations each year.

After arriving and collecting my badge, it was a short walk to the communal area for some tea and coffee to start the day.  Unfortunately, there were no bacon butties or Danish pastries this time around, but I’d had a hearty breakfast before setting off on the journey to Leeds anyway.

The first session of the day was Pete Smith’s “The Three Problems with Software Development”.   Pete starts by talking about Conway’s Game of Life and how this game is similar to how software development often works, producing complex behaviours from simple building blocks.  Pete says how his talk will examine some “heuristics” for software development, a sort of “series of steps” for software development best practice.

IMG_20161001_093811Firstly, we look at the three problems themselves.  Problem Number 1 is about dividing and breaking down software components.  Pete tells us that this isn’t just code or software components themselves, but can also relate to people and teams and how they are “broken down”.  Problem Number 2 is how to choose effective tools, processes and approaches to your software development and Problem Number 3 is effective communication.

Looking at problem number 1 in more detail, Pete talks about “reasons for change”.  He says that we should always endeavour to keep things together that need to change together.  He shows an example of two simple web pages of lists of teachers and of students.  The ASP.NET MVC views’ mark-up for both of these pages is almost identical.  As developers we’d be very tempted to abstract this into a single MVC view and only alter, using variables, the parts that differ between teachers and students; however, Pete suggests that this is not a good approach.  Fundamentally, teachers and students are not the same thing, so even if the MVC views are almost identical and we have some amount of repetition, it’s good to keep them separate – for example, if we need to add specific abilities to only one of teachers or students, having two separate views makes that much easier.

Next we look at how we can best identify reasons for change.  We should look at what parts of an application get deployed together, and we should also look at the domain and the terminology used – are two different domain entities referred to by the same name?  Or are there two different names for the same entity?  We should consider the “ripple effect” of change – when something changes, what else has to change?  Finally, the main thing to examine is logic vs intent.  Logic is the code and behaviour and can (and should) be refactored and reused; however, intent should never be reused or refactored (in the previous example, the teachers and students were “intents” as they represent two entirely different things within the domain).

In looking at Problem Number 2 in more detail, Pete says that we should promote good change practices.  We should reduce coupling at all layers in the application and throughout the entire software development process, but we shouldn't over-abstract.  We need to have strong test coverage for this within the software itself.  Not necessarily 100% test coverage, but a good suite of robust tests.  Pete says that in large organisations we should try to align teams with the reasons for change; however, in smaller organisations this isn’t something you’d need to worry about too much as the team will be much smaller anyway.

Next, Pete makes the strong suggestion that the common advice that MVC controllers should do very little - something generally considered to be a good thing - is “considered harmful”!  What he really means is that blanket advice is considered harmful – controllers should, generally, do as little as they need to, but they can be larger if there's good reason for it.  When we’re making choices, it’s important to remain pragmatic rather than dogmatic.  Don’t forget about the trade-offs and don’t get taken in by the “new shiny” things.  Most importantly, when receiving advice, always remember the context of the advice.  Use the right tool for the job and always read differing viewpoints on any subject to gain a more rounded understanding of the problem.  Do test the limits of the approaches you take, learn from your mistakes and always focus on providing value.

In examining Problem Number 3, Pete talks about communication and how it’s often impaired due to the choice of language we use in software development.  He talks about using the same names and terminology for essentially different things.  For example, in the context of ASP.NET MVC, we have the notion of a “controller”, however, Angular also has the notion of a “controller” and they’re not the same thing.  Pete also states how terminology like “serverless architecture” is a misnomer as it’s not serverless and how “devops”, “agile” etc. mean different things to different people!  We should always say what we mean and mean what we say! 

Pete talks about how code is communication.  Code is read far more often than it’s written, so code should be optimised for reading.  Pete looks at some forms of communication and states that face-to-face communication, pair programming and even perhaps instant messaging are often better forms of communication than things like once-a-day stand-ups and email, because the best forms of communication offer instant feedback.  To improve our code’s communication, we should eliminate implicit knowledge – such as not refactoring those teacher and student views into one view.  New programmers would expect to be able to find something like a TeacherList.cshtml file within the solution.  Doing this helps to improve discoverability, enabling people who are new to a codebase to get up to speed more quickly.  Finally, Pete repeats his important point of focusing on refactoring the “logic” of the application and not the “intent”.

Most importantly, the best thing we can do to communicate better is to simply listen.  By listening more intently, we ensure that we have the correct information that we need and we can learn from the knowledge and experience of others.

IMG_20161001_103012After Pete’s talk it was time to head back to the communal area for more refreshments.  Tea, coffee, water and cans of coke were all available.  After suitable further watering, it was time to head back to the conference rooms for the next session.  This one was John Stovin’s “Thinking Functionally”.

John’s talk was held in one of the smaller rooms, which was also one of the rooms located farthest from the communal area.  After the short walk, I made it there with only a few seconds to spare prior to the start of the talk, and it was standing room only!

John starts his talk by mentioning how the leap from OO (Object-Oriented) programming to functional programming is similar to the leap from procedural programming to OO itself.  It’s a big paradigm shift!  John mentions how most of today’s non-functional languages are designed to closely mimic the way the computer itself processes machine code in the “von Neumann” style.  That is to say that programs are really just a big series of steps with conditions and branches along the way.  Functional programming helps in the attempt to break free from this by expressing programs as pure functions – a series of functions, similar to mathematical functions, that take an input and produce an output.

John mentions how, when writing functional programs, it’s important to try your best to keep your functions “pure”.  This means that the function should have no side-effects.  For example a function that writes something to the console is not pure, since the side-effect is the output on the console window.  John states that even throwing an exception from a function is a side-effect in itself!

IMG_20161001_104700

We should also endeavour to always keep our data immutable.  This means that we never try to assign a new value to a variable once it has already been initialized with a value – it’s a single assignment.  Write once but read many.  This helps us to reason about our data better as it improves readability and guarantees thread-safety of the data.  To change data in a functional program, we should perform an atomic “copy-and-modify” operation which creates a copy of the data,  but with our own changes applied.

In F#, variables are immutable by default, and F# forces you to use a qualifier keyword, mutable, in order to make a variable mutable.  In C#, however, we’re not so lucky.  We can “fake” this, though, by wrapping our data in a type (class) – e.g. a Money type – accepting values only in the type’s constructor and ensuring all properties are either read-only or at least have a private setter.  Class methods that perform some operation on the data should return a whole new instance of the type.
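The kind of immutable wrapper type John describes can be sketched in C# like this (the Money name and members here are illustrative, not code from the talk):

```csharp
using System;

// A minimal immutable Money type: all state is set once in the constructor,
// and "modifying" operations return a brand new instance (copy-and-modify).
public sealed class Money
{
    public decimal Amount { get; }   // read-only auto-property: no setter at all
    public string Currency { get; }

    public Money(decimal amount, string currency)
    {
        Amount = amount;
        Currency = currency;
    }

    // The original instance is never changed; a new one is returned instead.
    public Money Add(Money other)
    {
        if (other.Currency != Currency)
            throw new InvalidOperationException("Currency mismatch");
        return new Money(Amount + other.Amount, Currency);
    }
}
```

Because no code path can alter a Money after construction, instances can be freely shared between threads without locking.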

We move on to examine how functional programming eradicates nulls.  Variables have to be assigned a value at declaration, and since values can never be reassigned thanks to immutability, we can’t create a null reference.  We’re stuck with nulls in C#, but we can alleviate that somewhat via such techniques as the Null Object Pattern, or the use of an Option&lt;T&gt; type.  John continues, saying that types are fundamental to F#.  It has real tuples and records – “multiplicative” (product) types, effectively aggregates created by “multiplying” existing types together – and also discriminated unions, which are “additive” (sum) types created by “summing” other existing types together.  The multiplicative types combine other types – a tuple can contain two (or more) other types, e.g. a string and an int – while a discriminated union, as an additive type, represents the sum of its constituent types, so a discriminated union of an int and a boolean can hold either any of the possible values of an int or any of the possible values of a boolean.
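A bare-bones Option&lt;T&gt; for C# might look something like the following (a sketch of the general idea rather than any particular library’s implementation):

```csharp
using System;

// A minimal Option<T>: a value is either present (Some) or absent (None),
// and Match forces the caller to handle both cases explicitly - there is
// no null to forget about.
public readonly struct Option<T>
{
    private readonly T _value;
    public bool HasValue { get; }

    private Option(T value)
    {
        _value = value;
        HasValue = true;
    }

    public static Option<T> Some(T value) => new Option<T>(value);
    public static Option<T> None => default;

    // Both branches must be supplied; this replaces scattered null checks.
    public TResult Match<TResult>(Func<T, TResult> some, Func<TResult> none)
        => HasValue ? some(_value) : none();
}
```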

John continues with how far too much C# code is written using granular primitive types and that in F#, we’re encouraged to make all of our code based on types.  So, for example, a monetary amount shouldn’t be written as simply a variable of type decimal or float, but should be wrapped in a strong Money type, which can enforce certain constraints around how that type is used.  This is possible in C# and is something we should all try to do more of.  John then shows us some F# code declaring an F# discriminated union:

type Shape =
| Rectangle of float * float
| Circle of float

He states how this is similar to the inheritance we know in C#, but it’s not quite the same.  It’s more like set theory for types!

IMG_20161001_111727John continues by discussing pattern matching.  He says this is much richer in F# than the roughly equivalent if() or switch() statements in C#, as pattern matching can match based upon the general “shape” of the type.  We’re told how functional programming also favours recursion over loops.  F#’s compiler supports tail recursion, whereby it can rewrite a function to pass an accumulator parameter on the recursive call, negating the need to continually add accumulated values to the stack and thereby helping to prevent stack overflow problems.  Loops are problematic in functional programming as they need a loop counter variable which is traditionally reassigned on every iteration of the loop – something we can’t do in F# due to variable immutability.
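C# doesn’t guarantee tail-call elimination, but the accumulator-passing shape that makes tail recursion possible can still be illustrated (a hypothetical factorial, not an example from the talk):

```csharp
using System;

public static class Factorial
{
    // Naive recursion: each call leaves a pending multiplication on the
    // stack, so a very large n risks a stack overflow.
    public static long Naive(int n) => n <= 1 ? 1 : n * Naive(n - 1);

    // Accumulator-passing ("tail") form: the running total travels in a
    // parameter, so the recursive call is the last thing the function does
    // and nothing is left waiting on the stack. This is the shape the F#
    // compiler can turn into a simple loop.
    public static long TailStyle(int n, long acc = 1)
        => n <= 1 ? acc : TailStyle(n - 1, acc * n);
}
```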

We continue by looking at lists and sequences.  These are very well used data structures in functional programming.  Lists are recursive structures: a list is either an empty list or a “head” element with another list attached to it.  We iterate over the list by taking the “head” element with each pass – a bit like popping values off a stack.  Next we look at higher-order functions.  These are simply functions that take another function as a parameter, so, for example, virtually all of the LINQ extension methods in C# (e.g. .Where, .Select etc.) are higher-order functions, as they take a lambda function as a parameter.  F# has List and Seq, and the primary built-in functions for working with these are Filter() and Map() – also higher-order functions.  Filter takes a predicate with which to filter a list, and Map takes a function that transforms each list element from one type to another.
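The filter/map pipeline maps directly onto LINQ; a small illustrative example (names are my own, not from the talk):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Where and Select are higher-order functions: each takes another function
// (a predicate, or a transform) as its parameter.
public static class Pipeline
{
    public static List<string> EvenSquaresAsText(IEnumerable<int> numbers)
        => numbers
            .Where(n => n % 2 == 0)           // like F#'s List.filter: keep matching items
            .Select(n => (n * n).ToString())  // like F#'s List.map: transform each item
            .ToList();
}
```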

John goes on to mention Reactive Extensions for C# which is a library for composing asynchronous and event-based programs using observable sequences and LINQ-style query operators.  These operators are also higher-order functions and are very “functional” in their architecture.  The Reactive Extensions (Rx) allow composability over events and are great for both UI code and processing data streams.

IMG_20161001_113323John then moves on to discuss Railway-oriented programming.  This is a concept whereby every function both accepts and returns a Result&lt;TSuccess, TFailure&gt; type, which “contains” either a success value or a failure.  Functions are then composable based upon the types returned, and the execution path through the code can change based upon the outcome of prior functions.
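A minimal Result type with a Bind operation might be sketched like this in C# (an illustrative sketch; the concept originates in F#, where Result is a discriminated union):

```csharp
using System;

// Railway-oriented programming: every step returns a Result that is either
// a success carrying a value or a failure carrying an error. Bind chains
// steps together and short-circuits onto the "failure track" as soon as
// any step fails.
public class Result<TSuccess, TFailure>
{
    public TSuccess Value { get; }
    public TFailure Error { get; }
    public bool IsSuccess { get; }

    private Result(TSuccess value, TFailure error, bool ok)
    {
        Value = value;
        Error = error;
        IsSuccess = ok;
    }

    public static Result<TSuccess, TFailure> Ok(TSuccess value)
        => new Result<TSuccess, TFailure>(value, default, true);

    public static Result<TSuccess, TFailure> Fail(TFailure error)
        => new Result<TSuccess, TFailure>(default, error, false);

    // On the success track, run the next step; on the failure track,
    // carry the existing error straight through.
    public Result<TNext, TFailure> Bind<TNext>(Func<TSuccess, Result<TNext, TFailure>> next)
        => IsSuccess ? next(Value) : Result<TNext, TFailure>.Fail(Error);
}
```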

Using such techniques as Railway-oriented programming, along with the other inherent features of F#, such as a lack of null values and immutability means that frequently programs are far easier to reason about in F# than the equivalent program written in C#.  This is especially true for multi-threaded programs.

Finally, John recaps by stating that functional languages give a level of abstraction above the von Neumann architecture of the underlying machine.  This is perhaps one of the major reasons that FP has gained ground in recent years, as machines are now powerful enough to allow this (previously, old-school LISP programs – LISP being one of the very first functional languages, originally designed back in 1958 – often required purpose-built machines to run sufficiently well).  John recommends a few resources for further reading – one is the F# for Fun and Profit website.

After John’s session, it was time for a further break and additional refreshment.  Since John’s session had been in a small room, one of the farthest away from the communal area where the refreshments were, and given that my next session was in this very same conference room, I decided that I’d stay where I was and await the next session, which was Matteo Emili’s “I Read The Phoenix Project And I Loved It. Now What?”

IMG_20161001_115558

Matteo’s session was all about introducing a “devops” culture into somewhere that doesn’t yet have one.  The Phoenix Project is a development “novel” which tells the story of doing just such a thing.  Matteo starts by mentioning The Phoenix Project book and how it’s a great book.  I must concur that the book is very good, having read it myself only a few weeks before attending DDD North.  Matteo then asks that, if we’ve read the book and would like to implement its ideas in our own places of work, we be very careful.  It’s not so simple, and you can’t change an entire company overnight, but you can start to make small steps towards the end goal.

There are three critical problems that cause failure and a breakdown of an effective devops culture: bottlenecks, lack of communication and boundaries between departments.  In order to start introducing a devops culture, you need to start “out-of-band”.  This means you’ll need to do something yourself, without the backing of your team, in order to prove a specific hypothesis.  Only when you’re sure something will work should you then introduce the idea to the team.

Starting with bottlenecks, the best way to eliminate them is to automate everything that can be automated.  This reduces human error, is entirely repeatable and, importantly, frees up time and people for other, more important, tasks.  Matteo reminds us that we can’t change what we can’t measure, and in the “build-measure-learn” loop, the most important aspect is measure.  We measure by gathering metrics on our automations and our process using logging and telemetry, and it’s only from these metrics that we’ll know whether we’re really heading in the right direction and what is really “broken” or needs improvement.  We should gather insights from our users as well, by utilising tools such as Google Analytics, New Relic, Splunk & HockeyApp.  Doing this leads to evidence-based management, allowing you to use real-world numbers to drive change.

IMG_20161001_120736

Matteo explains that resource utilisation is key.  Don’t bring a whole new change management process out of the blue.  Use small changes that generate big wins and this is frequently done “out-of-band”.  One simple thing that can be done to help break down boundaries between areas of the company is a company-wide “stand up”.  Do this once a week, and limit it to 1-2 minutes per functional area.  This greatly improves communication and helps areas understand each other better.  The implementation of automation and the eradication of boundaries form the basis of the road to continuous delivery. 

We should ensure that our applications are properly packaged to allow such automation; MSDeploy is one tool to help enable this.  It’s an older technology, but it’s seeing a modern resurgence as it can be heavily utilised with Azure.  Use an infrastructure-as-code approach: virtual machines, servers, network topology etc. should all be scripted and version controlled, which enables automation.  This is fairly easy to achieve with cloud-based infrastructure by using Azure Resource Manager (ARM) templates, or AWS CloudFormation with Amazon Web Services.  Some options for achieving the same thing with on-premise infrastructure are Chef, Puppet or even PowerShell Desired State Configuration.  Databases are often neglected in DevOps scenarios; however, using version control, performing small, incremental changes to database infrastructure and using packages (such as SQL Server’s DACPAC files) can help to move database lifecycle management into a DevOps/continuous delivery environment.

This brings us to testing.  We should use test suites to ensure our scripts and automation are correct, and we must remember the golden rule: if something is going to fail, it must fail fast.  Automated and manual testing should be used to ensure this.  Accountability is important, so tests are critical to the product, and remediation (recovery from failure) should also be automated.

Matteo summarises: start by changing people first, then change the processes, and the tools will follow.  Remember: automation, automation, automation!  Finally, tackle the broader technical side and blend individual competencies to the real-world requirements of the teams and the overall business.

IMG_20161001_083329After Matteo’s session, it was time for lunch.  All of the attendees reconvened in the communal area where we were treated to a selection of sandwiches and packets of crisps.  After selecting my lunch, I found a vacant spot in the corner of the rather small communal area (which easily filled to capacity once all of the different sessions had finished and all of the conference’s attendees descended on the same space) to eat it.  Since the lunch break was 1.5 hours and I’d eaten my lunch within the first 20 minutes, I decided to step outside to grab some fresh air.  It was at this point I remembered a rather excellent little pub just 2 minutes’ walk down the road from the university venue hosting the conference.  Well, never one to pass up the opportunity of a nice pint of real ale, I headed off down the road to The Pack Horse.

IMG_20161001_133433Once inside, I treated myself to a lovely pint of Laguna Seca from a local brewery, Burley Street Brewhouse, and settled down in the quiet pub to enjoy my pint and reflect on the morning’s sessions.  During the lunch break, there are usually some grok talks being held – 10-15 minute “lightning” talks which attendees can watch whilst they enjoy their lunch.  Since DDD North was held very close to the previous DDD Reading event (only a matter of weeks apart), and since the organisers were largely the same for both events, I had heard that the grok talks would be largely the same as those I’d already seen at DDD Reading.  Due to this, I decided the pub was the more attractive option over the lunch break!

After slowly drinking and savouring my pint, it was time to head back to the university’s mechanical engineering department and to the afternoon sessions of DDD North 2016.

The afternoon’s first session was, luckily, in one of the “main” lecture halls of the venue, so I didn’t have too far to travel to take my seat for Bart Read’s “How To Speed Up .NET & SQL Server Apps”.

Bart’s session is all about performance – performance of our application’s code and of the databases that underlie our application.  Bart starts by introducing himself and states that, amongst other things, he was previously an employee of Red Gate, who make quite a number of SQL Server tools, so paying close attention to performance monitoring is something that Bart has done for much of his career.

IMG_20161001_142359He states that we need to start with measurement.  Without this, we can’t possibly know where issues are occurring within our application.  Surprisingly, Bart does say that when starting to measure a database-driven application, many of the worst areas are not within the code itself, and are almost always down in the database layer.  This may be from an errant query or general lack of helpful database additions (better indexes etc.)

Bart mentions the tools that he himself uses as part of his general “toolbox” for performance analysis of an application stack.  ANTS Memory Profiler from Red Gate will help analyse memory consumption issues; dotMemory from JetBrains is another good choice in the same area.  ANTS Performance Profiler from Red Gate will help analyse the performance of .NET code and monitor its CPU consumption; again, JetBrains have dotTrace in the same space.  There’s also the lesser-known .NET Memory Profiler, which is a good option.  For network monitoring, Bart uses Wireshark.  For general testing tools, Bart recommends BlazeMeter (for load testing) and Neustar.

Bart also stresses the importance of the ongoing usage of production monitoring tools.  Services such as New Relic, AppDynamics etc. can provide ongoing metrics for your running application when it’s live in production and are invaluable to understand exactly how your application is behaving in a production environment.

arithabortBart shares a very handy tip regarding the use of SQL Server Management Studio for general debugging of SQL Server queries.  He states that we should always UNCHECK the SET ARITHABORT option inside SSMS’s options menu.  SSMS turns this setting on by default, whereas most client libraries (such as ADO.NET) leave it off, and because SQL Server caches execution plans per set of connection settings, a query run from SSMS can otherwise use a completely different cached plan from the one your application uses.  Unchecking the option means the query you run in SSMS behaves like the application’s query, giving you a much clearer picture of what the query is actually doing (and how long it really takes to run).

From here, Bart shares with us four different real-world performance scenarios that he has been involved in, how he went about diagnosing the performance issues and how he fixed them.

The first scenario involved helping a client’s customer support team who were struggling, as it was taking them 40 seconds to retrieve a customer’s details from their system whilst on a support phone call.  The application was an ASP.NET MVC web application in C#, using NHibernate to talk to two different SQL Server instances – one a primary server, the other a linked server.

Bart started by using ANTS Performance Profiler on the web layer and was able to highlight “hotspots” of slow running code, precisely in the area where the application was calling out to the database.  From here, Bart could see that one of the SQL queries was taking 9 seconds to complete.  After capturing the exact SQL statement that was being sent to the database, it was time to fire up SSMS and use SQL Server Profiler in order to run that SQL statement and gain further insight into why it was taking so long to run.

IMG_20161001_144719After some analysis, Bart discovered that there was a database View on the primary SQL Server that was pulling data from a table on the linked server.  Further, there was no filtering on the data pulled from the linked server, only filtering on the final result set after multiple tables of data had been combined.  This meant that the entire table’s data from the linked server was being pulled across the network to the primary server before any filtering was applied, even though not all of the data was required (the filtering discarded most of it).  To resolve the problem, Bart added a simple WHERE clause to the data that was being selected from the linked server’s table and the execution time of the query went from 9 seconds to only 100 milliseconds!

Bart moves on to tell us about the second scenario.   This one had a very similar application architecture as the first scenario, but the problem here was a creeping increase in memory usage of the application over time.  As the memory increased, so the performance of the application decreased and this was due to the .NET garbage collector having to examine more and more memory in order to determine which objects to garbage collect.  This examination of memory takes time.  For this scenario, Bart used ANTS Memory Profiler to show specific objects that were leaking memory.  After some analysis, he found it was down to a DI (dependency injection) container (in this case, Windsor) having an incorrect lifecycle setting for objects that it created and thus these objects were not cleaned up as efficiently as they should have been.  The resolution was to simply configure the DI container to correctly dispose of unneeded objects and the excessive memory consumption disappeared.

IMG_20161001_150655From here, we move on to the third scenario.  This was a multi-tenanted application where each customer had their own database.  It was an ASP.NET web application but used a custom ADO layer written in C++ to access the database.  Bart spares us the details, but tells us that the problem was ultimately down to locking, blocking and deadlocking in the database.  Bart uses this to remind us of the various concurrency levels in SQL Server.  There’s object-level concurrency and row-level concurrency, and when many people are trying to read a row that’s concurrently being written to, deadlocks can occur.  There are many different solutions available for this, one of which is to use the READ COMMITTED SNAPSHOT isolation level on the database.  This uses TempDB to store row versions, helping to “scale” the demands against the database, so it’s important that TempDB is stored on a fast storage medium (a fast SSD drive, for example).  The best solution is a more disciplined ordering of object access, usually implemented with a Unit Of Work pattern, but Bart tells us that this is difficult to achieve with SQL Server.

Finally, Bart tells us all about scenario number four.  The fundamental problem here was networking – more specifically, network latency that was killing the application’s performance.  The application architecture was not the problem: the application was running on virtual machines on VMware’s vSphere with lots of CPU and memory to spare, and the SQL Server was running on bare metal to ensure performance of the database layer.  Bart noticed that the problem manifested itself when certain queries were run.  Most of the time, the query would complete in less than 100ms, but occasionally spikes of 500-600ms could be seen when running the exact same query.  To diagnose the issue, Bart used Wireshark on both ends of the network – that is to say, on the application server where the query originated and on the database server where the data was stored.  However, as soon as Wireshark was attached to the network, the performance problem disappeared!

This ultimately turned out to be an incorrect setting on the virtual NIC: Bart could see that the SQL Server was sending results back to the client in only 1ms, yet it took a full 500ms for the results to arrive when measured from the client (application) side of the network link.  It was disabling the “receive side coalescing” setting that fixed the problem.  Wireshark itself temporarily disables this setting, hence the problem disappearing when Wireshark was attached.

IMG_20161001_152003Bart finally tells us that whilst he’s mostly a server-side performance guy, he’s made some general observations about dealing with client-side performance problems.  These are generally down to size of payload, chattiness of the client-side code, garbage collection in JavaScript code and the execution speed of JavaScript code.  He also reminds us that most performance problems in database-driven applications are usually found at the database layer, and can often be fixed with simple things like adding more relevant indexes, adding stored procedures and utilising efficient cached execution plans.

After Bart’s session, it was time for a final refreshment break before the final session of the day.  For me, the final session was Gary McClean Hall’s “DDD: the God That Failed”.

Gary starts his session by acknowledging that the title is a little clickbait-ish, as his talk started life as a blog post he had previously written.  His talk is all about Domain-Driven Design (DDD) and how he implemented DDD when he was working within the games industry.  Gary mentions that he’s the author of the book “Adaptive Code via C#” and that, when he was working in the games industry, he had worked on the Championship Manager 2008 game.

Gary’s usage of DDD in game development started when there was a split between two companies involved in the Championship Manager series of games.  In the fallout of the split, one company kept the rights to the name, and the other company kept the codebase!  Gary was with the company that had the name but no code, and they needed to re-create the game – which had previously taken many years to develop – in a very compressed timescale of only 12 months.

IMG_20161001_155048Gary starts with a definition of DDD: it is modelling for complicated domains.  Gary is keen to stress the word “complicated”.  Therefore, we need to be able to identify what exactly makes a complicated domain.  In order to help with this, it’s often best to create a “DDD Maturity Model” for the domain in which we’re working.  This is a series of topics which can be further expanded upon with the specifics for that topic (if any) within our domain.  The topics are:

The Domain
Domain Entity Behaviour
Decoupled Domain
Aggregate Roots
Domain Events
CQRS
Bounded Contexts
Polyglotism

By examining the topics in the list above and determining the details for those topics within our own domain, we can evaluate our domain and its relative complexity, and thus its suitability to be modelled using DDD.

IMG_20161001_155454Gary continues by showing us a typical structure of a Visual Studio solution that purports to follow the Domain-Driven Design pattern.  He states that he sees many solutions configured this way, but it’s not really DDD and usually represents a very anaemic domain.  Anaemic domain models are collections of classes that are usually nothing more than properties with getters and setters, but little to no behaviour.  This type of model is considered an anti-pattern as it offers very low cohesion and high coupling.

If you’re working with such a domain model, you can start to fix things.  Looking for areas of the domain that can benefit from better types rather than using primitive types is a good start.  A classic example of this is a class to represent money.  Having a “money” class allows better control over the scale of the values you’re dealing with and can also encompass currency information as well.  This is preferable to simply passing values around the domain as decimals or ints.

Commonly, in the type of anaemic domain model as detailed above, there are usually repositories associated with entity models within the domain, and it’s usually a single repository per entity model.  This is also considered an anti-pattern as most entities within the domain will be heavily related and thus should be persisted together in the same transaction.  Of course, the persistence of the entity data should be abstracted from the domain model itself.

Gary then touches upon an interesting subject: decoupling within a DDD solution.  Our ASP.NET views have ViewModels, our domain has its domain models and the persistence (data) layer has its own data models.  One frequent piece of code plumbing required here is extensive mapping between the various models throughout the layers of the application.  In this regard, Gary suggests reading Mark Seemann’s article, “Is layering worth the mapping?”, in which Mark suggests that the best way to avoid extensive mapping is to move less data around between the layers of our application.  This can sometimes be accomplished but, depending upon the nature of the application, can be difficult to achieve.

IMG_20161001_160741_1So, looking back at the “repository-per-entity” model again, we’re reminded that it’s usually the wrong approach.  In order to determine the repositories of our domain, we need to examine the domain’s “aggregate roots”.  An aggregate root is the top-level object that “contains” other child objects within the domain.  So, for example, a class representing a Customer could be an aggregate root.  Here, the customer would have zero, one or more Order classes as children, and each Order class could have one or more OrderItems as children, with each OrderItem linking out to a Product class.  It’s also possible that the Product class could be considered an aggregate root of the domain too, as the product could be the “root” object retrieved within the domain, with the various order items across multiple orders for many different customers retrieved as part of the product’s object graph.
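The Customer aggregate described above might be sketched like this (hypothetical class names, purely to illustrate the shape):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Customer is the aggregate root: all changes to its orders go through it,
// and the whole aggregate is loaded and persisted together as one unit.
public class Customer
{
    private readonly List<Order> _orders = new List<Order>();
    public IReadOnlyList<Order> Orders => _orders;

    public Order PlaceOrder()
    {
        var order = new Order();
        _orders.Add(order);
        return order;
    }

    public decimal TotalSpent() => _orders.Sum(o => o.Total());
}

public class Order
{
    private readonly List<OrderItem> _items = new List<OrderItem>();
    public IReadOnlyList<OrderItem> Items => _items;

    public void AddItem(string productName, decimal price)
        => _items.Add(new OrderItem(productName, price));

    public decimal Total() => _items.Sum(i => i.Price);
}

public class OrderItem
{
    public string ProductName { get; }
    public decimal Price { get; }

    public OrderItem(string productName, decimal price)
    {
        ProductName = productName;
        Price = price;
    }
}
```

A matching repository would then load and save a Customer (with all of its orders and items) as a whole, rather than exposing a separate repository per entity.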

To help determine the aggregate roots within our domain, we first need to examine and determine the bounded contexts.  A bounded context is a conceptually related set of objects within the domain that work together and make sense for some of the domain’s behaviours.  For example, the Customer, Order, OrderItem and Product classes above could be considered part of a “Sales” context within the domain.  It’s important to note that a single domain entity can exist in more than one bounded context, and it’s frequently the case that the actual objects in code that represent that domain entity are entirely different classes from one bounded context to the next.  For example, within the Sales bounded context it’s possible that only a small subset of the product data is required, so the Product class within the Sales bounded context has far fewer properties/data than the Product class in a different bounded context – for example, there could be a “Catalogue” context with a Product entity as its aggregate root, but this Product object is different from the previous one and contains significantly more properties/data.

IMG_20161001_161509The number of different bounded contexts you have within your domain determines the domain’s breadth.  The size of each bounded context (i.e. the number of related objects within it) determines the domain’s depth.  The depth of a given bounded context indicates the importance of that area of the domain to the users of the application.

Bounded contexts, and the aggregate roots within them, will need to communicate with one another in order that behaviour within the domain can be implemented.  It’s important to ensure that aggregate roots and especially bounded contexts are not coupled to each other, so communication is performed using domain events.  A domain event is raised by one aggregate root or bounded context and broadcast to the rest of the domain.  Entities within other bounded contexts or aggregate roots subscribe to the domain events they’re interested in, in order to respond to actions and behaviour in other areas of the domain.  Domain events in a .NET application are frequently modelled using the built-in events and delegates functionality of the .NET framework, although other options are available, such as the Reactive Extensions library, as well as specific implementation patterns.
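Using plain .NET events and delegates, a domain event crossing from one context to another might look like this (the Sales/Shipping names are hypothetical):

```csharp
using System;

// The event itself: an immutable record of something that happened.
public class OrderPlacedEventArgs : EventArgs
{
    public decimal Total { get; }
    public OrderPlacedEventArgs(decimal total) { Total = total; }
}

// Sales context: raises the event but knows nothing about who listens.
public class SalesOrders
{
    public event EventHandler<OrderPlacedEventArgs> OrderPlaced;

    public void PlaceOrder(decimal total)
        => OrderPlaced?.Invoke(this, new OrderPlacedEventArgs(total));
}

// Shipping context: subscribes to the event; neither context references
// the other's internal types directly.
public class ShippingHandler
{
    public int OrdersSeen { get; private set; }

    public void Attach(SalesOrders sales)
        => sales.OrderPlaced += (sender, e) => OrdersSeen++;
}
```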

IMG_20161001_161830

One difficult area of most applications, and one where the “pure” DDD model may break down slightly, is search.  Many applications require the ability to search across data within the domain, and search is frequently a cross-cutting concern, as the result data returned can be small amounts of data from many different aggregates and bounded contexts combined into one amalgamated data set.  One approach that can be used to mitigate this is the CQRS (Command Query Responsibility Segregation) pattern.

Essentially, this pattern states that the models and code we use to read data do not necessarily have to be the same models and code we use to write data.  In fact, most of the time, they should be different.  In the case of a search across disparate data within the DDD-modelled domain, it’s absolutely fine to forego the strict DDD model and create a specific “view” – this could be a database stored procedure or a database view – that retrieves exactly the cross-cutting data you need.  Doing this avoids using the DDD model to create and hydrate entire aggregate root object graphs (possibly across multiple bounded contexts), which could be a very expensive operation given that most of the retrieved data wouldn’t be required.
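A small sketch of that read side, using an in-memory SQLite database (the schema, view name and `search_orders` function are all assumptions made up for illustration): the query bypasses the domain model entirely and returns flat rows from a denormalised view.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER, customer_name TEXT);
    CREATE TABLE order_items (order_id INTEGER, product_name TEXT, quantity INTEGER);
    -- A denormalised read-side view cutting across the write-side tables:
    CREATE VIEW order_search AS
        SELECT o.id, o.customer_name, i.product_name, i.quantity
        FROM orders o JOIN order_items i ON i.order_id = o.id;
""")
conn.execute("INSERT INTO orders VALUES (1, 'Alice')")
conn.execute("INSERT INTO order_items VALUES (1, 'Widget', 3)")

def search_orders(term):
    """Query side: return plain tuples from the view,
    with no Customer/Order/Product aggregates hydrated at all."""
    return conn.execute(
        "SELECT customer_name, product_name, quantity FROM order_search "
        "WHERE product_name LIKE ?",
        (f"%{term}%",),
    ).fetchall()
```

The write side would still go through the aggregates to enforce invariants; only the read side takes this shortcut.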

Gary reminds us that DDD aggregates can still be painful when using a relational database as the persistence store, due to the impedance mismatch between the domain models in code and the tables within the database.  It’s worth examining document databases or graph databases as the persistence store, as these can often be a better choice.

Finally, we learn that DDD is frequently not justified in applications that are largely CRUD based or for applications that make very extensive use of data queries and reports (especially with custom result sets).  Therefore, DDD is mostly appropriate for those applications that have to model a genuinely complex domain with specific and complex domain objects and behaviours and where a DDD approach can deliver real value.

IMG_20161001_165949After Gary’s session was over, it was time for all of the attendees to gather in the largest of the conference rooms for the final wrap-up.  There were only a few prize give-aways on this occasion, and after those were awarded to the lucky attendees who had their feedback forms drawn at random, it was time to thank the great sponsors of the event, without whom there simply wouldn’t be a DDD North.

I’d had a great time at yet another fantastic DDD event, and am already looking forward to the next one!