DDD South West 6 In Review
This past Saturday, 25th April 2015, saw the 6th annual DDD South West event, this year held at the Redcliffe Sixth Form Centre in Bristol.
This was my very first DDD South West event; I’d previously travelled south for the two DDD East Anglia events, but never to the south west for this one.
I’d travelled down south on the Friday evening before the event, staying in a Premier Inn in Gloucester. This enabled me to have only a relatively short drive on the Saturday to get to Bristol and the DDD South West event. After a restful night’s sleep in Gloucester, I started off on the journey to Bristol, arriving at one of the recommended car parks only a few minutes’ walk away from the DDDSW venue.
Upon arrival at the venue, I checked myself in and proceeded up the stairs to what is effectively the Sixth Form “common room”. This was the main hall for the DDDSW event and where all the attendees would gather, have teas, coffees & snacks throughout the day.
Well, as is customary, the first order of business is breakfast! Thanks to the generous sponsors of the event, we had ample amounts of tea, coffee and delicious danish pastries for our breakfast! (Surprisingly, these delicious pastries managed to last through from breakfast to the first (and possibly second) tea-break of the day!)
After breakfast there was a brief introduction from the organisers as to the day’s proceedings. All sessions would be held in rooms on the second floor of the building, and all breaks, lunch and the final gathering for the customary prize draw would be held in the communal common room. This year’s DDDSW had 4 main tracks of sessions, with a further 5th track which was the main sponsor’s track. This 5th track only had two sessions throughout the day whilst the other 4 had 5 sessions each.
The first session of the day for me was “Why Service Oriented Architecture?” by Sean Farmar.
Sean starts his talk by mentioning how "small monoliths" of applications can, over time and after many tweaks to functionality, become large monoliths and a maintenance nightmare, which is both a high risk to the business and a source of changes that are difficult to make and can have unforeseen side-effects. When we’ve created a large monolith of an application, we’re frequently left with a “big ball of mud”.
Sean talks about one of his first websites that he created back in the early 1990s. It had around 5000 users, which by the standards of the day was a large number. Both the internet and the web have grown exponentially since then, so 5000 users is very small by today’s standards. Sean states that we can take those numbers and “add two noughts to the end” to get a figure for a large number of users today. Due to this scaling of the user base, our application needs to scale too, but if we start on the path of creating that big ball of mud, we’ll simply create it far quicker today than we’ve ever done in the past.
Sean continues by stating that, after we learn from our mistakes with the monolithic big ball of mud, we usually move to web services. We break a large monolith into much smaller monoliths; however, these web services then need to talk both to each other and to the consumers of the web service. For example, the sales web service has to talk to the user web service, which then possibly has to talk to the credit web service in order to verify that a certain user can place an order of a specific size. However, this creates dependencies between the various web services, and each service becomes coupled in some way to one or more other services. This coupling is a bad thing which prevents the individual web services from being able to exist and operate without the other web services upon which they depend.
From here, we often look to move towards a Service Oriented Architecture (SOA). SOA’s core tenets are geared around reducing this coupling between our services.
Sean mentions the issues with coupling:
- Afferent (dependents) & Efferent (depends on) – Afferent coupling describes the other services that depend upon a given service, whilst efferent coupling describes the services that the given service itself depends upon.
- Temporal (time, RPC) – This is mostly seen in synchronous communications – like when a service performs a remote procedure call (RPC) to another service and has to wait for the response. The time taken to deliver the response is temporal coupling of those services.
- Spatial (deployment, endpoint address) – Sean explains this by talking about having numerous copies of (for example) a database connection string in many places. A change to the database connection string can cause redeployments of complete parts of the system.
After looking at the problems with coupling, Sean moves on to looking at some solutions. If we use XML (or even JSON) over the wire, along with XSD (or JSON Schema), we can define our messages and their transport using industry standards, allowing full interoperability. To overcome the temporal coupling problems, we should use a publisher/subscriber (pub/sub) communication mechanism. Publishers do not need to know the exact receivers of a message; it’s the subscriber’s responsibility to listen and respond to the messages it is interested in when the publisher publishes them. To overcome the spatial issues, we can most often use a central message queue or service bus. This allows publishers and subscribers to communicate with each other without hard references to each other’s location on the network; both only need to communicate with the single message bus endpoint. This frees our application code from ever knowing who (or where) we are “talking to” when sending a command or event message to some other service within the system, pushing these issues down to being an infrastructure rather than an application-level concern. Usage of a message bus also gives us durability (persistence) of our messages, meaning that even if a service is down and unavailable when a particular event is raised, the service can still receive and respond to the event when it becomes available again at a later time.
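As a rough illustration of the decoupling that pub/sub gives us, here is my own minimal in-process sketch (not a real service bus, and all the type names are hypothetical): the publisher and subscriber only ever know about the bus, never about each other.

```csharp
using System;
using System.Collections.Generic;

// Minimal in-process pub/sub sketch. A real service bus would add
// durability, queues and cross-process transport on top of this idea.
public class SimpleMessageBus
{
    private readonly Dictionary<Type, List<Action<object>>> _subscribers =
        new Dictionary<Type, List<Action<object>>>();

    public void Subscribe<TMessage>(Action<TMessage> handler)
    {
        if (!_subscribers.ContainsKey(typeof(TMessage)))
            _subscribers[typeof(TMessage)] = new List<Action<object>>();
        _subscribers[typeof(TMessage)].Add(m => handler((TMessage)m));
    }

    public void Publish<TMessage>(TMessage message)
    {
        // The publisher neither knows nor cares who (if anyone) is listening.
        List<Action<object>> handlers;
        if (_subscribers.TryGetValue(typeof(TMessage), out handlers))
            foreach (var handle in handlers)
                handle(message);
    }
}
```

A subscriber would call something like `bus.Subscribe<OrderPlaced>(e => ...)` and a publisher `bus.Publish(new OrderPlaced())`, with `OrderPlaced` being a hypothetical event class.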
Sean then shows us a diagram of a typical n-tier architecture system. He mentions how “wide” the diagram is and how each “layer” of the application spans the full extent of that part of the system (i.e. the UI layer is a complete layer that contains all of the UI for the entire system). All of these wide horizontal layers are dependent upon the layer above or beneath them.
Within a SOA architecture, we attempt to take this n-tier design and “slice” the diagram vertically. Each of our smaller services therefore contains all of the layers - a service endpoint, business logic, data access layer and database - in thin vertical slices for specific, focused areas of functionality.
Sean remarks that if we're going to build this kind of system, or modify an existing n-tier system into these vertical slices of services, we must start at the database layer and separate that out. Databases have their own transactions, which in a large monolithic DB can lock the whole DB, locking up the entire system. This must be avoided at all costs.
Sean continues to talk about how our services should be designed. Our services should be very granular, i.e. we shouldn't have an "UpdateUser" method that performs creation and updates of all kinds of properties of a "User" entity; we should have separate "CreateUser", "UpdateUserPassword" and "UpdateUserPhoneNumber" methods instead. The reason is that, during maintenance, constantly extending an "UpdateUser" method will force it to take more and more arguments and parameters, and it will grow extensively in lines of code as it tries to handle more and more properties of a “user” entity, thus becoming unwieldy. A simpler "UpdateUserPassword" is sufficiently granular that it'll probably never need to change over its lifetime and will only ever require 1 or 2 arguments/parameters.
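The contrast Sean describes can be sketched like this (my own illustrative method signatures, not code from the talk):

```csharp
// A catch-all method tends to accumulate parameters with every new
// requirement, and its body grows to match:
public void UpdateUser(Guid userId, string password, string phoneNumber,
                       string email, string address /* ...and growing */)
{
    // ever-expanding conditional logic to work out what actually changed
}

// Granular methods stay small, stable and self-describing:
public void UpdateUserPassword(Guid userId, string newPassword) { /* ... */ }

public void UpdateUserPhoneNumber(Guid userId, string newPhoneNumber) { /* ... */ }
```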
Sean then asks how many arguments our methods should take. He says his own rule of thumb for the maximum number of arguments to any method is 2. Once you find yourself needing 3 arguments, it's time to re-think, break up the method and create a new one. By slicing the system vertically we do end up with many, many methods; however, each of these methods is very small, very simple and very specific, with its own individual concerns.
Next we look at synchronous vs asynchronous calls. Remote procedure calls (RPC) will usually block and wait as one service waits for a reply from another. This won’t scale in production to millions of users. We should use the pub/sub mechanism which allows for asynchronous messaging allowing services that require data from other services to not have to wait and block while the other service provides the data, it can subscribe to a message queue and be notified of the data when it's ready and available.
Sean goes on to indicate that things like a user’s address can be used by many services, however, it’s all about the context in which that piece of data is used by that service. For this reason it’s ok for our system to have many different representations of, effectively, the same piece of data. For example, to an accounting service, a user’s address is merely a string that gets printed onto a letter or an invoice and it has no further meaning beyond that. However, to a shipping service, the user’s address can and probably will affect things like delivery timescales and shipping costs.
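Sean’s point about context-specific representations could be sketched as follows (hypothetical class names of my own, purely to illustrate the idea):

```csharp
// To the accounting service, an address is just an opaque string that
// gets printed onto a letter or an invoice:
public class AccountingCustomer
{
    public Guid CustomerId { get; set; }
    public string Address { get; set; }
}

// To the shipping service, the *same* underlying data is structured,
// because it drives delivery timescales and shipping costs:
public class ShippingCustomer
{
    public Guid CustomerId { get; set; }
    public string PostCode { get; set; }
    public string Country { get; set; } // used to price shipping and estimate delivery
}
```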
Sean ends his talk by explaining that, whilst a piece of data can be represented in different ways by different parts of the system, only one service ever has control to write that data whereas all other services that may need that data in their own representation will only ever be read-only.
The next session was Richard Dalton’s “Burnout”. This was a fascinating session and is quite an unusual talk to have at these DDD events, albeit a very important talk to have, IMHO. Richard’s session was not about a new technology or method of improving our software development techniques as many of the other sessions at the various DDD events are, but rather this session was about the “slump” that many programmers, especially programmers of a certain age, can often feel. We call this “burnout”.
Richard started by pointing out that developer “burnout” isn’t a sudden “crash-and-burn” explosion that suddenly happens to us, but rather it’s more akin to a candle - a slow burn that gradually, but inevitably, burns away. Richard wanted to talk about how burnout affected him and how it can affect all of us, and importantly, what can we do to overcome the burnout if and when it happens to us. His talk is about “keeping the fire alive” – that motivation that gets you up in the morning and puts a spring in your step to get to work, get productive and achieve things.
Richard starts by briefly running through the agenda of his talk. He says he’ll talk about the feelings of being a bad programmer, and the “slump” that you can feel within your career, he’ll talk about both the symptoms and causes of burnout, discuss our expectations versus the reality of being a software developer along with some anti-patterns and actions.
We’re shown a slide of some quite shocking statistics regarding the attrition rate of programmers. Computer Science graduates were surveyed to see who was still working as a programmer after a certain length of time. After 6 years, the proportion of CS graduates still working as programmers is 57%; after 20 years, this number is only 19%. It’s clear that the realistic average lifespan of a programmer is perhaps only around 20-30 years.
Richard continues by stating that there’s really no such thing as a “computer programmer” anymore – there’s no longer a job with that title. We’re all “software developers” these days, and whilst that obviously entails programming of computers, it also entails far more tasks and responsibilities. Richard talks about how his own burnout started; he first felt it was at least partially caused by his job and his then-current employer. Although a good and generous employer, they were one of many companies who claimed to be agile, but really only did enough to be able to use the term without becoming truly agile. He left this company to move to one that really did fully embrace the agile mantra; however, due to lots of long-standing technical debt issues, agile didn’t really seem to be working for them either. Clearly, the first job was not the cause (or at least not the only cause) of Richard’s burnout. He says how every day was a bad day, so much so that he could specifically remember the good days as they were so few and far between.
He felt his work had become both Dull and Overwhelming. This is where the work you do is entirely unexciting, with very little sense of accomplishment once performed, but also very overwhelming, often manifested in relatively simple tasks taking far longer to accomplish than they really should, frequently due to “artificial complexity”. Artificial complexity is the complexity that is not inherent within the system itself, but rather the complexity added by taking shortcuts in the software design in the name of expediency. This accrues technical debt which, if not paid off quickly enough, leads to an unwieldy system that is difficult to maintain. Richard also states how, from this, he felt that he simply couldn’t make a difference. His work seemed almost irrelevant in the grand scheme of things, and this leads to frustration and procrastination. This all eventually leads to feelings of self-doubt.
Richard continues talking about his own career and it was at this point he moved to Florida in the US where he worked for 1.5 years. This was a massive change, but didn’t really address the burnout and when Richard returned he felt as though the entire industry had moved on significantly in those 1.5 years when he was away, whilst he himself had remained where he was before he went. Richard wondered why he felt like this. The industry had indeed changed in that time and it’s important to know that our industry does change at a very rapid pace. Can we handle that pace of change? Many more developers were turning to the internet and producing blogs of their own and the explosion of quality content for software developers to learn from was staggering. Richard remarks that in a way, we all felt cleverer after reading these blogs full of useful knowledge and information, but we all feel more stupid as we feel that others know far more than we do. What we need to remember is that we’re reading the blogs showing the “best” of each developer, not the worst.
We move on to actually discuss “What is burnout?” Richard states that it really all starts with stress. This stress is often caused by the expectation vs. reality gap – what we feel we should know vs. what we feel we actually do know. Stress then leads to a cognitive decline. The cognitive decline leads to work decline, which then causes further stress. This becomes a vicious circle feeding upon itself, and it all starts long before we really consider that we may be becoming burnt out. It can manifest itself as a feeling of being trapped, particularly within our jobs, and this in turn leads to fatigue. From here we can become irritable, start to feel self-doubt and become self-critical. This can also lead to feeling overly negative and assuming that things just won’t work even when trying to work at them. Richard uses a phrase that captured his own slump: “On good days he thought about changing jobs. On bad days he thought about changing career”! Richard continues by stating that often the number 1 symptom of not having burnout is thinking that you do indeed have it. If you think you’re suffering from burnout, you probably aren’t, but when you do have it, you’ll know.
Now we’re moving on to look at what actually leads to burnout. This often starts with a set of unclear expectations, both in our work life and in our general life as a software developer. It can also come from having too many responsibilities, sleep and relaxation issues and a feeling of sustained pressure. This all tends to occur within the overarching weight of expectation versus the reality of what can be achieved.
Richard states that it was this raised expectation of the industry itself (witness the emergence of agile development practices, test-driven development practices and a general maturing of many companies’ development processes and practices in a fairly short space of time), combined with a reality that simply didn’t live up to those expectations, that ultimately led to him feeling a great amount of stress. For Richard, it was specifically a “bad” implementation of agile software development which actually created more pressure and artificial work stress. The implementation of a new development practice that is supposed to improve productivity naturally raises expectations, but when it goes wrong, it can widen the gap between expectation and reality, causing ever more stress. He does further mention that this trigger for his own feelings of stress may or may not be what causes stress in others.
Richard talks about some of the things that we do as software developers that can often contribute to the feelings of burnout or of increasing stress. He discusses how software frameworks – for example the recent explosion of JavaScript frameworks – can lead to an overwhelming amount of choice. Too much choice then often leads to paralysis and Richard shares a link to an interesting video of a TED talk that confirms this. We then move on to discuss software side projects. They’re something that many developers have, but if you’re using a side-project as a means to gain fulfilment when that fulfilment is lacking within your work or professional life, it’s often a false solution. Using side-projects as a means to try out and learn a new technology is great, but they won’t fix underlying fulfilment issues within work. Taking a break from software development is great, however, it’s often only a short-term fix. Like a candle, if there’s plenty of wax left you can extinguish the candle then re-light it later, however, if the candle has burned to nothing, you can’t simply re-ignite the flame. In this case, the short break won’t really help the underlying problem.
Richard proceeds to the final section of his talk and asks “what can we do to combat burnout?” He suggests we must first “keep calm and lower our expectations!”. This doesn’t mean giving up, it means continuing to desire the professionalism within both ourselves and the industry around us, but acknowledging and appreciating the gap that exists between expectation and reality. He suggests we should do less and sleep more. Taking more breaks away from the world of software development and simply “switching off” more often can help recharge those batteries, and we’ll come back feeling a lot better about ourselves and our work. If you do have side-projects, make it just one. Having many side-projects is often the result of starting many things but finishing none. Starting only one thing and seeing it through to the finish is a far better proposition and provides a far greater sense of accomplishment. Finally, we look at how we can deal with procrastination. Richard suggests one of the best ways to overcome it at work is to pair program.
Finally, Richard states that there’s no shame in burnout. Lots of people suffer from it even if they don’t call it burnout. Whenever you have that “slump” of productivity, it can be a sign that it’s time to do something about it. Ultimately, though, we each have to find our own way through it and do what works for us to overcome it.
The final talk before lunch was on the sponsor’s track, and was “Native Cross-Platform mobile apps with C# & Xamarin.Forms” by Peter Major. Peter first states his agenda with this talk and that it’s all about Xamarin, Xamarin.Forms and what they both can and can’t do and also when you should use one over the other.
Peter starts by indicating that building mobile apps today is usually split between taking a purely “native” approach – where we code specifically for the target platform and often need multiple teams of developers for each platform we’ll be supporting – versus a “hybrid” approach which often involves using technologies like HTML5 and JavaScript to build a cross-platform application which is then deployed to each specific platform via the use of a “container” (i.e. using tools like phonegap or Apache’s Cordova).
Peter continues by looking at what Xamarin is and what it can do for us. Xamarin allows us to build mobile applications targeting multiple platforms (iOS, Android, Windows Phone) using C# as the language. We can leverage virtually all of the .NET or Mono framework to accomplish this. Xamarin provides “compiled-to-native” code for our target platforms and also provides a native UI for our target platforms too, meaning that the user interface must be designed and implemented using the standard and native design paradigms for each target platform.
Peter then talks about what Xamarin isn’t. It’s not a write-once, run-anywhere UI, and it’s not a replacement for learning about how to design effective UIs for each of the various target platforms. You’ll still need to know the intricacies of each platform that you’re developing for.
Peter looks at Xamarin.iOS. He states that it’s AOT (Ahead-Of-Time) compiled to an ARM assembly. Our C# source code is pre-compiled to IL which in turn is compiled to a native ARM assembly which contains the Mono framework embedded within it. This allows us as developers to use virtually the full extent of the .NET / Mono framework. Peter then looks at Xamarin.Android. This is slightly different to Xamarin.iOS as it’s still compiled to IL code, but then the IL code is JIT (Just-In-Time) compiled inside a Mono virtual machine within the Android application. It doesn’t run natively inside the Dalvik runtime on Android. Finally, Peter looks at Xamarin.WindowsPhone. This is perhaps the simplest to understand as the C# code is compiled to IL and this IL can run (in a Just-In-Time manner) directly against the Windows Phone’s own runtime.
Peter then looks at whether we can use our favourite SDKs and NuGet packages in our mobile apps. Generally, the answer is yes. SDKs such as Amazon’s Kinesis, for example, are fully usable, but NuGet packages need to target PCLs (Portable Class Libraries) if they’re to be used.
Peter asks whether applications built with Xamarin run slower than pure native apps, and the answer is that they generally run at around the same speed. Peter shows some statistics around this; however, he does also state that the app will certainly be larger in size than a natively written app. Peter indicates, though, that Xamarin does have a linker, so it will build your app with a cut-down version of the Mono framework that includes only those parts of the framework that you’re actually using.
We can use pretty much all C# code and target virtually all of the .NET framework’s classes when using Xamarin with the exception of any dynamic code, so we can’t target the dynamic language runtime or use the dynamic keyword within our code. Because of this, usage of certain standard .NET frameworks such as WCF (Windows Communication Foundation) should be done very carefully as there can often be dynamic types used behind the scenes.
Peter then moves on to talk about the next evolution with Xamarin, Xamarin.Forms. We’re told that Xamarin.Forms is effectively an abstraction layer over the disparate UIs for the various platforms (iOS, Android, Windows Phone). Without Xamarin.Forms, the UI of our application needs to be designed and developed to be specific to each platform that we’re targeting, even if the application code can be shared, but with Xamarin.Forms the amount of platform-specific UI code is massively reduced. It’s important to note that the UI is not completely abstracted away, there's still some amount of specific code per platform, but it's a lot less than when using "standard" Xamarin without Xamarin.Forms.
Developing with Xamarin.Forms is very similar to developing a WPF (Windows Presentation Foundation) application. XAML is used for the UI mark-up, and the premise is that it allows the developer to develop by feature and not by platform. Similarly to WPF, the UI can be built up using code as well as XAML mark-up, for example:
Content = new StackLayout
{
    Children = { new Button { Text = "Normal" } }
};
Xamarin.Forms works by taking our mark-up, which defines the placement of Xamarin.Forms-specific “controls” and user interface elements, and converting them using a platform-specific “renderer” into native platform controls. By default, using the standard built-in renderers means that our apps won’t necessarily “look” like the native apps you’d find on the platform. You can customize specific UI elements (i.e. a button control) for all platforms, or you can make the customisation platform-specific. This is achieved with a custom renderer class that inherits from the appropriate built-in renderer (for example, ButtonRenderer for a button, or EntryRenderer for a text entry) and adds the required customisations specific to the platform being targeted.
Peter continues to tell us that Xamarin.Forms apps are best developed using the MVVM pattern. MVVM is Model-View-ViewModel and allows a good separation of concerns when developing applications, keeping the application code separate from the user interface code. This mirrors the best practice for development of WPF applications. Peter also highlights the fact that most of the built-in controls will provide two-way data binding right out of the box. Xamarin.Forms has "attached properties" and triggers. You can "watch" a specific property on a UI element and, in response to changes to the property, alter other properties on other UI elements. This provides a nice and clean way to effectively achieve the same functionality as the much older (and more verbose) INotifyPropertyChanged event pattern.
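As a rough sketch of what that two-way binding looks like in code (assuming a hypothetical UserViewModel class that implements INotifyPropertyChanged and exposes a UserName property):

```csharp
// Two-way bind an Entry's Text to the view model's UserName property.
// Changes typed into the Entry flow back to the view model, and
// programmatic changes to UserName update the Entry on screen.
var entry = new Entry();
entry.SetBinding(Entry.TextProperty, "UserName", BindingMode.TwoWay);
entry.BindingContext = new UserViewModel(); // hypothetical view model
```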
Peter proceeds to talk about how he performs testing of his Xamarin and Xamarin.Forms apps. He says he doesn’t do much unit testing, but performs extensive behavioural testing of the complete application instead. For this, he recommends Xamarin’s own Calabash framework.
Peter continues by explaining how Xamarin.Forms mark-up contains built-in simple behaviours so, for example, you can check a textbox's input is numeric without needing to write your own code-behind methods to perform this functionality. It can be as simple as using mark-up similar to this:
<Entry Placeholder="Sample">
    <Entry.Behaviors>
        <local:NumericTextboxBehaviour />
    </Entry.Behaviors>
</Entry>
Peter remarks on the speed of apps developed with Xamarin.Forms and concludes that they are definitely slower than either native apps or even normal Xamarin-developed apps. This is, unfortunately, the trade-off for the improved productivity in development.
Finally, Peter concludes his talk by summarising his views on Xamarin.Forms. The good: One UI Layout and very customizable although this customization does come with a fair amount of initial investment to get platform-specific customisations looking good. The bad: Xamarin.Forms does still contain some bugs which can be a development burden. There’s no XAML “designer” like there is for WPF apps – it all has to be written in a basic mark-up editor. Peter also states how the built-in Xamarin.Forms renderers can contain some internal code that is difficult to override, thus limiting the level of customization in certain circumstances. Finally, he states that Xamarin.Forms is not open source, which could be a deciding factor for adoption by some developers.
After Peter’s talk it was time for lunch! Lunch at DDDSW was very fitting for the location we were in, the South-West of England. As a result, lunch consisted of a rather large pasty of which we could choose between Steak or Cheese & Onion varieties, along with a packet of crisps and a piece of fruit (a choice of apples, bananas or oranges), along with more tea and coffee! I must say, this was a very nice touch – especially having some substantial hot food – and certainly made a difference from a lot of the food that is usually served for lunch at the various DDD events (which is generally a sandwich with no hot food options available).
After scoffing my way through the large pasty, my crisps and the fruit – after which I was suitably satiated – I popped outside the building to make a quick phone call and enjoy some of the now pleasant and sunny weather that had overcome Bristol.
After a pleasant stroll around outdoors during which I was able to work off at least a few of the calories I’d just consumed, I headed back towards the Redcliffe Sixth Form Centre for the two remaining sessions of the afternoon.
I headed back inside and up the stairs to the session rooms to find the next session. This one, similar to the first of the morning, was all about Service Oriented Architecture and designing distributed applications.
So the first of the afternoon’s sessions was “Introduction to NServiceBus and the Particular Platform” by Mauro Servienti. Mauro’s talk was to be an introduction to designing and building distributed applications with a SOA (Service Oriented Architecture) and how we can use a platform like NServiceBus to easily enable that architecture.
Mauro first starts with his agenda for the talk. He’ll explain what SOA is all about, then he’ll move on to discuss long running workflows in a distributed system and how state can be used within. Finally, he’ll look at asynchronous monitoring of asynchronous processes for those times when something may go wrong and allow us to see where and when it did.
Mauro starts by explaining the core tenets of NServiceBus. Within NServiceBus, all boundaries are explicit. Services are constrained and cannot share things between them. Services can share schema and a contract but never classes. Services are also autonomous, and service compatibility is based solely upon policy.
NServiceBus is built around messages. Messages are divided into two types: commands and events. Each message is an atomic piece of information and is used to drive the system forward in some way. Commands are imperative messages and are directed to a well-known receiver. The receiver is expected (but not compelled) to act upon the command. Events are messages that are an immutable representation of something that has already happened, and are directed to anyone that is interested. Commands and events are messages with a semantic meaning, and NServiceBus enforces these semantics - it prevents trying to broadcast a command message to many different, possibly unknown, subscribers and permits this kind of “fire-and-forget” publishing only for event messages.
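One way NServiceBus can express this distinction is via its ICommand and IEvent marker interfaces (the message names here are hypothetical examples of my own):

```csharp
// A command: imperative, sent to one well-known receiver via Send().
public class PlaceOrder : NServiceBus.ICommand
{
    public Guid OrderId { get; set; }
}

// An event: an immutable fact about the past, broadcast via Publish()
// to any interested subscribers.
public class OrderPlaced : NServiceBus.IEvent
{
    public Guid OrderId { get; set; }
}
```

Attempting to Publish() a command, rather than Send() it, is rejected, which is how the command/event semantics get enforced.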
We’re told about the two major messaging patterns. The first is request and response. Within the request/response pattern, a message is sent to a known destination - the sender knows the receiver perfectly, but the receiver doesn't necessarily know the sender. Here, there is coupling between the sender and the receiver. The other major message pattern is publish and subscribe (commonly referred to as pub/sub). This pattern has the constituent parts of the system become “actors”, and each “actor” in the system can act on some message that is received. Command messages are created, and every command also raises an event message to indicate that the command was requested. These event messages are published, and subscribers can receive them without having to be known to the command generator. Events are broadcast to anyone interested, and subscribers can subscribe, listen and act on the event, or not act on it. Within a pub/sub system, there is much less coupling between the system’s constituent parts, and the little coupling that exists is inverted; that is, the subscriber knows where the publisher is, not the other way round.
In a pub/sub pattern, versioning is the responsibility of the publisher. The publisher can publish multiple versions of the same event each time an event is published. This means that we can have numerous subscribers, each of which can be listening for, and acting upon, different versions of the same event message. As a developer using NServiceBus, your job is primarily to write message handlers to handle the various messages passing around the system. Handlers must be stateless; this helps scalability as well as concurrency. Handlers live inside an “Endpoint” and are hosted somewhere within the system. Handlers are grouped into "services", which are logical concepts within the business domain (i.e. shipping, accounting etc.). Services are hosted within Endpoints, and Endpoint instances run on a Windows machine, usually as a Windows Service.
NServiceBus messages are simply classes. They must be serializable to be sent over the wire. NServiceBus messages are generally stored and processed within memory, but can be made durable so that if a subscriber fails and is unavailable (for example, the machine has crashed or gone down) these messages can be retrieved from persistent storage once the machine is back online.
NServiceBus message handlers are also simply classes, which implement the IHandleMessages generic interface like so:
public class MyMessageHandler : IHandleMessages<MyMessage>
{
    // NServiceBus invokes Handle for every incoming MyMessage
    public void Handle(MyMessage message) { /* act upon the message */ }
}
So here we have a class defined to handle messages of type MyMessage.
NServiceBus endpoint mappings are defined within either the app.config or the web.config file within the solution:
<UnicastBusConfig>
  <MessageEndpointMappings>
    <add Assembly="MyMessages" Endpoint="MyMessagesEndpoint" />
  </MessageEndpointMappings>
</UnicastBusConfig>
Such configuration settings are only required on the Sender of the message. There is no need to configure anything on the message receiver.
NServiceBus has a BusConfiguration class. You use it to define which messages are defined as commands and which are defined as events. This is easily performed with code such as the following:
var cfg = new BusConfiguration();
cfg.UsePersistence<InMemoryPersistence>();
cfg.Conventions()
    .DefiningCommandsAs( t => t.Namespace != null && t.Namespace.EndsWith( ".Commands" ) )
    .DefiningEventsAs( t => t.Namespace != null && t.Namespace.EndsWith( ".Events" ) );

using ( var bus = Bus.Create( cfg ).Start() )
{
    Console.Read();
}
Here, we’re declaring that the Bus will use in-memory persistence (rather than any disk-based persistence of messages), and we’re saying that all of our command messages are defined within a namespace that ends with the string “.Commands” and that all of our event messages are defined within a namespace ending with the string “.Events”.
Mauro then shows all of this theory with some code samples. He has an extensive set of samples that shows virtually all aspects of NServiceBus, and the solution is freely available on GitHub at the following URL: https://github.com/mauroservienti/NServiceBus.Samples
Mauro goes on to state that when sending and receiving commands, the subscriber will usually work with concrete classes when handling messages for that specific command; however, when sending or receiving event messages, the subscriber will work with interfaces rather than concrete classes. This is a best practice and helps greatly with versioning.
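Following that best practice, an event can be expressed as an interface so that a new version can be layered on via interface inheritance without breaking existing subscribers. A minimal sketch, with illustrative names of my own rather than ones from the samples:

```csharp
using System;

// Version 1 of the event, expressed as an interface.
public interface IOrderShipped
{
    Guid OrderId { get; set; }
}

// Version 2 extends version 1; handlers bound to IOrderShipped keep working,
// because any IOrderShippedV2 message is also an IOrderShipped.
public interface IOrderShippedV2 : IOrderShipped
{
    DateTime ShippedAtUtc { get; set; }
}

// NServiceBus can supply a concrete type for an interface-based event via
// the Action<T> overload of Publish, given an IBus instance:
//   bus.Publish<IOrderShippedV2>(e =>
//   {
//       e.OrderId = id;
//       e.ShippedAtUtc = DateTime.UtcNow;
//   });
```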
NServiceBus allows you to use your own persistence store for persisting messages. A typical store used is RavenDB, but virtually anything can be used. There are only two interfaces that need to be implemented by a storage provider, and many well-known databases and storage mechanisms (RavenDB, NHibernate/SQL Server etc.) have integrations with NServiceBus such that they can be used as persistent storage. NServiceBus can also use third-party message queues: MSMQ, RabbitMQ, SQL Server, Azure Service Bus etc. can all be used. By default, NServiceBus uses the built-in Windows MSMQ for messaging.
Mauro goes on to talk about state. He asks, “What if you need state during a long-running workflow of message passing?” He explains how NServiceBus accomplishes this using “Sagas”. Sagas are durable, stateful and reliable, and they guarantee state persistence across message handling. They can express message and state correlation and they empower "timeouts" within the system to make decisions in an asynchronous world – i.e. they allow a command publisher to be notified after a specific "timeout" of elapsed time as to whether the command did what was expected or if something went wrong. Mauro demonstrates this using his NServiceBus sample code.
Mauro explains how the business endpoints are responsible for storing the business state used at each stage (or step) of a saga. The original message that kicks off a saga only stores the "orchestration" state of the saga (for example, an Order management service could start a saga that uses an Order Creation service, a Warehouse Service and a Shipping service that creates an order, picks the items to pack and then finally ships them).
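In NServiceBus a saga is a class deriving from Saga&lt;TSagaData&gt;, with the saga's data class holding the orchestration state. The sketch below is illustrative only - the message types (OrderCreated, ItemsPicked, PickItems, ShipOrder) are hypothetical names loosely mirroring the order example above, not code from Mauro's samples:

```csharp
using System;
using NServiceBus;
using NServiceBus.Saga;

public class OrderSagaData : ContainSagaData
{
    public Guid OrderId { get; set; }   // correlation + orchestration state only
    public bool Picked { get; set; }
}

public class OrderSaga : Saga<OrderSagaData>,
    IAmStartedByMessages<OrderCreated>, // the message that kicks off the saga
    IHandleMessages<ItemsPicked>
{
    protected override void ConfigureHowToFindSaga(SagaPropertyMapper<OrderSagaData> mapper)
    {
        // Correlate incoming messages to the right saga instance by OrderId.
        mapper.ConfigureMapping<ItemsPicked>(m => m.OrderId).ToSaga(s => s.OrderId);
    }

    public void Handle(OrderCreated message)
    {
        Data.OrderId = message.OrderId;
        Bus.Send(new PickItems { OrderId = message.OrderId });
    }

    public void Handle(ItemsPicked message)
    {
        Data.Picked = true;
        Bus.Send(new ShipOrder { OrderId = message.OrderId });
        MarkAsComplete();               // the saga's work is done
    }
}
```

The saga's Data is persisted between messages, which is what gives the workflow its durable, stateful, reliable character.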
The final part of Mauro’s talk is about monitoring, and how we can monitor the flow of the various messages passing around an inherently asynchronous system. He states that auditing is a key feature, and that this is required when we have many asynchronous messages floating around a system in a disparate fashion. NServiceBus provides a "behind-the-scenes" piece of software called "ServiceControl". ServiceControl sits in the background of all components within a system that are publishing or subscribing to NServiceBus messages, and it keeps its own copy of all messages sent and received within that entire system. It therefore gives us a single place where we can get a complete overview of all of the messages from the entire system, along with their current state.
The company behind NServiceBus also provides separate software called “ServiceInsight”, which Mauro quickly demonstrates to us, showing how it provides a holistic overview and monitoring of the entire message passing process and the instantiation and individual steps of long-running sagas. It displays all of this data in a user interface that looks not dissimilar to an SSIS (SQL Server Integration Services) workflow diagram.
Mauro states that handling asynchronous messages can be hard. In a system built with many disparate messages, we cannot ever afford to lose a single message. To prevent message loss, Mauro says that we should never use try/catch blocks within our business code; NServiceBus will automatically add this kind of error handling within the creation, generation and sending of messages. We need to consider transient failures as well as business errors. NServiceBus will perform its own retries for transient failures of messages, but business errors must be handled by our own code. Messages that still fail to be delivered after the configured maximum number of retries are placed into a special error queue by NServiceBus itself, allowing us to handle these failed messages as special cases. To this end, Particular Software also have a separate piece of software called "ServicePulse" which allows monitoring of the entire infrastructure. This includes all message endpoints, to see if they’re up and available to send/receive messages, as well as full monitoring of the failed message queue.
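To give a flavour of that retry behaviour: in the NServiceBus of this era the retries and the error queue were configured in XML. This fragment is illustrative only (the retry counts and intervals are arbitrary values of my own), showing immediate first-level retries, delayed second-level retries, and the error queue that failed messages finally land in:

```xml
<!-- Illustrative app.config fragment: retry and error-queue settings -->
<configSections>
  <section name="TransportConfig"
           type="NServiceBus.Config.TransportConfig, NServiceBus.Core" />
  <section name="SecondLevelRetriesConfig"
           type="NServiceBus.Config.SecondLevelRetriesConfig, NServiceBus.Core" />
  <section name="MessageForwardingInCaseOfFaultConfig"
           type="NServiceBus.Config.MessageForwardingInCaseOfFaultConfig, NServiceBus.Core" />
</configSections>

<TransportConfig MaxRetries="5" />                    <!-- immediate retries -->
<SecondLevelRetriesConfig Enabled="true"
                          NumberOfRetries="3"
                          TimeIncrease="00:00:10" />  <!-- delayed retries -->
<MessageForwardingInCaseOfFaultConfig ErrorQueue="error" />
```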
After Mauro’s talk it was time for another break. Unlike the earlier breaks throughout the day, this one was a bit special. As well as the usual teas and coffees that were available all day long, this break treated all of the attendees to some lovely cream teas! This was a very pleasant surprise and ensured that all conference attendees were incredibly well-fed throughout the entire conference. Kudos to the organisers, and specifically the sponsors who allowed all this to be possible.
After our lovely break with the coffee and cream teas, it was on to the second session of the afternoon and indeed, the final session of the whole DDD event. The final session was entitled “Monoliths to Microservices : A Journey”, presented by Sam Elamin.
Sam works for Just Eat, and his talk is all about the journey he has taken within his work to move from large, monolithic applications to re-implementing the required functionality in a leaner, distributed system composed largely of microservices.
Sam first mentions the motivation behind his talk: failure. He describes how we learn from our failures, and states that we need to talk about our failures more, as it’s only from failure that we can really improve.
He asks, “Why do we build monoliths?” As developers, we know the monolith will become painful over time, but we build these systems because we need to ship fast. People then use our system and, over time, we add more and more features to it. We very rarely, if ever, get the opportunity to go back and break things down into better-structured code and implement a better architecture. Wanting to spend time performing such work is often a very hard sell to the business, as we’re talking to them about a pain that they don’t feel. It’s only the developers who feel the pain during maintenance work.
Sam then states that it’s not all a bed of roses if we break down our systems into smaller parts. Breaking down a monolithic application into smaller components reduces the complexity of each individual component, but that complexity isn’t removed from the system; it’s moved from within the individual components to the interactions between them.
Sam shares a definition of "What is a microservice?" He says that Greg Young once said, "It’s anything you can rewrite in a week". He states that a microservice should be a "business context", i.e. a single business responsibility and discrete piece of business functionality.
But how do we start to move a monolithic application to a smaller, microservices-based application? Well, Sam tells us that he himself started with DDD (Domain Driven Design) for the design of the application and to determine bounded contexts – which are the distinct areas of services or functionality within the system. These boundaries would then communicate, as the rest of the system communicated, with messages in a pub/sub (Publisher/Subscriber) mechanism, and each conceptual part of the system was entirely encapsulated by an interface – all other parts of the system could only communicate through this interface.
Sam then talks about something that they hadn’t actually considered when they first started on the journey: Race Hazards. Race Hazards, or Race Conditions as they are also known, within a distributed message-based architecture are failures in the system due to messages being lost or received out of order, and the inability of the system to deal with this. Testing for these kinds of failures is hard, as asynchronous messages can be difficult to test by their very nature.
Along the journey, Sam discovered that things weren’t proceeding as well as expected. The boundaries within the system were unclear and there was no clear ownership of each bounded context within the business. This is something that is really needed in order for each context to be accurately defined and developed. It’s also really important to get a good ubiquitous language - which is a language and way of talking about the system that is structured around the domain model and used by all team members to connect all the activities of the team with the software - correct so that time and effort is not wasted trying to translate between code "language" and domain language.
Sam mentioned how the team’s overly strict code review process actually slowed them down. He says that code reviews are usually used to address the symptom rather than the underlying problem, which is not having good enough tests of the software, services and components. He says this also applies to ensuring the system has the necessary amount of monitoring, auditing and metrics implemented within it to ensure speedy diagnosis of problems.
Sam talks about how, in a distributed system of many micro-services, there can be a lot of data duplication. One area of the system can deal with its own definition of a “customer”, whilst another area of the system deals with its own definition of that same “customer” data. He says that businesses fear things like data duplication, but that it really shouldn't matter in a distributed system and it's often actually a good thing – this is frequently seen in systems that implement CQRS patterns, eventual consistency and correct separation of concerns and contexts through DDD. Sam states that, for him, code decoupling is vastly preferable to avoiding code duplication – if you have to duplicate some code in two different areas of the system in order to correctly provide a decoupled environment, then that’s perfectly acceptable, and introducing coupling simply to avoid code duplication is bad.
He further states that business monitoring (in-production monitoring of the running application and infrastructure) is also preferable to acceptance tests. Continual monitoring of the entire production system provides the most useful set of metrics for the business, and with metrics comes freedom. You can improve the system only when you know which bits you can easily replace, and only when you know which bits actually need to be replaced for the right reasons (i.e. replacing one component due to low performance where the business monitoring has identified that this low-performance component is a genuine system bottleneck). Specifically, business monitoring can provide great insights not just into the system’s performance but also the business’s performance and trends, too. For example, monitoring can surface data such as spikes in usage. From here we can implement alerts based upon known metrics – i.e. we know we get around X number of orders between 6pm and 10pm on a Friday night; if this number drops by Y%, then send an alert.
Sam talks about "EventStorming" (a phrase coined by Alberto Brandolini) with the business/domain experts. He says he would get them all together in a room and talk about the various “events” and “commands” that exist within the business domain, whilst avoiding any vocabulary that is technical. All language used is expressed within the context of the business domain (i.e. an order comes in, a product is shipped etc.). He states that using Event Storming really helped to move the development of the system forward, helping to define both the correct boundaries of the domain contexts and the functionality that each separate service within the system would provide.
Finally, Sam says the downside of moving to microservices is that it’s a very time-consuming approach and can be very expensive (both in terms of financial cost and time cost) to define the system, the bounded contexts and the individual commands and events of the system. Despite this, it’s a great approach, and using it within his own work, Sam has found that it’s provided the developers within his company with a reliable, scalable and maintainable system, and most importantly it’s provided the business with a system that supports their business needs both now and into the future.
After Sam’s session was over, we all reconvened in the common room and communal hall for the final section of the day. This was the prize-draw and final wrap-up.
The organisers first thanked the very generous sponsors of the event as without them the event simply would not have happened. Moreover, we wouldn’t have been anywhere nearly as well fed as we were!
There were a number of prize draws, and the first batch was a prize from each of the in-house sponsors who had been exhibiting at the event. Prizes here ranged from a ticket to the next NDC conference to a Raspberry Pi kit.
After the in-house sponsors had given away their individual prizes, there was a “main” prize draw, with winners drawn randomly from the event feedback provided about each session by each conference attendee. Amongst the prizes were iPad minis, a Nexus 9 tablet, technical books, laser pens and a myriad of software licenses. I sat as the winners’ names were read out, watching as each person was called and the iPads and Nexus 9 were claimed by the first few people drawn as winners. Eventually, my own name was read out! I was very happy and went up to the desk to claim my prize. Unfortunately, the iPads and Nexus 9 were already gone, but I managed to get myself a license for PostSharp Ultimate.
After this, the day’s event was over. There was a customary geek dinner that was to take place at a local Tapas restaurant later in the evening, however, I had a long drive home from Bristol back to the North-West ahead of me so I was unable to attend the geek dinner after-event.
So, my first DDD South-West was over, and I must say it was an excellent event. Very well run and organised by the organisers and the staff of the venue and of course, made possible by the fantastic sponsors. I’d had a really great day and I can’t wait for next year’s DDDSW event!