DDD East Midlands 2019 In Review
On a rather wet Saturday, the 26th of October 2019, the inaugural DDD East Midlands was held at the Nottingham Conference Centre. This event is a new addition to the roster of DDD (DeveloperDeveloperDeveloper) events held annually around the country and was organised primarily by Jessica White and Moreton Brockley.
After travelling down from the North-West in the early morning, I arrived in Nottingham at approximately 8:30am and parked my car in one of the car parks suggested by the event organisers, only a few minutes' walk from the venue. After parking up, a rather brisk walk to avoid as much of the rain as possible brought me to the venue entrance and the registration desk.
DDD East Midlands, being one of the newer DDD events, has adopted a very progressive and inclusive approach towards the conference and its attendees. Upon signing in, we were able to select either a green coloured lanyard or a red one to indicate whether or not we minded having our photograph taken, and we could also specify our personal pronouns on our lanyards. All very nice touches that put the focus on the attendees and help to ensure that their experience is a great one.
After having my ticket scanned, collecting my chosen lanyard and filling out the name fields, I collected a swag bag containing some goodies (I found out later that there were some cool stickers, a screen cleaning cloth and even a lovingly printed brochure for the event itself!) and headed off to the communal area where tea, coffee and snacks were being served. After grabbing a coffee and some biscuits, and meeting up with some friends, it was time to gather in the main hall for the introductions for the day.
The organisers thanked us all for attending and gave us some important information for the day, such as WiFi passwords, how to enter the various competitions running through the day, the code of conduct and more. DDD East Midlands had three tracks of talks throughout the day, but unlike most other DDD events, the first session was a singular "keynote" session with no other sessions taking place at the same time. The keynote began in the same large hall immediately after the introductions had been completed and was Dylan Beattie's The Art Of Code.
The very beginning of Dylan's talk shows an animation on the big screen of the title of the talk being drawn out, line by line, by a plotter program written in the LOGO language. Dylan starts by stating that whilst most other conference talks are all about very useful patterns and practices for various aspects of software development, his talk is going to be all about code that is useless! Dylan's talk is about art created from code. He mentions a famous quote from Oscar Wilde, who said that "All art is quite useless", but Dylan prefers the quote from Douglas Adams, who said that "The function of art is to hold a mirror up to nature". Dylan says that in order to achieve this, we first needed to invent the mirror. And with the technology we have available today, we have all kinds of weird and wonderful mirrors that allow us to see things around us that we've never seen before, such as the eye of a beetle or a snowflake under intense magnification.
We move on to look at how modern computers have allowed us to also see things never previously seen, mostly in the world of numbers and mathematics. We first look at Conway's Game Of Life, an early cellular automaton game. It's a game with no players and only four simple rules, but Conway's Game of Life has fascinated people for decades, with more and more people continually trying to find meaning and patterns in the output. Originally, the Game Of Life was played on graph paper, with people drawing the shapes formed from each successive generation, but it's since we've been able to automate it on computers that the more interesting experiments with the game have occurred. Certain patterns were soon discovered. One is the Glider Gun, which bounces back and forth on the grid and emits an infinite stream of Glider patterns. There's also a pattern known as an Eater, which will absorb the Gliders if it's placed in their way. In this, we have the ability to generate and destroy "signals". This equates to a logic gate - an ability to determine AND, OR, NOT etc. - and from the basis of logic gates, we can build circuits. From circuits we can build computers, and from computers we can build programs. Dylan proves this with a slide that shows the visualisation of a simulation of Conway's Game Of Life, written using Conway's Game of Life itself!
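To give a flavour of just how simple those rules are, here's a minimal sketch of my own (not code from the talk) that computes a single Game of Life generation in C#:

class Life
{
    // Computes one generation: a live cell survives with 2 or 3 live
    // neighbours; a dead cell becomes live with exactly 3.
    static bool[,] Step(bool[,] grid)
    {
        int rows = grid.GetLength(0), cols = grid.GetLength(1);
        var next = new bool[rows, cols];
        for (int r = 0; r < rows; r++)
        {
            for (int c = 0; c < cols; c++)
            {
                int neighbours = 0;
                for (int dr = -1; dr <= 1; dr++)
                    for (int dc = -1; dc <= 1; dc++)
                        if ((dr != 0 || dc != 0) &&
                            r + dr >= 0 && r + dr < rows &&
                            c + dc >= 0 && c + dc < cols &&
                            grid[r + dr, c + dc])
                            neighbours++;
                next[r, c] = grid[r, c] ? (neighbours == 2 || neighbours == 3)
                                        : neighbours == 3;
            }
        }
        return next;
    }
}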
Dylan moves on to look at Chaos Theory, a specific branch of mathematics that deals with the behaviour of dynamic systems that are highly sensitive to their input data and can give wildly different output from tiny variations in it. Chaos Theory led on to complex arithmetic, which deals with imaginary numbers - numbers whose square is negative. By performing further arithmetic on our imaginary numbers and by combining them in various ways with real numbers, we can create a series of interesting numeric results. From these numbers, the series of which can differ wildly based upon tiny variations in the input numbers, we can plot a graph, and some series of numbers will create very different and interesting patterns. But it was the work of Benoit Mandelbrot, whilst working as a research fellow at IBM, that gave rise to what most of us know as fractal geometry, and specifically the Mandelbrot Set. The most interesting thing about this rendered image and the mathematics behind it is that there is no end to the level of detail contained within the image, and thus the series of numbers that produce it. We can continue to zoom in, indefinitely.
There's a beauty to this that was simply not available to mankind until we'd invented imaginary numbers and, subsequently, the powerful computers we needed to render the visualisation of the mathematics. However, the result wasn't invented. It was discovered.
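To give a sense of how little code sits behind that endless detail, here's a rough escape-time sketch of my own (not from the talk) that prints an ASCII rendering of the Mandelbrot Set - each point c is tested to see whether z = z² + c stays bounded:

using System;
using System.Numerics;

class Mandelbrot
{
    static void Main()
    {
        for (double im = -1.2; im <= 1.2; im += 0.05)
        {
            for (double re = -2.0; re <= 0.8; re += 0.03)
            {
                var c = new Complex(re, im);
                var z = Complex.Zero;
                int i = 0;
                // Iterate z = z^2 + c; points that never escape beyond
                // magnitude 2 are considered part of the set.
                while (i < 100 && z.Magnitude <= 2) { z = z * z + c; i++; }
                Console.Write(i == 100 ? '#' : ' ');
            }
            Console.WriteLine();
        }
    }
}

Zooming in is just a matter of shrinking the ranges of re and im - the detail never runs out.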
Dylan then talks about the Deep Dreaming project that was started at Google in 2015. It's based on Artificial Intelligence that attempts to find specific things inside images, but with Deep Dreaming, we're able to tell the computer to find things that we know aren't there. The result of this is some very strange, but bizarrely beautiful looking images. Dylan says that the development of such tools will continue into the future and will enable real artists to create some wonderful new pieces of art in ways that no one could have previously imagined.
We move on to look at actual code that could be considered "artful". We start by looking at the International Obfuscated C Code Contest which, each year, invites people to submit some of the most interesting code it's possible to create. Dylan shows some code whose source listing looks like ASCII art but which, when compiled and run, shows a flappy bird demo in the console. We also look at code that produces a Mandelbrot Set and whose source code is formatted to look exactly like a Mandelbrot Set. From here, we start to look at Quines. These are programs whose sole purpose is to emit their own source code. Some languages are better suited to this than others, but Dylan shows us an example in C# - one of the less well suited languages, though it's still achievable:
class Program {
    static void Main() {
        var s = @"class Program {{
    static void Main() {{
        var s = @{0}{1}{0};
        System.Console.WriteLine(s, (char)34, s);
    }}
}}";
        System.Console.WriteLine(s, (char)34, s);
    }
}
We also see a JavaScript version of the same thing, which is significantly easier to perform in that language:
(function f() {
    return '(' + f.toString() + ')();';
})();
This concept has been extended all the way to creating a Quine in HTML - that is, a web page that outputs its own "source" in valid HTML. We look at a final Quine, known as an Uroboros Quine. This is a program written in the Ruby language which, when run, outputs code in the Rust language, which when run outputs code in the Scala language, and so on and so forth, through no fewer than 128 different programming languages.
Dylan moves on to look at some of the more esoteric languages that exist in the world of programming. We first look at a language called Shakespeare, which is specifically designed so that all source code resembles a Shakespeare play. There's another esoteric language called Whitespace. It's a language designed to ignore all characters except whitespace characters (i.e. spaces, tabs etc.). It's almost impossible to view its source code unless your editor allows highlighting of whitespace.
From here, we move on to examine code as performance. Here, we look at a language and tool called Sonic Pi, which allows writing code that outputs music. What's amazing about Sonic Pi is that the generation of music happens in real-time. You can write code, highlight it and elect to run it, creating the music output generated from that code. This music could be coded in such a way as to continue looping and, whilst that is happening, you can be writing new code that can be invoked to create further musical output that plays in harmony with the original. Such possibilities have given rise to a whole new musical genre known as Algorave, which is the performance of music via program code, usually done in a completely live and improvised manner.
Dylan finishes up his talk by talking about his own contribution to the world of esoteric programming languages, based upon a quality frequently requested of programmers in recruitment agency job adverts - that of being a "rockstar" developer. The intention behind such wording in a job advert is to attract the very best talent, similar to that of a "rockstar" musician, but Dylan decided to respond to this, in an effort to confuse such recruiters, by creating an actual programming language called Rockstar. The whole idea behind the language is to express real programs in the style of 1980s rock power-ballad lyrics. Dylan explains how the language works by altering certain aspects that we'd expect to find in more "normal" languages into expressions that are lyrical and which can be interpreted as genuine programming constructs such as numerics, mathematics and more. Dylan shows the classic FizzBuzz program in the Rockstar language:
Midnight takes your heart and your soul
While your heart is as high as your soul
Put your heart without your soul into your heart
Give back your heart
Desire is a lovestruck ladykiller
My world is nothing
Fire is ice
Hate is water
Until my world is Desire,
Build my world up
If Midnight taking my world, Fire is nothing and Midnight taking my world, Hate is nothing
Shout "FizzBuzz!"
Take it to the top
If Midnight taking my world, Fire is nothing
Shout "Fizz!"
Take it to the top
If Midnight taking my world, Hate is nothing
Say "Buzz!"
Take it to the top
Whisper my world
To wrap up, Dylan says that his talk has covered computer programs that create pictures and art, computer programs that are pictures and art, and also computer programs that create music, but he wanted to finish with a computer program that was also a song. For this, Dylan gets out his guitar and performs the Rockstar language's implementation of the FizzBuzz program!
After Dylan's talk was over, it was time for the first break of the day for further refreshments. After finding my way back to the coffee station, I replenished my coffee cup and consulted the map and schedule that was helpfully printed on the inside of the lanyard to find the room for my next talk. This next talk was to be Zac Braddy's All the mistakes I've made trying to implement microservices.
Zac first introduces himself and says that he's a lead developer for a company called Koodoo. He says that we'll first look at what microservices are, and so we start off by defining microservices. Essentially, they're small applications with small scopes of functionality. Importantly, they control their own data and don't share any data or state with any other microservices or other parts of the complete software system, except via the clearly defined contract of the API.
We then start to look at the microservices primer, which is a diagram that explains how we should split our application's concerns into various services. This diagram is a few years old now and Zac states that, although it seemed good at the time it was created, the microservices primer is now really considered quite monolithic and it's very difficult to actually replace any specific "layer" within the diagram.
Zac continues by mentioning some of the places where he used to work and how he tried to introduce microservices to those organisations. He says that one company he worked for had a large monolith that was burdened with a lot of technical debt. He suggested that they should re-write the application using a microservice architecture in order to remove the technical debt. This particular organisation had a culture of "moving fast" and so didn't want to slow down in order to re-architect the application to use microservices. Zac thought he'd prove the value of microservices by simply showing the rest of the team some real world microservice code that he'd written on his own.
Zac says that this approach is that of the "lone wolf". It's dangerous and it's a lonely place to be. It's dangerous as you're effectively saying to the rest of the team that you're better than them, and also because your employers haven't necessarily told you to do this, so you're effectively robbing them of your time. Also, you can't build a complete microservices-architected solution on your own, so you absolutely need buy-in from both your team and your organisation.
Being in the "right" team is one of the best ways of introducing microservices to your organisation, assuming that such a team exists. The "right" team here is one that is open to experimenting with new ideas but also one that has the backing of the organisation's management from a prior culture of delivering value. If such a team doesn't exist, you can try to build it yourself, but that can be hard. Zac told his organisation that they could have the best, but that it would take time. After 3 months, he found himself in a position of feeling "look how clever we are", but with nothing of real value to show for it. After 6 months, the team realised they only really had the framework for microservices and hardly any functionality that would deliver real business value.
Zac realised that the mistake in this approach was that of "going dark" from the business whilst the team spent 6 months on their own trying to craft their microservice architecture. From this, Zac realised that it's important to create not only an MVP (Minimum Viable Product) but also an MVTP (Minimum Viable Technical Platform). We can know what a "perfect" system may look like, but we won't have the time to implement it all when we're merely establishing our underlying framework, and it's important to remember that many things can wait until v2.
At a different company, Zac tried again. They started with big services of large scope and tried to move quickly, thinking it would expedite the process of introducing the microservice architecture and that they could break down these large services into much smaller services at a later time. This, unfortunately, didn't work. Large services end up staying large, and it's almost impossible to break them down into smaller services later. Like human weight gain, it's easier to put on than to take off! Zac says it's not possible to "refactor towards microservices". If you don't start with microservices today, you don't get microservices tomorrow. You have to start with them, start small, and iterate quickly.
At this point in the talk, the fire alarm in the venue started to sound. We'd been told at the introduction that there were no fire alarm tests planned for the day, and so all attendees, speakers, helpers and everyone else had to leave the building and assemble opposite the venue at the designated assembly point. This was an unfortunate turn of events - I've never personally experienced such an occurrence at any other DDD event - but these things can happen. Of course, what made the situation worse was the rather miserable weather, which was incredibly wet, so we all stood on the street getting quite damp whilst we waited to see what would happen.
Eventually, after 10 minutes or so, the building had been checked and it was found that there wasn't a real fire, so we were all allowed to re-enter the venue. We slowly headed back inside, which took a little while as we were funnelled through a single, small open doorway. I had just got inside the door, after approximately half of the attendees, when the fire alarm started to sound again! Oh no.. We all stood still for a little while, not knowing if this was some residual noise from the previous alarm or not. A few of the venue's staff were in the hallway, using their walkie-talkies to co-ordinate the situation. Sure enough, this new fire alarm was a "real" alarm, and so we all had to turn around and head back outside again! :( Our second stint of standing in the rain lasted a little longer, as the venue staff had to re-check the building and try to find the cause of the fire alarms, which took some time. The rain continued to pour down and we got more and more wet as we stood waiting. Eventually, we were told we could wait in a different building further down the road in order to keep dry. We all made our way to the new building, which was the library associated with the University conference centre. Despite us all being very wet by this point, it was a welcome relief to get inside out of the rain.
We didn't have to wait too long before word came through that we could finally return to the conference centre. We all made our way back and retook our places as we'd left them before the first fire alarm; however, by this point we'd lost approximately 45 minutes. Although it initially appeared as though we could carry on where we'd left off, the impact to the schedule meant that some changes had to be made. As a result, Zac's session (and all the other sessions that were running concurrently with Zac's) had to be cut short, although all these speakers were told they would have a guaranteed slot at next year's event - a nice touch from the organisers, who had to deal with such an unexpected and impossible-to-plan-for chain of events on the day. Some of the subsequent sessions were also cut short by a few minutes in order to try to get the schedule back on track throughout the day.
It was now time for the next session, but Zac was able to quickly tell us that his slides are available online. I moved from the main room to a different lecture theatre in order to catch the next session of the day. This one was to be Robin Ninan's Ditching the test pyramid in the microservices era.
Robin's talk had been shortened from the scheduled hour due to the previous fire alarms, so Robin quickly started by saying that his talk was not to give an end-to-end solution for how to test microservices but rather to give a number of ideas and differing approaches that can be used as part of a microservice testing strategy.
Robin starts by introducing himself and says that he currently works as a security engineer, but also has a background in general software development. We take a quick look at the agenda of the talk: we'll establish that testing is indeed a useful activity, look at the hardships to be found in testing microservices, and then examine some approaches to make testing of microservices easier.
We start by agreeing that testing is useful. Even having only a single test gives some degree of confidence in the software, and having more tests can reduce the chance of failure of systems in production. Importantly, testing also gives improved clarity over business requirements, especially tests that are behaviour-based.
When Robin first started testing microservices, there were lots of initial hardships. These stemmed from a lack of experience and also a lack of good documentation and content in this area, and they became a large barrier to entry. There's also a lack of specialized microservice testing tools. Robin says that the risk profile for microservices is uncharted and is highly dependent upon the overall architecture of the system.
Robin shows us the old ways in which we used to test. These were mostly used on monolithic systems and, whilst the patterns can be applied to microservices, they don't really fit well as there are some fundamental differences. For example, what constitutes a "unit" test in a microservice? Is a microservice a "unit"? Maybe it is, but then the tests won't test service interaction. The same applies to integration tests. That could mean testing multiple components interacting together within a single microservice, or it could mean testing multiple microservices and the interactions between them. Either way, trying to apply the old testing approach to microservices doesn't really work, and its definitions are unclear.
Robin proceeds to clarify his approach to testing microservices, which improves upon the older testing methods. He makes a clear distinction between an "integration" test and an "integrated" test. He says that an "integration" test should be a test that exercises only a single service. If that service needs to call out to other services during the functionality being tested, this call to the outside world should be mocked. An "integrated" test is a test that exercises the interaction between multiple microservices. There are also "implementation detail" tests, which are more akin to traditional unit tests and test specific functions inside of a single microservice.
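To make that distinction concrete, here's a small illustrative sketch of my own (all names hypothetical, not Robin's code) of an "integration" test in his sense - the service is exercised for real, but its call to the outside world is mocked:

using System;

public interface ICustomerClient
{
    // In production this would call out to another microservice.
    bool Exists(int customerId);
}

public class PaymentService
{
    private readonly ICustomerClient customers;
    public PaymentService(ICustomerClient customers) => this.customers = customers;

    public bool TakePayment(int customerId, decimal amount) =>
        customers.Exists(customerId) && amount > 0;
}

// The mock: the outside world always answers "yes".
public class FakeCustomerClient : ICustomerClient
{
    public bool Exists(int customerId) => true;
}

public class PaymentServiceTests
{
    public void Payment_succeeds_for_known_customer()
    {
        var service = new PaymentService(new FakeCustomerClient());
        if (!service.TakePayment(42, 9.99m))
            throw new Exception("Expected the payment to succeed");
    }
}

An "integrated" test of the same behaviour would instead spin up both the payment and customer services and exercise the real call between them.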
We look at a blog post by J.B. Rainsberger entitled "Integrated tests are a scam". Robin disagrees with the assertion from this blog post and cites, amongst other things, the technique of "poka-yoke" - a term meaning "mistake proofing" that is used extensively at Toyota. He cites how Toyota, when building cars, will extensively test individual components of the car, but will also extensively test the integration of those same components too.
Finally, Robin finishes up by stating that there's no one single solution to testing microservices. When you first start your journey, you'll need to begin with the traditional tests and evolve them over time into something more specific to microservices, and also specific to your architecture. Unfortunately, there's currently no shortcut for this discovery process.
After Robin's talk, it was time for lunch. Unlike at other DDD events, lunch at DDD East Midlands was no brown bag affair; they'd arranged for a cooked, hot meal to be provided for all attendees. This was a very nice touch and some hot food was very welcome, especially given the cold, wet weather outside.
There was a selection of different meal options, with a couple of different meat-based dishes (I think lamb and chicken were the options) and also a vegetarian option. All meals were accompanied by rice or potatoes and a selection of vegetables and salads. Lovely!
There was also a delicious selection of cakes for dessert. Considering DDD East Midlands is a free event for attendees, having such excellent lunch options was a very pleasant surprise.
After enjoying the delicious lunchtime food and grabbing myself another cup of coffee to keep myself suitably caffeinated, I wandered around the upstairs area where the sponsors all had their stands. After chatting with a number of people manning the stands, I returned to the downstairs area and bumped into some friends. After further chats, it was soon time for the afternoon's sessions to begin. I consulted my lanyard again to find the correct room for the first session of the afternoon. This one was Ian Johnson's Reasonable Software.
Ian starts with the agenda for his session. We're going to define and look at what it means to be "reasonable". Ian says that reasonable software extends far beyond reasonable code. We can adopt tools to help develop reasonable code, but reasonable software requires that we be a reasonable person.
We realise that almost anything can be reasoned about given enough time, effort and energy; however, only justifiably complex things should require such effort. All else is likely accidental complexity, which we should seek to reduce. So, we should start with reasonable code. This is clean code, written as small, simple components with less coupling. These are more consistent and predictable, with clear responsibilities. Reasonable code is a great start, but we can go further than the code to ultimately have reasonable software.
Ian says that this comes from being a more reasonable human being - in your company, in your team, and in the wider real world. One effect of this is reasonable user interaction, which ultimately comes down to having empathy for your users. Being reasonable as a human being means asking for help as well as helping others, which in turn encourages those around you to be more empathetic and to help others themselves.
Inside of our software development teams, we need to have a vision. If you're a team leader, it's not necessarily your job to provide the vision, but it is your job to help the whole team form the vision and then adhere to it. Ian says that, despite the conventional wisdom that might suggest the opposite, do repeat yourself. This means not optimizing or generalizing our code too soon; wait until at least the third or fourth time of writing the same code before generalizing. Make sure that the code you're generalizing is indeed duplication - often what appears to be the same code isn't actually quite the same. Also beware of over-generalization and implementing more and more abstractions, as this can frequently introduce accidental complexity - something to be avoided.
Ian talks about the SOLID principles and says that they can be useful, but we should be careful not to overuse them. For example, one of the SOLID principles is the interface segregation principle, which says that we should keep our interfaces small so that consumers don't need to implement methods of an interface that they may not need; but there's no point extracting multiple interfaces if this doesn't "do" or add anything to the final solution. The same applies to most of the other SOLID principles and design patterns. Beware of overuse and realise that, often, less is more. Ian specifically calls out the Command Query Separation technique and says it's a great way to help reason about and structure code, by separating methods into commands - those that mutate state but don't return a value - and queries - those that return a value but don't mutate state.
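As a quick sketch of that split (my own example, not Ian's code), a command mutates state and returns nothing, whilst a query returns a value and leaves state alone:

using System.Collections.Generic;

public class ShoppingBasket
{
    private readonly List<string> items = new List<string>();

    // Command: mutates state, returns no value.
    public void AddItem(string item) => items.Add(item);

    // Query: returns a value, mutates nothing.
    public int ItemCount() => items.Count;
}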
Ian shares a pithy quote: "Write your code so that it's easy to delete". This means that it should be simple to delete some section of your code and replace it with a new implementation, which implies that coupling in the code is kept to a minimum and that dependencies, if any, are clearly defined and managed. Ian then talks about Connascence. Connascence is a way to define various levels and types of coupling inside of our code. Ian says that he doesn't have time to go through all the different types of Connascence - there are nine specific types - but he does call out Connascence of Identity, which is the strongest form of coupling and essentially relates to having a dependency on some kind of "singleton" object. Ian relates the story of how Netgear built some of their routers with a hard-coded dependency on the University of Wisconsin's internet time server, effectively creating a DDoS attack on that time server.
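To sketch what Connascence of Identity looks like in code (a hypothetical example of my own), compare a class welded to one specific singleton instance with one that accepts any implementation:

using System;

public interface IClock { DateTime UtcNow { get; } }

public sealed class SystemClock : IClock
{
    public static readonly SystemClock Instance = new SystemClock(); // the singleton
    public DateTime UtcNow => DateTime.UtcNow;
}

// Connascence of Identity: this class depends on THE instance, much as
// Netgear's routers depended on one specific time server.
public class Report
{
    public string Stamp() => $"Generated {SystemClock.Instance.UtcNow}";
}

// The coupling removed: any IClock will do, so the dependency can be swapped.
public class BetterReport
{
    private readonly IClock clock;
    public BetterReport(IClock clock) => this.clock = clock;
    public string Stamp() => $"Generated {clock.UtcNow}";
}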
We look back at reasonable code and see that it all starts with naming. Naming things is hard, but good names for your variables, methods and classes can make all the difference to your code's readability to others. When you're coding, it's important to keep your focus on your current layer of abstraction and try not to let ideas and concepts from other layers bleed into this one. Ask yourself: can all of the code for this layer fit inside your head? Trust your APIs. Trust the APIs that you consume, but also ensure that the APIs you expose to others aren't changed in ways that can break consumers. Use a good non-breaking versioning strategy and have a concise public surface area. Develop client-side libraries that are idiomatic in their respective languages, make sure your exposed API is well documented, and understand the audience that your API is aimed at.
Finally, Ian talks about class design. We should take care to choose good names, but also ensure that our classes are designed to have only a small number of methods. Each line of code is important, and Ian shows an example of some code that threw him off when he first looked at it. It was only a single line:
if(l1 > l2) {...}
Here, we're comparing two variables named l1 and l2 but, in certain fonts, those same variable names can look like the numeric values 11 and 12 respectively. Readers of this code can be forced to think too hard about it, and Ian states that it's important not to make your readers have to think too hard about your code. Similarly, it's possible, in a language like C#, to string together multiple concepts on a single line:
var result = foo != bar ? foo != baz ? bar != baz : baz : bar;
However, this is confusing and very difficult to read. Although it's more verbose to split such a line into multiple statements, doing so makes it far easier to see, at a glance, what the code is actually doing.
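For illustration, here's one way the line above could be unpicked (my own sketch, assuming foo, bar and baz are booleans so that the expression type-checks):

bool result;
if (foo != bar)
{
    // Only reached when foo and bar differ.
    result = foo != baz ? bar != baz : baz;
}
else
{
    result = bar;
}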
After Ian's talk was over it was time for another refreshment break during which more coffee, biscuits and some rather nice muffins and cakes were available.
After a suitable amount of re-caffeination and some delicious pastry treats, it was time for the next session of the afternoon. This one was to be Ian Cooper's How to escape the distributed monolith.
Ian starts with the agenda for his talk. We'll look at how we might move from a monolith to microservices, and compare that to moving from a monolith to a distributed monolith. We'll then look at how containers are often seen as coming to the rescue but sometimes really don't help all that much and, finally, we'll look at how to move to an event-driven architecture to truly decouple our services.
Ian states that software is largely built by teams. If we're building a monolith, having individual teams working on various parts of the monolith can result in each team getting in the others' way - especially when deploying code. We can't really do continuous delivery of a monolith, so we end up only deploying our software a small number of times per year - perhaps only 2 or 3 times. From here, we might consider splitting our monolith into microservices. This means that we can have many different teams, each working on their own area of the overall solution, where their area is its own encapsulated piece of software with its own database and its own product backlog, and which can be deployed independently. These different pieces of software can communicate with each other via either direct request/response communication or, better still, via events published to a message bus.
If we move our monolith towards microservices, it's important to create stability when doing so. When one service depends on another, we need stable boundaries and stable API contracts. One way of slicing our application into constituent parts and defining the boundaries is to do so based upon "business capabilities". Business Capabilities are things that a business does that provide value for its customers. These are usually expressed via a noun (or sometimes a noun and a verb) and always have an outcome. Business Capabilities tend to have strong alignment with business processes, which are found via value-chain mapping.
There are a number of different levels of business capabilities, and Ian shows a slide that describes these levels.
Ian states that it's at the Level 3 abstraction where microservices should be created.
We talk about the concept of Bounded Contexts, which is a Domain-Driven Design term, and examine how microservices don't necessarily align with bounded contexts. We can define our bounded contexts with business capabilities but many business capabilities can exist inside a single bounded context, thus we'll require multiple microservices in that same bounded context.
Ian moves on to look at how we can break apart a large monolithic application. He says we shouldn't try to re-write it from scratch. That monolith probably contains code that deals with many edge cases and they're there for a reason. One effective way to split a monolithic application is the "cut & shunt" pattern. To apply this, we "cut" the required functionality from the monolith by refactoring it out into an independent service, extracting that functionality from the monolith into the service. We keep both the monolith and the extracted service running together. Of course, this does imply that it's possible to extract such functionality in the first place, so it's better if the monolith does indeed have clean enough code to facilitate this. If your monolith code is fairly clean, it's possible to repeat this process until there's very little left of the monolith at all and all required functionality has been extracted to individual services.
When we've split some of our monolith's functionality into microservices, we'll need to communicate between those services to perform some specific action. Ian uses a hotel system as an example application throughout his talk and suggests that such an action within this example might be the act of creating a reservation. This would likely require a booking service, a housekeeping service and a payment service at least. Such communication is usually first done with a request-driven architecture, where each service makes a direct request to the API endpoint of the other services in order to call behaviours on those services.
Request-Driven architectures use synchronous calls, so we've effectively introduced temporal coupling into our application. This means that the recipient service needs to be available at the time we make a call to it and needs to be responsive to all callers. However, we can improve the communication between our services by changing from a Request-Driven architecture to an Event-Driven architecture. This is asynchronous communication and doesn't require that the recipient service be online and able to process and respond to our message immediately. This removes temporal coupling, but does require that we're able to publish messages to a service bus or message queue. This allows us to move towards a core microservice principle of "smart endpoints and dumb pipes". Ian also mentions the concept of a "service mesh". This is a term that's used quite a lot these days, and Ian says that the simplest way to explain a service mesh is to think of it as a series of proxies that exist within our application architecture and serve to make application services more available. Service meshes are often used as a way to address behavioural coupling within our system. Behavioural coupling is when one service relies on the behaviour of another service; it becomes problematic when the called service decides that it actually needs to split itself down into a number of smaller services. Upstream services that previously had a dependence on only one service now need to depend on multiple other services, and this change needs to be communicated in advance.
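In code terms, the shift Ian describes is roughly the difference between the two methods below - a hypothetical sketch of my own (IMessageBus and the URL are invented for illustration), not code from the talk:

using System;
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;

public record ReservationCreated(Guid ReservationId, string RoomNumber);

public interface IMessageBus
{
    Task PublishAsync<T>(T message);
}

public class BookingService
{
    private readonly HttpClient http;
    private readonly IMessageBus bus;

    public BookingService(HttpClient http, IMessageBus bus) =>
        (this.http, this.bus) = (http, bus);

    // Request-driven: temporally coupled - housekeeping must be up right now.
    public Task NotifyDirectly(ReservationCreated evt) =>
        http.PostAsJsonAsync("https://housekeeping.example/api/reservations", evt);

    // Event-driven: publish to the bus and move on; consumers catch up
    // whenever they're able.
    public Task NotifyViaBus(ReservationCreated evt) =>
        bus.PublishAsync(evt);
}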
We move on to look at reference data. This is data that may be required by one service in order to fulfil some specific request, but where that data is not owned by the service - for example, a payment service that requires some customer information, owned by a customer service, in order to process a payment. The payment service can't simply request data from the customer service at the time it needs it, as we're then back to a Request-Driven architecture and we've re-introduced temporal coupling into our system. There are two options to address this. The first is to ensure that all messages that the payment service receives contain the requisite customer data required to complete the payment process. The second is to make all required customer data available to the payment service in advance. This is often done using something known as ECST, or Event-Carried State Transfer. There are also ATOM feeds that can be published and cached by recipient services. In discussing reference data, Ian talks about the concept of "data on the inside versus data on the outside". This stems from an academic paper by Pat Helland and essentially states that certain data is private to the inside of the microservice, and so anything outside should not take any kind of dependency on it as it's subject to change. Data on the outside is the events and messages published by the microservice, and it's fine to depend upon that as it's likely to remain stable for a long time.
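An Event-Carried State Transfer message might look something like the following sketch (my own hypothetical names): the customer service publishes the event whenever a customer changes, and the payment service caches the copy it needs so it never has to call back synchronously:

using System;
using System.Collections.Generic;

// Published by the customer service on every change; carries enough
// state for downstream services to act on without a synchronous call.
public record CustomerUpdated(
    Guid CustomerId,
    string Name,
    string BillingAddress,
    DateTime OccurredAtUtc);

// Inside the payment service: a local, eventually-consistent copy.
public class CustomerCache
{
    private readonly Dictionary<Guid, CustomerUpdated> latest = new();

    public void Apply(CustomerUpdated evt) => latest[evt.CustomerId] = evt; // last write wins

    public CustomerUpdated? Get(Guid id) =>
        latest.TryGetValue(id, out var c) ? c : null;
}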
We look back at our monolith and note that, once we've extracted some services from it, we can add events to the remaining monolith code in order to communicate with our extracted services. Finally, we look at the concept of client-side composition. This is the notion that our client - perhaps a mobile application or a web application - can compose together multiple calls to the service APIs, collating the returned data into some usable set. This approach is not so good for mobile clients, as they can often be "occasionally connected", so it's far better to use server-side composition with an API gateway. This means the client application makes only a single call to the API gateway, which then performs the necessary individual API calls to the various services, composes the returned data and returns a single response to the client application.
After Ian's session was over, it was time for a final refreshment break before the last session of the day. This one was to be Neil O'Connor's CTO Secrets: How to get the best companies fighting to hire you.
Neil first introduces himself. He's the CTO of a company called Koodoo. He says that his talk will not help you get a job that you're not qualified for, but that he intends to give you some tips about how best to structure your CV, how to ace interviews and more. Neil says that he has first hand knowledge of the interview process, as despite being a CTO, he still insists on performing the first interview for all technical hires at his company.
We look at the title of the talk, and Neil ponders what it means for the "best" companies to be fighting to hire you. He says that it's not necessarily about the highest paying companies or the biggest brand names, but rather companies that place real value on the software development process and thus, in turn, place significant value on software developers themselves. Unsurprisingly, we find that most of these companies are themselves software companies, as they tend to value great software engineering the most. It's often been said that the closer your role is to how a company actually makes money, the more highly you're likely to be valued.
Neil states that software, as a profession, has come a very long way in a short space of time. It's an incredibly important profession today as software is everywhere, from our computers and mobile phones right through to autonomous vehicles, household appliances and much more! As a result, it's important for those of us working within the profession to take pride in being professional.
Neil says it's important to keep your skills sharp. You can do this by having what he calls a "side hustle". This could be a side project or an open source project that you contribute to, but it doesn't have to be code. It can be any related activity, such as public speaking, mentoring, organising meetups etc., but it must be something that you're really invested in. It's also important to be a part of your local community, both the physical community in the real world - such as attending local meetups - as well as the online community for your chosen platform or language. Don't just consume open source software or Stack Overflow answers; always seek to give back to the same communities that you benefit from. Keep learning and continue to practice your core skills. Make technical reading a regular thing and practice skills by contributing to open source and on practice websites such as CodeWars and Exercism.
It's important to understand what Neil calls "adjacent practices". Coding is important, but it's far better to have a well-rounded understanding of what it takes to create a well-engineered solution, along with the understanding that well-engineered solutions involve far more than just code. We should also seek, and give, help at every opportunity. Find someone you respect and ask for their advice. Find ways to give advice to others. Also, you should ensure that you develop a point of view. This means having a reasonable and informed opinion on a variety of different topics. Is functional programming the future? Is Blockchain the future? Microservices or monolith?
Neil moves on to discuss getting an interview. When reading CVs, people will make snap judgements. The best companies rarely advertise in public, as they don't need the additional work of dealing with an excessive number of CVs from unqualified candidates, so they mostly hire from closed communities and especially from staff referrals. This is all the more reason to join your local community and network with your peers. Neil states that, for him, recruitment agencies are not the enemy, and we should treat interactions with the recruitment agency almost as a first interview. Your CV is your shop window. Remember that, many times, less is more here. The best feedback usually goes to the candidates with the shortest CVs - ideally CVs that are succinct and contain all required information on a single page. Think about how you might "humanise" your CV. This means occasionally using slightly less formal words to describe your experiences, giving a more unique and human touch. Remember that your CV won't win you the job, but it might lose it for you. When documenting experience in various roles, ensure you focus on your individual contribution and showcase your skills. Neil says that, despite some advice to the contrary, it's fine to add your photo to your CV. Finally, be imaginative in your CV's presentation. Neil recalls how one CV from a candidate was laid out like a Twitter conversation, which made that CV both unique and memorable.
Neil talks about how he reads a CV and says that he first considers the overall impression. Is the CV correct (e.g. no spelling mistakes) and presented in a tidy manner? He considers the candidate's educational depth and, if it's lacking, whether there is sufficient job history and other experience to replace it. Importantly, for the job history, Neil looks to see how you have improved within each role. This could be your own personal self-improvement, or improvement of the role or the company itself. He says that, mostly, he's looking for signs that you're motivated by the right things - a desire to learn, help and deliver value, rather than being purely motivated by money, power or status. He says that even non-work related experience can be a huge benefit to your CV. If you've automated your toaster with a Raspberry Pi, that's very interesting! Feel free to add such information to your CV, but always make sure that your side-projects do indeed back up your claims.
Neil moves on to talk about how you can really "ace" the interview once you've got it. Firstly, it's important to do your research on the company - at least to have a basic understanding of what the company does and where it fits within its industry. With regard to the actual interview questions themselves, Neil says that there's a good likelihood you'll be confronted with behavioural style questions. These are questions that ask you about specific situations from past employment and try to gauge how you behaved in those scenarios. You can turn such behavioural questions to your advantage, however, as they provide you with a perfect opportunity to focus on your achievements and give you the chance to talk about the things that you want to talk about during the interview. Make sure you know your own CV, and be sure to make some notes on your last few projects. The act of actually writing this down forces you to structure your thoughts and will help you provide good answers should questions about those projects come up. One important point that Neil stresses is to ensure that you can draw, on a whiteboard, the high-level architecture of your last project within approximately 60 seconds. He says that many candidates simply cannot do this, so being able to perform this task will automatically put you ahead of others. Also think about things you'd have done differently on the project. All of this shows you can think about and understand the bigger picture, not just the detail.
If you're interviewing for a senior position, focus on how you've mentored more junior team members and examine different perspectives on the key issues. Also, beware of "booby trap" questions. Questions on subjects such as technical debt can be a very thorny issue due to highly subjective opinions on such a topic. Finally, Neil mentions technical tests. They're frequently a big part of interviews these days. If you're expected to sit an on-site test, always ask what "good" or "success" looks like to the interviewer for the test. This is important so that you can focus on the things that matter to the interviewer as you're likely to be under significant time pressure. If you're asked to take a test in your own time, perhaps over a weekend, make sure you take your time and ensure that the code you submit shows a level of care and appreciation for what might be considered good, production-quality code. If you're asked to sit an "observed" test, where the interviewer will literally stand next to you as you type, explain your reasoning as you write the code and use the opportunity to talk about alternative approaches that could be taken, even if you adopt a different approach with the code. And if you're ever performing a "whiteboard" test and the interviewer asks you if you're happy with your attempt, it usually means that the interviewer is not happy with it!
After Neil's session, it was time for the final wrap-up of the day. All the attendees headed back to the main hall for the final few words from the organisers.
The sponsors were thanked again - after all, without the sponsors there would be no event - and the attendees were also thanked for sticking around and getting rather damp during the two unexpected fire alarms. After everyone was thanked, it was time for some prize draws. There had been a prize running throughout the day for the best sketch notes posted to Twitter, and also some JetBrains licenses awarded to people who posted images from the conference with specific hashtags. When the winners of the JetBrains licences were announced, I was shocked to see that my name was up there on the slide! So many thanks to JetBrains and DDD East Midlands for allowing me to renew my yearly ReSharper Ultimate subscription at no cost! Awesome!
After the prize draws, it was time to wrap up. The first DDD East Midlands had been an amazing event, despite the back-to-back fire alarms, which were both entirely unpredictable and impossible to prepare for in advance. The organisers did a fantastic job introducing what looks to be one of the best new events in the DDD roster, and also handled those pesky fire alarms like champions! DDD East Midlands will be back again around the same time in 2020 and I, for one, can't wait!