DDD Southwest 2019 In Review
This past Saturday, 27th April 2019, the 9th annual DDD South West conference took place. This was my 4th DDD South West conference overall and was another fantastic DDD event.
As usual, with the conference being in Bristol, I'd travelled down from the north west on the Friday evening and stayed over in a local hotel before travelling the short distance to the conference venue on the Saturday morning. Having slightly overslept, I didn't have time to source breakfast, but was more than happy when arriving at the conference to find the usual selection of pastries and coffee for the arriving attendees.
After consuming a few cups of coffee and enjoying a couple of Danish pastries for breakfast, it was soon time for the attendees to gather around the main projector screen in the communal area for the introduction to the day's proceedings. After the brief introductions were over, it was time to head off to the first session of the day. This was to be Ben Arroyo's Nailing down distributed workflows with microservices.
Ben starts by introducing himself and says that for the past 9 years he's been an independent consultant. He says his talk will primarily be about the architecture and use in practice of microservices, not the basics of creating microservices themselves. The agenda for the talk will cover workflow concepts and patterns including the use of Domain-Driven Design (DDD), will move on to designing the workflow architecture, and we'll then build an implementation and perform a release of the complete workflow system.
We start by asking "What is a workflow?" It's an "orchestrated and repeatable pattern of business activity" and it's also a "representation of real work" within the business. We then ask "What is a distributed system?" It's a set of networked computers that communicate and coordinate by passing messages to achieve a common goal. We talk specifically about microservice architectures and how they're a specialisation of Service-Oriented Architecture (SOA). Ben says the microservice architecture that we'll define will follow Domain-Driven Design patterns and will utilise domain events, bounded contexts and the passing of commands.
We'll also utilise the Saga pattern, which is a long-running transaction, usually consisting of multiple stages, which avoids locks but which lacks the ability to roll back to a prior state in the event of errors. Such errors must be handled within compensating transactions. Sagas must also be coordinated, which can be done with choreography, where there's no central coordination, or with orchestration, where a dedicated service is in charge of sequencing the business logic. Choreography is useful for simple sagas with few steps, but more complicated sagas with more steps should use an orchestrator. Orchestrators do represent a single point of failure, though.
Ben talks about the business domain that we'll be modelling to create our microservice-based workflow. It's called "Chicken Power" and is an energy provider specialising in renewable energy. We buy energy from independent generators and re-sell it to business customers. Chicken Power's sales workflow is as follows. First, we register a new client in CRM, then we generate a draft contract with our proposal and terms and conditions, then we calculate a price for the energy for that customer who then either accepts or rejects our offer. If the offer is accepted, the contract is registered in Chicken Power's portfolio. Ben then shows us the architecture diagram that represents our workflow:
We talk about our tech stack for the implementation. We're using C# and .NET Core for our application code. For our messaging channel, we'll use MassTransit as a service bus on top of RabbitMQ as a message queue. We'll also use MassTransit.Automatonymous as a state machine library which will implement the stages of our Saga. We'll also use the Quartz.NET library for scheduling of messages, along with other libraries including Autofac for dependency injection and Serilog and Seq for logging, with the entire application containerized in Docker containers.
Ben now opens up Visual Studio to show us the code of the solution. All of the code is available in Ben's repository on GitHub, where it's called SheepPower rather than ChickenPower. Ben explains the naming anomaly as being due to his colleague who originally named the project SheepPower as he's from Wales.
Although we're using a single Visual Studio solution to hold multiple projects, each one representing a microservice, this is something that you wouldn't do in the real world. Each service would be an entirely separate solution. Ben explains this is only being used for simplicity for demo purposes.
We examine the various projects in the solution. We have a Common assembly that contains logging infrastructure, and a Messaging assembly which contains the MassTransit infrastructure and also the interfaces for our commands and events. There are Persistence and StateSaga assemblies that deal with the implementation of the Saga, with the StateSaga assembly containing the concrete implementations of the commands and events as well as the actual implementation of the state machine itself that deals with transitions between the states, or stages, of the saga. There's a BackendForFrontend project that's an MVC project which implements our API that faces the outside world, and finally there are PricingProxy and ContractGeneratorProxy projects which are the consumers, or handlers, for the relevant commands generated within the domain.
Finally, there are a number of other configuration files, which include Dockerfiles for each of the projects and a docker-compose file allowing the entire solution to be composed and spun up as 6 different containers (4 for the project itself, 1 for RabbitMQ and 1 for Seq logging).
Ben runs docker-compose up and gets his application running in the various containers on his local machine. We use Postman to invoke the HTTP API endpoint and watch the Seq logging via its web interface as the various parts of the workflow produce their logs, showing how the messages are flowing through the system.
Ben highlights the Price Calculation step of the workflow. This is started by hitting the price request API endpoint. This starts the pricing service to calculate the price, but also schedules a future event - to fire after 20 seconds - that will expire the calculated price if a further HTTP API request that either accepts or rejects the price quote is not received. If the price expired event fires, we see how the state machine resets the workflow back to the Proposal Created state (i.e. immediately prior to the price being generated).
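To make that concrete, the sketch below shows roughly what such a state machine looks like with MassTransit.Automatonymous. It's my own minimal illustration rather than the actual SheepPower code (the state, event and type names are invented), and it only covers the state transitions, not the Quartz.NET scheduling of the expiry message:

```csharp
using System;
using Automatonymous;
using MassTransit;

// Hypothetical saga instance; the names here are illustrative only.
public class ProposalState : SagaStateMachineInstance
{
    public Guid CorrelationId { get; set; }
    public string CurrentState { get; set; }
}

// Hypothetical domain events, auto-correlated via the saga's CorrelationId.
public interface ContractDrafted : CorrelatedBy<Guid> { }
public interface PriceCalculated : CorrelatedBy<Guid> { }
public interface PriceExpired : CorrelatedBy<Guid> { }

public class SalesStateMachine : MassTransitStateMachine<ProposalState>
{
    public State ProposalCreated { get; private set; }
    public State PriceOffered { get; private set; }

    public Event<ContractDrafted> OnContractDrafted { get; private set; }
    public Event<PriceCalculated> OnPriceCalculated { get; private set; }
    public Event<PriceExpired> OnPriceExpired { get; private set; }

    public SalesStateMachine()
    {
        // Store the current state as a string on the saga instance.
        InstanceState(x => x.CurrentState);

        Initially(
            When(OnContractDrafted)
                .TransitionTo(ProposalCreated));

        During(ProposalCreated,
            When(OnPriceCalculated)
                .TransitionTo(PriceOffered));

        // If the scheduled expiry fires before the customer accepts or rejects
        // the quote, drop back to the state the saga was in before pricing.
        During(PriceOffered,
            When(OnPriceExpired)
                .TransitionTo(ProposalCreated));
    }
}
```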
Since this was such a code and demo heavy session, Ben suggests downloading the code from his GitHub repository to explore it further.
After Ben's session was over, it was time to gather back in the main communal area for a quick coffee break, and after some liquid refreshment it was time to head back to the various lecture rooms for the next session. For me, this was Steve Gordon's Turbocharged: Writing High Performance .NET code.
Steve starts his talk by examining aspects of performance. There's Execution Time, Throughput and also Memory Allocation. We also look at how performance is always contextual and, whilst we're reminded of Donald Knuth's famous quotation that "premature optimisation is the root of all evil", if we start out building an application that we absolutely know will require very high performance as one of its required features, it's beneficial to consider optimising for performance right at the beginning.
We move on to examine the optimisation cycle. This starts with measuring and, for this, you can use a tool such as Benchmark.NET. Once measurements are acquired, we can then start to optimise. It's important to keep optimisations small and re-measure after each code change to prove the optimisation has worked; large changes can obfuscate the perceived improvement. We can also use the Visual Studio Diagnostics tools during debugging, along with the Visual Studio Profiling tools, to capture such measurements. Another good tool is PerfView, which is powerful but can be somewhat cumbersome to use. There's also the JetBrains tools: dotTrace, dotMemory and dotPeek.
Steve talks specifically about Benchmark.NET, as it's a very good tool allowing for scientific measurements of code performance, and it's also open source. Benchmark.NET can profile and measure both entire methods as well as individual lines of code. It can compare benchmarks across architectures (i.e. 32 or 64 bit), across different .NET frameworks and even across various stages of JIT compilation, as well as being able to show heuristics around potential garbage collections at each generation. Benchmark.NET also contains a handy .Consume extension method that can be added to LINQ expressions to ensure that lazily-evaluated expressions aren't just created but are actually invoked when testing the performance of LINQ expressions.
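To give a flavour of what a Benchmark.NET benchmark looks like, here's a minimal sketch of my own (not Steve's code) comparing two ways of joining strings, with memory diagnostics enabled:

```csharp
using System.Linq;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

[MemoryDiagnoser] // also report allocated bytes and GC counts per operation
public class StringJoinBenchmarks
{
    private readonly string[] _parts =
        Enumerable.Range(0, 100).Select(i => i.ToString()).ToArray();

    [Benchmark(Baseline = true)]
    public string Concatenation()
    {
        var result = string.Empty;
        foreach (var part in _parts)
            result += part; // allocates a new string on every iteration
        return result;
    }

    [Benchmark]
    public string StringJoin() => string.Join(string.Empty, _parts);
}

public static class Program
{
    // The runner measures each [Benchmark] method and compares them.
    public static void Main() => BenchmarkRunner.Run<StringJoinBenchmarks>();
}
```

Returning the result from each benchmark method stops the JIT from eliminating the work as dead code.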
Next, we start to look at some specific changes we can make to our code using some of the new features in C# 7.2, specifically the Span<T> and ReadOnlySpan<T> types. Span<T>'s most common operation is .Slice, which returns a subset of the underlying memory and is most commonly used with arrays. It's a stack-only type, so no heap allocations occur, and slicing a span is an O(1) operation, irrespective of slice size. Steve suggests that using ReadOnlySpan<T> over strings is a very good, performant alternative to using .Substring on the string type, as that always allocates and returns a new string whereas ReadOnlySpan<T> does not. We look at a big caveat of using Span<T>, and that's the fact that when declared in the scope of a method, it can't be used as a return value from the method to the calling code. This is due to it being allocated on the stack, not the heap, and the stack frame containing the Span<T> instance goes out of scope when the method returns. One way around this is to use another new data type, created to address this shortcoming: the Memory<T> data type, which is almost the same as the Span<T> type but is allocated on the heap rather than the stack. For this reason, it's slightly slower than Span<T>, but it does allow a method-scoped variable declared as type Memory<T> to be returned from a method. If you want to regain some of the performance benefits, then Memory<T> can be easily converted to a Span<T>.
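As a minimal illustration of the Substring versus ReadOnlySpan<char> point (my own example, not Steve's code):

```csharp
using System;

public static class DateParser
{
    // Substring-based version: each Substring call allocates a new string on the heap.
    public static int GetYearWithSubstring(string isoDate) =>
        int.Parse(isoDate.Substring(0, 4));

    // Span-based version: AsSpan/Slice just create a stack-only view over the
    // existing string data, so no intermediate string is allocated.
    public static int GetYearWithSpan(string isoDate) =>
        int.Parse(isoDate.AsSpan().Slice(0, 4));
}

// Both return 2019 for "2019-04-27", but only the first allocates a new string.
```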
Steve shows us some sample code that starts off not using any of the new Span<T> or Memory<T> data types and shows us that code as he evolves it with performance optimisations over time. The code creates a string filename based upon specific business requirements. From working directly with an array to changing to use Span<T>, he saw massive improvements. This is taken from real-world code used by Steve in his day job, and this specific piece of code processes 18 million messages per day; over this load, moving to Span<T> saved 17GB of memory allocations and over 2700 garbage collections.
Next we look at the ArrayPool type. This allows creating a "pool" of arrays for re-use, similar to how databases will pool their connections for reuse. This allows "renting" some array space, which must be "returned" to the pool after use. We examine how this technique is actually less performant than simply declaring a new array for small array sizes (i.e. 20 elements), but for large arrays (i.e. 10,000+ elements) renting and returning the array with ArrayPool is much more performant.
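A minimal sketch of the rent/return pattern (my own illustration, with an arbitrary 10,000-byte buffer size):

```csharp
using System;
using System.Buffers;

public static class PooledBufferExample
{
    public static int SumBytes(Func<byte[], int> fillBuffer)
    {
        // Rent at least 10,000 bytes from the shared pool; the array returned
        // may be larger than requested.
        byte[] buffer = ArrayPool<byte>.Shared.Rent(10_000);
        try
        {
            int bytesWritten = fillBuffer(buffer);
            int sum = 0;
            for (int i = 0; i < bytesWritten; i++)
                sum += buffer[i];
            return sum;
        }
        finally
        {
            // Always return the array, otherwise the pool slowly drains.
            ArrayPool<byte>.Shared.Return(buffer);
        }
    }
}
```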
Next, we look at System.IO.Pipelines. This provides a PipeWriter and a PipeReader to write and read over an input/output stream, and Steve explains that it's around twice as performant as using streams. We look at a quick demo of reading a CSV file from an AWS S3 bucket, decompressing it and then parsing it. The first attempt reads the entire file into memory in order to perform the decompression and parsing. Then we take a look at the optimised code, which uses pipelines to parse the CSV file without loading it entirely into memory. The benchmark results from Benchmark.NET are initially strange, showing far higher memory allocation values than expected but also showing improved GC results, and Steve ponders why this could be. We learn that this is because, when using Benchmark.NET with .NET Core, it can't accurately determine allocated memory across all threads, for example in the case of benchmarking lots of async/await code. Steve suggests that to get a more accurate picture of memory allocation across threads, use a different tool such as JetBrains dotMemory.
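The consuming side of a pipeline generally looks something like the sketch below. This is my own simplified example of the PipeReader pattern rather than the S3/CSV demo code:

```csharp
using System;
using System.Buffers;
using System.IO.Pipelines;
using System.Threading.Tasks;

public static class PipelineReaderExample
{
    public static async Task CountBytesAsync(PipeReader reader)
    {
        long total = 0;
        while (true)
        {
            // Wait for data written by the producer at the PipeWriter end.
            ReadResult result = await reader.ReadAsync();
            ReadOnlySequence<byte> buffer = result.Buffer;

            total += buffer.Length;

            // Tell the pipe we've consumed everything we were given,
            // so the underlying buffers can be reused without copying.
            reader.AdvanceTo(buffer.End);

            if (result.IsCompleted)
                break;
        }

        reader.Complete();
        Console.WriteLine($"Read {total} bytes.");
    }
}
```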
We move on to look at the new JSON APIs coming in .NET Core 3.0. There's Utf8JsonReader and Utf8JsonWriter, which are low-level, there's JsonDocument, which is a mid-level API, and finally there's the higher-level JsonSerializer API, which handles both serialization and deserialization. They're all in the System.Text.Json namespace. The lower-level APIs require lots of boilerplate code if you're going to work directly with them, but they offer the most flexibility and power. Conversely, the high-level API will handle the common scenario of serializing and deserializing types directly to and from JSON. Whilst these new JSON APIs do directly compete with JSON.NET, they're highly optimised to use newer constructs under the hood and thus offer improved performance over JSON.NET, which is optimised for the general use case and uses older memory-related code under the hood.
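For the common case, the high-level API ends up looking something like the sketch below (my own example; some method names moved around during the .NET Core 3.0 previews, and this is the shape that eventually shipped):

```csharp
using System.Text.Json;

public class Order
{
    public int Id { get; set; }
    public string Customer { get; set; }
}

public static class JsonExample
{
    public static void Roundtrip()
    {
        var order = new Order { Id = 42, Customer = "Chicken Power" };

        // Serialize a POCO to a JSON string.
        string json = JsonSerializer.Serialize(order);

        // Deserialize back to a strongly-typed object.
        Order roundTripped = JsonSerializer.Deserialize<Order>(json);
    }
}
```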
Finally, Steve shares some benchmarks and statistics for his own code that he's improved and optimised for his employer. A single microservice within the business processes 18 million messages per day. Using the above techniques, they saved 50% in memory allocations. This meant that they could run fewer instances of the microservice, which in turn meant they required one fewer virtual machine to process the same number of messages. That one virtual machine saved the company approximately $1700 per year, and if that saving were multiplied across many other microservices, it's easy to see how such reasonably small code optimisations can add up to a large yearly financial saving. Steve shares a reference to a book he's been reading recently, Pro .NET Memory Management by Konrad Kokosa, and says it's a great book if you're serious about improving the performance of your .NET code.
After Steve's talk, it was time for another coffee break in the communal room. This one was accompanied by some rather tasty yoghurt fruit cereal bars and other confectionery.
After the coffee break, it was time to head back to the lecture theatres for the next session. This one was Joel Hammond-Turner's You're The Tech Lead, You Fix It.
Joel starts his talk by stating that this is his third talk relating to the life of a tech lead. He's a tech lead himself at a company called Landmark and he works on the Landmark Valuation Hub product, which processes almost every mortgage application in the UK. Joel says the product is large and complex and that over its lifetime it's had anything up to six different teams working on it, some of them offshore.
Joel talks about Product Managers. Most of the time they are good at batting away unreasonable demands from customers, but occasionally they can be tempted to say "So here's your PBIs (Product Backlog Items) for this sprint" to the development team. It's an easy trap to fall into, and it can then be up to the tech lead to address this. If unaddressed, this can lead to work being scoped before the development team has estimated it, scope-driven delivery and PM-driven breakdowns, all of which are bad. The key to addressing this is to break PBIs down into their constituent tasks to show the Product Manager what's actually involved. This must always be done by the tech lead along with the rest of the team members, and should ideally be done before the sprint that will contain the work being broken down. Each PBI must be a deployable, testable and able-to-be-integrated unit of work; this is the Landmark "definition of done". Integration can be quite complex, so it must be explicitly broken down as a task and included in the PBI.
Joel talks about when Product Managers (PMs) or Business Analysts (BAs) ask the question, "So, how long is it really going to take?" The answer to this comes down to your estimates. How good are they really? They must include accurate estimations for development, testing, integration and deployment. Estimations only improve in accuracy with many iterations of work and the experience gained therein. It's important to understand that estimations can only be accurate for "known knowns". There are also "known unknowns", which can be things such as knowing that your work must be integrated with work from another team but being unable to know exactly how long that will take. And there are "unknown unknowns", such as major system outages, recovery from which can't possibly be estimated, but it's important to appreciate that these can easily occur.
Joel ponders another question that can come from the PM's or BA's. "Why bother about this 'Technical Debt' then?" If you're the tech lead, technical debt is your responsibility alone. You must factor in time to address technical debt when it makes sense. Addressing it is about balancing YAGNI (You ain't gonna need it) against cohesion of the debt-laden code. Even if not addressing the technical debt immediately, it should be added to the product backlog immediately to ensure it's not forgotten. Also be wary of only doing "just enough" to address some immediate debt as sometimes a bigger refactor is required and should be undertaken to prevent further accrual of more debt.
We then look at what to do when issues arise in production. As a tech lead, you can't be defensive and say "it can't be our fault"; you must perform a thorough diagnosis of the problem before you can even know whether it's your fault or not, and then you can work on a fix if it is. Logging is often all you have for diagnosis, so log everywhere. Always ensure you include a correlation ID in log files so that the logs for a specific process can be unpicked from an interleaved raw log file. When problems are your fault, legacy code can be a minefield. Beware of custom implementations of framework features, as these are a big red flag for potential problems. Also avoid hard-coded paths and hard-coded configuration, as these are the source of many issues.
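As an aside on the correlation ID point, the sketch below shows one way of doing this with Serilog. It's my own illustration, not something Joel showed, and it assumes the logger has been configured with Enrich.FromLogContext():

```csharp
using System;
using Serilog;
using Serilog.Context;

public static class CorrelationExample
{
    public static void ProcessValuation(Guid correlationId)
    {
        // Every log event written inside this scope carries the same CorrelationId
        // property, so an interleaved log file can be filtered back to one process.
        using (LogContext.PushProperty("CorrelationId", correlationId))
        {
            Log.Information("Starting valuation");
            // ... do the work ...
            Log.Information("Valuation complete");
        }
    }
}
```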
Next, we talk about teams and offshored developers. It's important to be careful of "silos", whereby physically remote teams or team members lose touch and communication with other teams or team members, which can easily occur. To prevent this, one of the tech leads (as there may actually be more than one if multiple teams are collaborating) should dictate the approach to development. This can often be unpleasant, but it's very often necessary to ensure effective collaboration and communication between teams and members. As a tech lead, you should set the standards for the team. This includes enforcing consistency on code via coding standards. Create checklists for developer behaviours, code reviews and pull requests; this consistency leads to resiliency. There's a lot of good tooling that can automate and enforce such consistency, such as StyleCop, FxCop, SonarQube and Snyk, which can also track dependencies and vulnerabilities. Also ensure you use code coverage tools to enforce the inclusion of unit tests for new code. Finally, as a tech lead of a team, communication is essential. Have frequent meetings for code standards and for examining ways of improving processes and approaches to code.
After Joel's talk was over, it was time for lunch. Lunch at DDD Southwest is always quite special as it's usually a nice hot pasty. This year was no different and there was the usual choice of traditional Cornish pasty or cheese and onion, along with some tasty nacho crisps and some fruit.
The lunch break this year was slightly shorter than last year's, which meant that there was slightly less time to eat our lunch before the grok talks started. Unfortunately, I missed the grok talks this year. The weather had not been the kindest to us, but there was a brief respite from the rain over the lunch break so I decided to take a quick walk outside for some fresh air.
After a brief walk, I returned to the venue, quickly grabbed myself a cup of coffee and headed back upstairs to the lecture rooms to find the room for the first of the afternoon's talks. This one was to be Joseph Woodward's Improving System Resiliency via Chaos Engineering.
Joe says that the aim of his talk is to convince us all that failure is normal. We must build our systems for failure and understand that resiliency is required and goes far deeper than just implementing such things as circuit breaker patterns, for example. Joe works at Just Eat and he's been there for approximately 3 years. Given Just Eat's scale, resiliency is critical for them.
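For reference, the kind of "baseline" resiliency mechanism Joe alludes to looks something like the sketch below: a circuit breaker built with the Polly library. This is my own illustration and Polly wasn't part of the talk; Joe's point is that real resiliency has to go well beyond per-call policies like this.

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;
using Polly;

public static class ResiliencyExample
{
    // Open the circuit after 3 consecutive failures, and keep it open for
    // 30 seconds before allowing a trial call through again.
    private static readonly IAsyncPolicy _breaker =
        Policy
            .Handle<HttpRequestException>()
            .CircuitBreakerAsync(3, TimeSpan.FromSeconds(30));

    public static Task<string> GetMenuAsync(HttpClient client) =>
        // While the circuit is open, calls fail fast rather than waiting on a
        // downstream service that is already known to be unhealthy.
        _breaker.ExecuteAsync(() => client.GetStringAsync("https://api.example.com/menu"));
}
```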
Joe starts by looking at some history. We originally had very monolithic applications but, with the advent and growth of the cloud, compute power got cheaper and it became ever easier to move towards a more microservice-based architecture and away from the monoliths. As a system grows in size, it grows in complexity, and whilst microservices help to scale our applications, the move to a microservices architecture doesn't come for free. Microservice woes include architectural and operational complexity, data consistency, CAP theorem trade-offs and the mental models of the application as a whole. Joe suggests reading about the Eight Fallacies of Distributed Computing, coined by L. Peter Deutsch, which give us a good indication of the kind of problems we can encounter in a distributed system:
- The network is reliable
- Latency is zero
- Bandwidth is infinite
- The network is secure
- Topology doesn't change
- There is one administrator
- Transport cost is zero
- The network is homogeneous
Joe continues by looking at a paper called "How Complex Systems Fail", written by Dr. Richard Cook. Although originally written with reference to medical systems, it's become increasingly relevant for software engineering and software systems. Failure is the normal state of our system, and once we acknowledge that failure is normal and is the default state, we start to build systems very differently. It's important to understand that by "system" here, we refer to the complete system within a domain, and an intrinsic part of that "system" is human beings. Humans can be a part of, or even create, the failures, and that is something that must also be addressed when considering system resiliency. Joe references the book "The Field Guide to Understanding 'Human Error'" by Sidney Dekker. The book states that there's really no such thing as human error and that human error is something we ascribe to rational behaviour after the fact. In our own world of software engineering, no developer wants to break production, so when failures occur it's important to appreciate the conditions and factors that led to behaviour that might be termed "human error" but was actually entirely rational for the person performing it.
We examine the concept of resiliency further. The first step is robustness, then adaptive capacity, and after this comes a system's ability to gracefully rebound from failures and unexpected states. Joe mentions the Chaos Monkey and the Simian Army. These are products written by Netflix to help test their infrastructure by purposefully inducing failure. By testing their systems with failures, Netflix are able to ensure that resiliency is built into their systems right from the beginning and that all of their development follows the principles of chaos engineering. O'Reilly has a free e-book called "Chaos Engineering". The author states that the goal of chaos engineering isn't simply to find vulnerabilities, it's about being able to deal with the vulnerabilities that will inevitably occur. Ultimately, it's about testing proactively instead of waiting for outages to occur.
So, why would we break things on purpose? Like a medical vaccine, we inject a small amount of "harm" in order to build future immunity. We identify the most vulnerable areas of our systems and can improve their resiliency. When running Chaos Engineering experiments, monitoring and the ability to observe the exact consequences of actions is essential. You need to tightly control the "blast radius" of the experiment to limit the possible damage that can be done and always ensure you have a comprehensive roll-back plan to correct the effects of the experiment. A major goal of a chaos engineering experiment is to prepare humans for failure. One such approach to this is the concept of "Game Days". Game Days are entire days where all teams in an organisation come together, one team turns something off, but doesn't tell the other teams, then the objective is to see what happens and how the other teams react. It's effectively like a fire alarm test but for multiple software teams. With an actual fire alarm, we're conditioned to know how to react and to leave the building in a safe manner. The objective of a Game Day test is to instil that same sense of intrinsic understanding of what is required to be done in certain failure scenarios.
It's also very important to understand just how reliable we need to be. Reliability comes at a cost, and there's a point of diminishing returns given the financial investment required. It's unlikely that most of our systems need five or six nines of uptime, so a comprehensive cost-benefit analysis is very important. Time to recover from failure is far more important than time between failures: it's fine for failures to occur frequently, but being able to respond quickly to them matters more than minimising how often they happen. Being able to respond to failure quickly and efficiently is proactive rather than reactive.
Finally, Joe states that virtually all of the big names in software are performing chaos engineering, such as Netflix, Amazon, Dropbox, Slack, Shopify and many others. It's only through the practice of chaos engineering that such companies can keep their software running smoothly and efficiently most of the time.
Joe shares some useful resources and links, such as a talk from Richard Cook, "How Complex Systems Fail", a talk by Adrian Cockcroft, "Dynamic Non-Events", a blog post entitled "Why do things go right?" and a GitHub awesome list for further Chaos Engineering resources.
After Joe's talk, it was time for the final coffee break of the day. Like lunch times at DDD Southwest, the afternoon coffee breaks are usually rather special and this year was no different. It was the return of the very popular cream teas.
Luckily, my previous session had ended a little early and so I found myself one of the first in the queue for the cream teas. I quickly helped myself to a lovely scone and headed over to the bench with the coffee to grab yet another cup to keep me going through the final session of the afternoon. The other conference attendees soon arrived after their own sessions had ended and made a beeline for the table with the delicious cream teas.
After finishing our cream teas, it was time for the final session of the day. We slowly made our way back to the lecture rooms from the communal area. Mine was to be Mark Rendle's DiagnosticSourcery - Diagnostics, Metrics & Tracing in .NET.
Mark starts his talk by asking why diagnostics and metrics are so important. Well, you can't improve what you don't measure. Mark looks at metrics vs. logs and we see that these are not the same thing. Logs are simply records of something that has happened and can be queried with simple text-based searches. Metrics are hard numbers that can have arithmetic applied to them to capture specific measurements from specific parts of the application.
So, what should we measure? Well, everything, but make sure that in implementing the capturing of such metrics, they can be controlled via configuration to turn them off. This is because the act of measuring itself has a small performance impact. Number of events, throughput, code execution timings, number of errors along with system data (CPU, memory, network utilisation etc.) and database or HTTP request timings are all very useful values to measure and to apply metrics to. Other great metrics to capture are start-up times (for native/desktop applications) and feature usage to see if users are actually using parts of your application.
Next we look at how to measure. Mark first mentions OpenAPM.io, a website that shows you which open source Application Performance Management (APM) tools are suitable for your tech stack: it allows specifying a platform and language and shows the available APM tools for that platform. We then look at what's available to us within .NET. ASP.NET Core has Microsoft.Extensions.Logging, which includes an ILogger interface. This is a common abstraction for many logging frameworks; for example, NLog, Serilog and many others all have sinks for the ILogger interface. There's also DiagnosticSource, which is a common abstraction for diagnostics.
Mark suggests adding the following to each type within a given program in order to implement diagnostics for the type:
private static readonly DiagnosticSource _diagnostics = new DiagnosticListener(typeof(Program).FullName);
Here, we're declaring a DiagnosticSource and assigning a new DiagnosticListener to it. DiagnosticSource is an abstract class and DiagnosticListener is the single concrete implementation of it within the framework. DiagnosticSource also includes something called Activities. These allow tracking an activity over a number of steps in a process (i.e. within a distributed system). This works by assigning an internal correlation ID when an activity is started, and can be achieved like so:
var activity = new Activity("DoThing");
_diagnostics.StartActivity(activity, args);
// Do stuff.
_diagnostics.StopActivity(activity, args);
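Alongside activities, individual events can be emitted from the same source. The snippet below is my own sketch of the general DiagnosticSource usage (not Mark's exact code), reusing the _diagnostics field declared above and checking IsEnabled so the payload is only built when someone is actually listening:

```csharp
// Only build and emit the event payload if a listener has subscribed to this event name.
if (_diagnostics.IsEnabled("OrderProcessed"))
{
    _diagnostics.Write("OrderProcessed", new { OrderId = 42, ElapsedMilliseconds = 17 });
}
```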
For reading DiagnosticSource events, we use a DiagnosticListenerObserver. This uses the IObservable interface and the Reactive framework to allow listeners to be informed when events occur. It's also capable of buffering events, so if the event emitter has been running for some time before the listener gets attached, all prior events are replayed from the beginning. DiagnosticSource is also embedded inside .NET Core. This means that ASP.NET Core and Entity Framework Core already take a dependency on it, so the framework is emitting relevant diagnostic events which you can listen to and observe from your own application code. The implementation within ASP.NET Core will also create an Activity for the entire request/response cycle if it detects that a listener/observer has been attached.
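Subscribing to those framework-emitted events follows the standard IObservable pattern. The sketch below is my own minimal example rather than Mark's listener, and it assumes we only care about events from ASP.NET Core's "Microsoft.AspNetCore" source:

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;

// Observes every DiagnosticListener created in the process and logs the
// events emitted by ASP.NET Core's diagnostic source.
public class AspNetCoreEventObserver :
    IObserver<DiagnosticListener>,
    IObserver<KeyValuePair<string, object>>
{
    public void OnNext(DiagnosticListener listener)
    {
        // Called once per DiagnosticListener in the process; subscribe to the one we want.
        if (listener.Name == "Microsoft.AspNetCore")
            listener.Subscribe(this);
    }

    public void OnNext(KeyValuePair<string, object> evt) =>
        Console.WriteLine($"{evt.Key}: {evt.Value}");

    public void OnCompleted() { }
    public void OnError(Exception error) { }
}

// Somewhere at application startup:
// DiagnosticListener.AllListeners.Subscribe(new AspNetCoreEventObserver());
```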
Next, we move on to look at tracing. Mark tells us that tracing is more like logging than it is like metrics, although it can be similar to the activities of DiagnosticSource, in that traces usually follow and show a single user's journey through the entire stack of the application.
We look at how we can persist and store our metrics, logs and events. Mark uses InfluxDB to store diagnostics events. It's an open source time-series database, meaning that temporal data and queries over that data are first-class citizens. Mark mentions that Prometheus.io is an alternative time-series database that can be used. There are differences between the two, and these revolve mostly around the fact that InfluxDB expects the application to push diagnostic events to it, whereas Prometheus calls back into the application to collect diagnostic events from it. Although it seems quite strange for a database to call into the application to request diagnostic event data, it can actually be beneficial and supports a use case the traditional push model doesn't: Prometheus can detect that your application is unresponsive if it fails to provide events upon request. Mark tells us that he's written his own InfluxDB client and DiagnosticSource listener for .NET, which is available on GitHub. He started writing his own listener because the official InfluxDB client for .NET is not very performant and causes many unnecessary memory allocations every time it's called.
We finish the talk by summarising. Measure everything, use a time-series database for persistence, and use Grafana to create dashboards over the time-series database. And when writing .NET code, consider DiagnosticSource, which is a great abstraction for implementing your own metrics and traces and has the added benefit of providing events from upstream sources (i.e. ASP.NET Core and the .NET Core framework) too.
After Mark's session was over, all of the conference attendees gathered in the main communal area for the end of day wrap up. The sponsors, organisers, volunteers and attendees were all thanked for another excellent DDD Southwest conference. There was the usual prize draw based upon attendee provided session feedback with some nice prizes available. Alas, I didn't win anything myself.
There was the opportunity to gather at the nearby Just Eat offices for pizza and beers after the conference, but I'd had a great day and had a long drive home ahead of me, so I decided to make my way back to the car and head home. Once again, DDD Southwest had been an excellent conference, and here's looking forward to another great one next year.