DDD East Anglia 2017 In Review

This past Saturday, 16th September 2017, the fourth DDD East Anglia event took place in Cambridge.  DDD East Anglia is a relatively new addition to the DDD event line-up, but its fourth event sees it going from strength to strength.

I’d made the long journey to Cambridge on the Friday evening and stayed in a local B&B to be able to get to the Hills Road College where DDD East Anglia was being held on the Saturday.  I arrived bright and early, registered at the main check-in desk and then proceeded to the college’s recital room just around the corner from the main building for breakfast refreshments.

After some coffee, it was soon time to head back to the main college building and up the stairs to the room where the first session of the day would commence.  My first session was to be Joseph Woodward’s Building A Better Web API With CQRS.

Joseph starts his session by defining CQRS.  It’s an acronym, standing for Command Query Responsibility Segregation.  Fundamentally, it’s a pattern for splitting the “read” models from the “write” models within a piece of software.  Joseph points out that we should beware when googling for CQRS, as Google seems to think it’s a term relating to cars!

CQRS was first coined by Greg Young and it’s very closely related to a prior pattern called CQS (Command Query Separation), originally coined by Bertrand Meyer, which states that every method should either be a command which performs an action, or a query which returns data to the caller, but never both.  CQS primarily deals with such separations at a very micro level, whilst CQRS deals with the separations at a broader level, usually along the seams of bounded contexts.  Commands will mutate state and will often be of a “fire and forget” nature; they will usually return void from the method call.  Queries will return state and, since they don’t mutate any state, are idempotent and safe.  We learn that CQRS is not an architectural pattern, but is more of a programming style that simply adheres to the separation of the commands and queries.
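To make the CQS half of that concrete, here's a minimal, hypothetical example (mine, not from the talk) showing a method that violates CQS alongside the command/query pair it would be split into:

public class BankAccount
{
     private decimal _balance;

     // Violates CQS: this mutates state AND returns data to the caller.
     public decimal WithdrawAndGetBalance(decimal amount)
     {
          _balance -= amount;
          return _balance;
     }

     // CQS-compliant command: mutates state and returns void.
     public void Withdraw(decimal amount)
     {
          _balance -= amount;
     }

     // CQS-compliant query: returns state and never mutates it.
     public decimal GetBalance()
     {
          return _balance;
     }
}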

Joseph continues by asking what’s the problem with some of our existing code that CQRS attempts to address.   We look at a typical IXService (where X is some domain entity in a typical business application):

public interface ICustomerService
{
     void MakeCustomerPreferred(int customerId);
     Customer GetCustomer(int customerId);
     CustomerSet GetCustomersWithName(string name);
     CustomerSet GetPreferredCustomers();
     void ChangeCustomerLocale(int customerId, Locale newLocale);
     void CreateCustomer(Customer customer);
     void EditCustomerDetails(CustomerDetails customerDetails);
}

The problem here is that the interface ends up growing and growing and our service methods are simply an arbitrary collection of queries, commands, actions and other functions that happen to relate to a Customer object.  At this point, Joseph shares a rather insightful quote from a developer called Rob Pike who stated:

“The bigger the interface, the weaker the abstraction”

And so with this in mind, it makes sense to split our interface into something a bit smaller.  Using CQRS, we can split out and group all of our "read" methods, which are our CQRS queries, and split out and group our "write" methods (i.e. Create/Update etc.) which are our CQRS commands.  This will simply become two interfaces in the place of one, an ICustomerReadService and an ICustomerWriteService.
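As a simple illustration of that split, reusing the methods from the ICustomerService example above:

public interface ICustomerReadService
{
     Customer GetCustomer(int customerId);
     CustomerSet GetCustomersWithName(string name);
     CustomerSet GetPreferredCustomers();
}

public interface ICustomerWriteService
{
     void MakeCustomerPreferred(int customerId);
     void ChangeCustomerLocale(int customerId, Locale newLocale);
     void CreateCustomer(Customer customer);
     void EditCustomerDetails(CustomerDetails customerDetails);
}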

There's good reasons for separating our concerns along the lines of reads vs writes, too.  Quite often, since reads are idempotent, we'll utilise heavy caching to prevent us from making excessive calls to the database and ensure our application can return data in as efficient a manner as possible, whilst our write methods will always hit the database directly.  This leads on to the ability to have entirely different back-end architectures between our reads and our writes throughout the entire application.  For example, we can scale multiple read replica databases independently of the database that is the target for writes.  They could even be entirely different database platforms.

From the perspective of Web API, Joseph tells us how HTTP verbs and CQRS play very nicely together.  The HTTP verb GET maps to our read methods, whilst the verbs PUT, POST, DELETE etc. map to our write concerns.  Further to this, Joseph looks at how we often end up with MVC or Web API controllers that require numerous services to be injected into them, and how our controller methods become bloated with additional concerns, such as validation, embedded within them.  We then look at the command dispatcher pattern as a way of supporting our separation of reads and writes and also as a way of keeping our controller action methods lightweight.

There are two popular frameworks that implement the command dispatcher pattern in the .NET world: MediatR and Brighter.  Both frameworks allow us to define our commands using a plain old C# object (that implements specific interfaces provided by the framework) and also to define a "handler" to which the commands are dispatched for processing.  For example:

public class CreateUserCommand : IRequest
{
     public string EmailAddress { get; set; }
     // Other properties...
}

public class CreateUserCommandHandler : IAsyncRequestHandler<CreateUserCommand>
{
     private readonly IUserRepository _userRepository;
     private readonly IMapper _mapper;

     public CreateUserCommandHandler(IUserRepository userRepository, IMapper mapper)
     {
          _userRepository = userRepository;
          _mapper = mapper;
     }

     public async Task Handle(CreateUserCommand command)
     {
          var user = _mapper.Map<CreateUserCommand, UserEntity>(command);
          await _userRepository.CreateUser(user);
     }
}

Using the above style of defining commands and handlers along with some rudimentary configuration of the framework to allow specific commands and handlers to be connected, we can move almost all of the required logic for reading and writing out of our controllers and into independent, self-contained classes that perform a single specific action.  This enables further decoupling of the domain and business logic from the controller methods, ensuring the controller action methods remain incredibly lightweight:

public class UserController : Controller
{
     private readonly IMediator _mediator;

     public UserController(IMediator mediator)
     {
          _mediator = mediator;
     }

     [HttpPost]
     public async Task Create(CreateUserCommand user)
     {
          await _mediator.Send(user);
     }
}

Above, we can see that the Create action method has been reduced down to a single line.  All of the logic of creating the entity is contained inside the handler class and all of the required input for creating the entity is contained inside the command class.

Both the MediatR and Brighter libraries allow for request pre-processors and post-processors.  This allows defining another class, again deriving from specific interfaces/base classes within the framework, which will be invoked before the actual handler class or immediately afterwards.  Such pre-processing is often a perfect place to put cross-cutting concerns such as validation:

public class CreateUserCommandValidation : AbstractValidator<CreateUserCommand>
{
     public CreateUserCommandValidation()
     {
          RuleFor(x => x.EmailAddress).NotEmpty().WithMessage("Please specify an email address");
     }
}

The above code shows some very simple example validation, using the FluentValidation library, that can be hooked into the command dispatcher framework's request pre-processing to validate the command object before the handler is invoked and the entity is saved to the database.

Again, we've got a very nice and clean separation of concerns with this approach, with each specific part of the process encapsulated within its own class: the input parameters, the validation and the actual creation logic.

Both MediatR and Brighter also support pipeline behaviours (exposed in MediatR via its IPipelineBehavior interface), which allow us to write code that hooks into arbitrary places along the processing pipeline.  This lets us implement other cross-cutting concerns, such as logging, that are often required at multiple stages of the entire processing pipeline.
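As a rough sketch of what such a behaviour might look like in MediatR (the exact Handle signature differs between MediatR versions, and the console logging here is deliberately simplistic):

public class LoggingBehaviour<TRequest, TResponse> : IPipelineBehavior<TRequest, TResponse>
{
     // NB: later MediatR versions also pass a CancellationToken into Handle.
     public async Task<TResponse> Handle(TRequest request, RequestHandlerDelegate<TResponse> next)
     {
          Console.WriteLine($"Handling {typeof(TRequest).Name}");
          var response = await next();
          Console.WriteLine($"Handled {typeof(TRequest).Name}");
          return response;
     }
}

Once registered with the framework's configuration, a behaviour like this wraps every request that passes through the mediator, which is what makes it a natural home for cross-cutting concerns.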

At this point, Joseph shares another quote with us.  This one's from Uncle Bob:

"If your architecture is based on frameworks then it cannot be based on your use cases"

From here, Joseph turns his talk to discuss how we might structure our codebases in terms of files and folders such that separation of concerns within the business domain that the software is addressing are more clearly realised.  He talks about a relatively new style of laying out our projects called Feature Folders (aka Feature Slices).

This involves laying out our solutions so that instead of having a single top-level "Controllers" folder, as is common in almost all ASP.NET MVC web applications, we instead have multiple folders named such that they represent features or specific areas of functionality within our software.  We then have the requisite Controllers, Views and other folders underneath those.   This allows different areas of the software to be conceptually decoupled and kept separate from the other areas.  Whilst this is possible in ASP.NET MVC today, it's even easier with the newer ASP.NET Core framework, and a NuGet package called AddFeatureFolders already exists that enables this exact setup within ASP.NET Core.
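As a rough illustration of the idea (the folder and file names here are invented for the example), a feature-foldered project might be laid out something like this:

/Features
     /Customers
          CustomersController.cs
          CreateCustomerCommand.cs
          CreateCustomerCommandHandler.cs
          Index.cshtml
     /Orders
          OrdersController.cs
          GetOrderQuery.cs
          GetOrderQueryHandler.cs
          Index.cshtml
/Infrastructure
Startup.cs

Everything relating to a single feature - its controller, its commands and queries, its handlers and its views - lives together, rather than being spread across technology-based folders.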

Joseph wraps up his talk by suggesting that we take a look at some of his own code on GitHub for the DDD South West website (Joseph is one of the organisers for the DDD South West events) as this has been written using the CQRS pattern along with using feature folders for layout.

After Joseph's talk it's time for a quick coffee break, so we head back to the recital room around the corner from the main building for some liquid refreshment.  This time also accompanied by some very welcome biscuits!

After our coffee break, it's time to head back to the main building for the next session.  This one was to be Bart Read's Client-Side Performance For Back-End Developers.

Bart's session is all about how to maximise performance of client-side script using tools and techniques that we might employ when attempting to troubleshoot and improve the performance of our back-end, server-side code.  Bart starts by stating that he's not a client-side developer, but is more of a full stack developer.  That said, as a full stack developer, one is expected to perform at least some client-side development work from time to time.  Bart continues that in other talks concerning client-side performance, the speakers tend to focus on the page load times and page lifecycle, which, whilst interesting and important, is a more technology-centric way of looking at the problem.  Instead, Bart says that he wants to focus on RAIL, which was originally coined by Google.  This is an acronym for Response, Animation, Idle and Load and is a far more user-centric way of looking at the performance (or perhaps even just perceived performance) issue.  In order to explore this topic, Bart states that he learnt JavaScript and built his own arcade game site, Arcade.ly, which uses extensive JavaScript and other resources as part of the site.

We first look at Response.  For this we need to build a very snappy User Interface so that the user feels that the application is responding to them immediately.  Modern web applications are far more like desktop applications written using either WinForms or WPF than ever and users are very used to these desktop applications being incredibly responsive, even if lots of processing is happening in the background.  One way to get around this is to use "fake" pages.  These are pages that load very fast, usually without lots of dynamic data on them, that are shown to the user whilst asynchronous JavaScript loads the "real" page in the background.  Once fully loaded, the real page can be gracefully shown to the user.

Next, we look at Animation.  Bart reminds us that animations help to improve the user's perception of the responsiveness of your user interface.  Even if your interface is performing some processing that takes a few milliseconds to run, loading and displaying an animation that the user can view whilst that processing is going on will greatly enhance the perceived performance of the complete application.  We need to ensure that our animations always run at 60 fps (frames per second); anything less than this will cause them to look jerky and is not a good user experience.  Quite often, we need to perform some computation prior to invoking our animation and in this scenario we should ensure that the computation is ideally performed in less than 10 milliseconds.

Bart shares a helpful online utility called CanvasMark which provides benchmarking for HTML5 Canvas rendering.  This can be really useful in order to test the animations and graphics on your site and how they perform on different platforms and browsers.

Bart then talks about using the Google Chrome Task Manager to monitor the memory usage of your page.  A lot of memory can be consumed by your page's JavaScript and other large resources.  Bart talks about his own arcade.ly site which uses 676MB of memory.  This might be acceptable on a modern day desktop machine, but it will not work so well on a more constrained mobile device.  He states that after some analysis of the memory consumption, most of the memory was consumed by the raw audio that was decompressed from loaded compressed audio in order to provide sound effects for the game.  By gracefully degrading the quality and size of the audio used by the site based upon the platform or device that is rendering the site, performance was vastly improved.

Another common pitfall is in how we write our JavaScript functions.  If we're going to be creating many instances of a JavaScript object, as can happen in a game with many individual items on the screen, we shouldn't attach functions directly to the JavaScript object as this creates many copies of the same function.  Instead, we should attach the function to the object prototype, creating a single copy of the function, which is then shared by all instances of the object and thus saving a lot of memory.  Bart also warns us to be careful of closures on our JavaScript objects as we may end up capturing far more than we actually need.

We now move on to Idle.  This is all about deferring work, as the main concern for our UI is to respond to the user immediately.  One approach to this is to use Web Workers to perform work at a later time.  In Bart's case, he says that he wrote his own Task Executor which creates an array of tasks and uses the built-in JavaScript setTimeout function to slowly execute each of the queued tasks.  By staggering the execution of the queued tasks, we prevent the potential for the browser to "hang" with a white screen whilst background processing is being performed, as can often happen if excessive tasks and processing are performed all at once.

Finally, we look at Load.  A key takeaway of the talk is to always use HTTP/2 if possible.  Just by switching this on alone, Bart says you'll see a 20-30% improvement in performance for free.  In order to achieve this, HTTP/2 provides us with request multiplexing, which bundles requests together meaning that the browser can send multiple requests to the server in one go.  These requests won't necessarily respond any quicker, but we do save on the latency overhead we would incur if sending each request separately.  HTTP/2 also provides server push functionality, stream priority and header compression.  It also has protocol encryption, which, whilst not an official part of the HTTP/2 specification, is currently mandated by all browsers that support the HTTP/2 protocol, effectively making encryption compulsory.  HTTP/2 is widely supported across all modern browsers on virtually all platforms, with only Opera Mini lacking full support, and HTTP/2 is also fully supported within most of today's programming frameworks.  For example, the .NET Framework has supported HTTP/2 since version 4.6.0.  One other significant change when using HTTP/2 is that we no longer need to "bundle" our CSS and JavaScript resources.  This also applies to "spriting" of icons as a single large image.

Bart moves on to talk about loading our CSS resources and suggests that one very effective approach is to inline the bare minimum CSS we would require to display and render our "above the fold" content, with the rest of the CSS being loaded asynchronously.  The same applies to our JavaScript files, although there's an important caveat here.  Bart explains how he loads some of his own JavaScript synchronously, which itself negatively impacts performance, but this is required to ensure that the asynchronously loaded 3rd-party JavaScript - over which you have no control - doesn't interfere with his own JavaScript: the 3rd-party JavaScript is loaded at the very last moment whilst Bart's own JavaScript is loaded right up front.  We should also look into using DNS Prefetch to force the browser to perform DNS lookups ahead of time for all of the domains that our site might reference for 3rd party resources.  This incurs a small one-off performance impact as the page first loads, but makes subsequent requests for 3rd party content much quicker.

Bart warns us not to get too carried away putting things in the HEAD section of our pages and instead we should focus on getting the "above the fold" content to be as small as possible; ideally it should all be under 15kb, which is roughly the amount of data that can fit in the initial round-trip of a new connection.  Again, this is a performance optimization that may not have a noticeable impact on desktop browsers, but can make a huge difference on mobile devices, especially if they're using a slow connection.  We should always check the payload size of our sites and ensure that we're being as efficient as possible and not sending more data than is required.  Further to this, we should use content compression if our web server supports it.  IIS has supported content compression for a long time now, however, we should be aware of a bug that affects IIS version 8 (and possibly 8.5) which turns off compression for chunked content.  This bug was fixed in IIS version 10.

If we're using libraries or frameworks in our page, ensure we only deliver the required parts.  Many of today's libraries are componentized, allowing the developer to only include the parts of the library/framework that they actually need and use.  Use Content Delivery Networks if you're serving your site to many different geographic areas, but also be aware that, if your audience is almost exclusively located in a single geographic region, using a CDN could actually slow things down.  In this case, it's better to simply serve up your site directly from a server located within that region.

Finally, Bart reiterates: it's all about latency.  It's latency that slows you down significantly, and any performance optimizations that can be done to remove or reduce latency will improve the performance, or perceived performance, of your websites.

After Bart's talk, it's time for another coffee break.  We head back to the recital room for further coffee and biscuits and after a short while, it's time for the 3rd session of the day and the last one prior to lunch.  This session is to be a Visual Note Taking Workshop delivered by Ian Johnson.

As Ian's session was an actual workshop, I didn't take too many notes but instead attempted to take my notes visually using the technique of Sketch-Noting that Ian was describing.

Ian first states that Sketch-Noting is still mostly about writing words.  He says that most of us, as developers using keyboards all day long, have pretty terrible handwriting, so we simply need to practice more at it.  Ian suggests avoiding all-caps words and cursive writing, using a simple font and camel-cased lettering (although all caps is fine for titles and headings).  Start bigger to get the practice of forming each letter correctly, then write smaller and smaller as you get better at it.  You'll need this valuable skill since Sketch-Noting requires you to be able to write both very quickly and legibly.

At this point, I put my laptop away and stopped taking written notes in my text editor and tried to actually sketch-note the rest of Ian's talk, which gave us many more pointers and advice on how to construct our Sketch Notes.  I don't consider myself artistic in the slightest, but Ian insists that Sketch Notes don't really rely on artistic skill, but more on the skill of being able to capture the relevant information from a fast-moving talk.  I didn't have proper pens for my Sketch Note and had to rely solely on my biro, but here in all its glory is my very first attempt at a Sketch Note:

(Photo: my first attempt at a sketch note)

After Ian's talk was over, it was time for lunch.  All the attendees reconvened in the recital room where we could help ourselves to the lunch kindly provided by the conference organizers and paid for by the sponsors.  Lunch was the usual brown bag affair consisting of a sandwich, some crisps, a chocolate bar, a piece of fruit and a can of drink.  I took the various items for my lunch and the bag and proceeded to wander just outside the recital room to a small green area with some tables.  It was at this point that the weather decided to turn against us and it started raining very heavily.  I made a hasty retreat back inside the recital room where it was warm and dry and proceeded to eat my lunch there.

There were some grok talks taking place over lunch time, but considering the weather and the fact that the grok talks were taking place in the theatre room, which was the furthest point from the recital room, I decided against attending them and chose to remain warm and dry instead.

After lunch, it was time to head back to the main building for the next session, this one was to be Nathan Gloyn's Microservices - What I've Learned After A Year Building Systems.

Nathan first introduces himself and states that he's a contract developer.  As such, he's been involved in two different projects over the previous 12 months that have been developed using a microservices architecture.  We're first asked to consider the question of why we should use microservices.  In Nathan's experience so far, he says, Don't!  In qualifying that statement, Nathan states that microservices are ideal if you need only part of a system to scale, however, for the majority of applications, the benefits of adopting a microservices architecture don't outweigh the additional complexity that is introduced.

Nathan states that building a system composed of microservices requires a different way of thinking.  With more monolithic applications, we usually scale them by scaling out - i.e. we use the same monolithic codebase for the website and simply deploy it to more machines which all target the same back-end database.  Microservices don't really work like this; they need to be individually small and simple, and they may even have their own individual database just for the individual service.

Microservices are often closely related to Domain-driven Design's Bounded Contexts so it's important to correctly identify the business domain's bounded contexts and model the microservices after those.  Failure to do this runs the risk that you'll create a suite of mini-monoliths rather than true microservices.

Nathan reminds us that we are definitely going to need a messaging system for an application built with a microservices architecture.  It's simply not an option not to use one, as virtually all user interactions will be performed across multiple services.  Microservices are, by their very nature, entirely distributed.  Even simple business processes can often require multiple services and co-ordination of those services.  Nathan says that it's important not to build any messaging into the UI layer as you'll end up coupling the UI to the domain logic and the microservice, which is something to be avoided.  One option for a messaging system is NServiceBus, which is what Nathan is currently using, however, many other options exist.  When designing the messaging within the system, it's very important to give consideration to versioning of messages and message contracts.  Building in versioning from the beginning ensures that you can deploy individual microservices independently, rather than being forced to deploy large parts of the system - perhaps multiple microservices - together because they're all required to use the exact same message version.
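One common, framework-agnostic way of handling that versioning (a sketch of the general idea rather than anything specific to NServiceBus or Nathan's systems) is to treat each message contract as immutable once published and introduce changes as new versions:

// Version 1 of the contract stays untouched once other services depend on it.
public class OrderPlaced
{
     public Guid OrderId { get; set; }
     public decimal Total { get; set; }
}

// A later change becomes a new contract rather than a breaking edit,
// so each microservice can be upgraded and deployed independently.
public class OrderPlacedV2
{
     public Guid OrderId { get; set; }
     public decimal Total { get; set; }
     public string CurrencyCode { get; set; }
}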

We next look at the difference between "fat" versus "thin" services.  Thin services generally only deal with data that they "own", if the thin service needs other data for processing, they must request it from the other service that owns that data.  Fat services, however, will hold on to data (albeit temporarily) that actually "belongs" to other services in order to perform their own processing.  This results in coupling between the services, however, the coupling of fat and thin services is different as fat services are coupled by data whereas thin services are coupled by service.

With microservices, cross-cutting concerns such as security and logging become even more important than ever.  We should always ensure that security is built-in to every service from the very beginning and is treated as a first class citizen of the service.  Enforcing the use of HTTPS across the board (i.e. even when running in development or test environments as well as production) helps to enforce this as a default choice.

We then look at how our system's source code can be structured for a microservices based system.  It's possible to use either one source control repository or multiple and there's trade-offs against both options.  If we use a single repository, that's really beneficial during the development phase of the project, but is not so great when it comes to deployment.  On the other hand, using multiple repositories, usually separated by microservice, is great for deployment since each service can be easily integrated and deployed individually, but it's more cumbersome during the development phase of the project.

It's important to remember that each microservice can be written using its own technology stack and that each service could use an entirely different stack to the others.  This can be beneficial if you have different teams with different skill sets building the different services, but it's important to remember that you'll need to constantly monitor each of the technology stacks that you use for security vulnerabilities and other issues that may arise or be discovered over time.  Obviously, the more technology stacks you're using, the more time-consuming this will be.

It's also important to remember that even when you're building a microservices based system, you will still require shared functionality that will be used by multiple services.  This can be built into each service or can be separated out to become a microservice in its own right, depending upon the nature of the shared functionality.

Nathan talks about user interfaces to microservice based systems.  These are often written using a SPA framework such as Angular or React.  They'll often go into their own repository for independent deployment, however, you should be very careful that the front-end user interface part of the system doesn't become a monolith in itself.  If the back-end is nicely separated into microservices based on the domain's bounded contexts, the front-end should be broken down similarly too.

Next we look at testing of a microservice based system.  This can often be a double-edged sword as it's fairly easy to test a single microservice with its known good (or bad) inputs and outputs, however, much of the real-world usage of the system will be interactions that span multiple services so it's important to ensure that you're also testing the user's path through multiple services.  This can be quite tricky and there's no easy way to achieve this.  It's often done using automated integration testing via the user interface, although you should also ensure you do test the underlying API separately to ensure that security can't be bypassed.

Configuration of the whole system can often be problematic with a microservice based system.  For this reason, it's usually best to use a separate configuration management system rather than trying to implement things like web.config transforms for each service.  Tools like Consul or Spring Cloud Config are very useful here.

Data management is also of critical importance.  It should be possible to change data within the system's data store without requiring a deployment.  Database migrations are a key tool in helping with this.  Nathan mentions both Entity Framework Migrations and also FluentMigrator as two good choices.  He offers a suggestion for things like column renames and suggests that instead of a migration that renames the column, create a whole new column instead.  That way, if the change has to be rolled back, you can simply remove the new column, leaving the old column (with the old name) in place.  This allows other services that may not be being deployed to continue to use the old column until they're also updated.
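Using FluentMigrator as an example, the add-a-new-column-instead-of-renaming approach might look something like this (a sketch; the migration number, table and column names are invented):

[Migration(201709161200)]
public class AddCustomerDisplayName : Migration
{
     public override void Up()
     {
          // Rather than renaming Customers.Name, add a new column and backfill it,
          // leaving the old column in place for services that haven't been redeployed yet.
          Alter.Table("Customers")
               .AddColumn("DisplayName").AsString(200).Nullable();

          Execute.Sql("UPDATE Customers SET DisplayName = Name");
     }

     public override void Down()
     {
          // Rolling back only removes the new column; the old one was never touched.
          Delete.Column("DisplayName").FromTable("Customers");
     }
}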

Nathan then touches on multi-tenancy within microservice based systems and says that if you use the model of a separate database per tenant, this can lead to a huge explosion of databases if your microservices are using multiple databases for themselves.  It's usually much more manageable to have multi-tenancy by partitioning tenant data within a single database (or the database for each microservice).

Next, we look at logging and monitoring within our system.  Given the distributed nature of a microservice based system, it's important to be able to log and understand the complete user interaction even though logging is done individually by the individual microservices.  To facilitate understanding the entire end-to-end interaction, we can use a CorrelationID.  This is simply a unique identifier that travels through all services, passed along in each message and written to the logs of each microservice.  When we look back at the complete set of logs, combined from the disparate services, we can use the CorrelationID to correlate the log messages into a cohesive whole.  With regard to monitoring, it's also critically important to monitor the entire system and not just the individual services.  It's far more important to know how healthy the entire system is rather than each service, although monitoring services individually is still valuable.
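In its simplest form that might look something like the following hand-rolled sketch (the envelope, message types and bus call are all invented for illustration; real message bus libraries typically provide headers or behaviours that do this for you):

public class MessageEnvelope<T>
{
     public Guid CorrelationId { get; set; }
     public T Body { get; set; }
}

public class PlaceOrder { public int OrderId { get; set; } }
public class TakePayment { public int OrderId { get; set; } }

public class OrderService
{
     public void Handle(MessageEnvelope<PlaceOrder> message)
     {
          // Every log line carries the correlation id so that logs from
          // separate services can later be stitched back together.
          Console.WriteLine($"[{message.CorrelationId}] Placing order {message.Body.OrderId}");

          // Any follow-on message carries the same correlation id.
          var next = new MessageEnvelope<TakePayment>
          {
               CorrelationId = message.CorrelationId,
               Body = new TakePayment { OrderId = message.Body.OrderId }
          };
          // bus.Send(next);  // hypothetical call to whatever messaging system is in use
     }
}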

Finally, Nathan shares some details regarding custom tools.  He says that, as a developer on a microservice based system, you will end up building many custom tools.  These will be such tools as bulk data loading whereby getting the data into the database requires processing by a number of different services and cannot simply be directly loaded into the database.  He says that despite the potential downsides of working on such systems, building the custom tools can often be some of the more enjoyable parts of building the complete system.

After Nathan's talk it was time for the last coffee break of the day, after which it was time for the final day's session.  For me, this was Monitoring-First Development by Benji Weber.

Benji starts his talk by introducing himself and says that he's mostly a Java developer but that he's done some .NET and also writes JavaScript as well.  He works at an "ad-tech" company in London.  He wants to first start by talking about Extreme Programming as this is a style of programming that he uses in his current company.  We look at the various practices within Extreme Programming (aka XP) as many of these practices have been adopted within wider development styles, even for teams that don't consider themselves as using XP.  Benji says that, ultimately, all of the XP practices boil down to one key thing - feedback.  They're all about getting better feedback, quicker.  Benji's company uses full XP with a very collaborative environment, collectively owning all the code and the entire end-to-end process from design/coding through to production, releasing small changes very frequently.

This style, adopted across the teams, has led to the adoption of something they term Monitoring-Driven Development.  This is simply the idea that monitoring of all parts of a software system is the core way to get feedback on that system, both when the system is being developed and as the system is running in production.  Therefore, all new feature development starts by asking the question, "How will we monitor this system?" and then ensuring that the ability to deeply monitor all aspects of the system being developed is a front and centre concern throughout the development cycle.

To illustrate how the company came to adopt this methodology, Benji shares three short stories with us.  The first started with an email to the development team with the subject of "URGENT".  It was from some sales people trying to demonstrate some part of the software and they were complaining that the graphs on the analytics dashboard weren't loading for them.  Benji states that this was a feature that was heavily tested in development, so the error seemed strange.  After some analysis of the problem, it was discovered that data was the root cause of the issue: the development team had underestimated the way in which the underlying data would grow due to users doing unanticipated things on the system, and the existing monitoring that the team had in place didn't highlight this.  The second story involves the discovery that 90% of the traffic their software was serving from the CDN was HTTP 500 server errors!  Again, after analysis, it was discovered that the problem lay in some JavaScript code that a recently released new version of Internet Explorer was interpreting differently from the old version, causing the client-side JavaScript to continually make requests to a non-existent URL.  The third story involves a report from an irate client that the adverts being served up by the company's advert system were breaking the client's own website.  Analysis showed that this was again caused by a JavaScript issue: a line of code, self = this;, that inadvertently created a globally-scoped variable, thereby overwriting variables that the client's own website relied upon.  The common theme throughout all of the stories was that the behaviour of the system had changed, even though no code had changed.  Moreover, all of the problems that arose from the changed behaviour were first discovered by the system's users and not by the development team.

Benji references Google's own Site Reliability Engineering book (available to read online for free) which states that 70% of the reasons behind things breaking is because you've changed something.  But this leaves a large 30% of the time where the reasons are something that's outside of your control.  So how did Benji approach improving his ability to detect and respond to issues?  He started by looking at causes vs problems and concluded that they didn't have enough coverage of the problems that occurred.  Benji tells us that it's almost impossible to get sufficient coverage of the causes since there's an almost infinite number of things that could happen that could cause the problem.

To get better coverage of the problems, they universally adopted the "5 whys" approach to determining the root issues.  This involves starting with the problem and repeatedly asking "why?" of each answer in order to drill down to the root cause.  For example: monitoring is hard.  Why is it hard, when we don't have the same issue using Test-Driven Development during coding?  Because TDD follows a Red - Green - Refactor cycle, so you can't really write untestable code, and so on.

So Benji decided to try to apply the Test-Driven Development principles to monitoring.  Before even writing the feature, they start by determining how the feature will be monitored, then only after determining this, they start work on writing the feature ensuring that the monitoring is not negatively impacted.  In this way, the monitoring of the feature becomes the failing unit test that the actual feature implementation must make "go green".

Benji shares an example of how this is implemented and says that the "failing test" starts with a rule defined within their chosen monitoring tool, Nagios.  This rule could be something like "ensure that more adverts are loaded than reported page views", whereby the user interface is checked for specific elements or a specific response rendering.  This test will show as a failure within the monitoring system as the feature has not yet been implemented, however, upon implementation of the feature, the monitoring test will eventually pass (go green) and there we have a correct implementation of the feature driven by the monitoring system to guide it.  Of course, this monitoring system remains in place, with these tests increasing over time and becoming an early warning system should any part of the software, within any environment, start to show any failures.  This ensures that the development team are the first to know of any potential issues, rather than the users being first to know.
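As a rough illustration of the idea (this is my own hypothetical C# sketch rather than Benji's Nagios configuration; the IMetricsStore abstraction and the metric names are invented), the monitoring rule behaves just like a failing test until the feature is implemented:

public interface IMetricsStore
{
     long Count(string metricName, TimeSpan window);
}

public class AdvertMonitoringCheck
{
     private readonly IMetricsStore _metrics;

     public AdvertMonitoringCheck(IMetricsStore metrics)
     {
          _metrics = metrics;
     }

     public bool MoreAdvertsLoadedThanPageViews()
     {
          var pageViews = _metrics.Count("page.views", TimeSpan.FromMinutes(5));
          var advertLoads = _metrics.Count("adverts.loaded", TimeSpan.FromMinutes(5));

          // Until the new feature actually emits the "adverts.loaded" metric,
          // this check fails - the monitoring equivalent of a red unit test.
          return advertLoads >= pageViews;
     }
}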

Benji says they use a pattern called the Screenplay pattern for their UI based tests.  It's an evolution of the Page Objects pattern and allows highly decoupled tests as well as bringing the SOLID principles to the tests themselves.  He also states that they make use of Feature Toggles not only when implementing new features and functionality but also when refactoring existing parts of the system.  This allows them to test new monitoring scenarios without affecting older implementations.  Benji states that it's incredibly important to follow a true Red - Green - Refactor cycle when implementing monitoring rules and that you should always see your monitoring tests failing first before trying to make them pass/go green.

Finally, Benji says that adopting a monitoring-driven development approach ultimately helps humans too.  It helps in future planning and development efforts as it builds awareness of how and what to think about when designing new systems and/or functionality.

After Benji's session was over, it was time for all the attendees to gather back in the theatre room for the final wrap-up by the organisers and the prize draw.  After thanking the various people involved in making the conference what it is (sponsors, volunteers, organisers etc.) it was time for the prize draw.  There were some good prizes up for grabs, but alas, I wasn’t to be a winner on this occasion.  The DDD East Anglia 2017 event had been a huge success and it was all the more impressive given that the organisers shared the story that their original venue had pulled out only 5 weeks prior to the event!  The new venue had stepped in at the last minute and held an excellent conference which was completely seamless to the attendees.  We would never have known of the last minute panic had it not been shared with us.  Here's looking forward to the next DDD East Anglia event next year.

DDD East Anglia 2014 Review

Well, it’s that time of year again when a few DDD events come around.  This past Saturday saw the 2nd ever DDD East Anglia, bigger and better than last year’s inaugural event.

I’d set off on the previous night and stayed over on the Friday night in Kettering.  I availed myself of Kettering town centre’s seemingly only remaining open pub, The Old Market Inn (the Cherry Tree two doors down was closed for refurbishment) and enjoyed a few pints before heading back to my B&B.  The following morning, after a hearty breakfast, I set off on the approximately 1 hour journey into Cambridge and to the West Road Concert Hall, the venue for this year’s DDD East Anglia.

After arriving at the venue and registering, I quickly grabbed a cup of water before heading off across the campus to the lecture rooms and the first session of the day.

The first session is David Simner’s “OWIN, Katana and ASP.NET vNext – Eliminating the pain of IIS”.  David starts by summing up the existing problems with Microsoft’s IIS Server such as its cryptic error messages when simply trying to create or add a new website through to differing versions with differing support for features on differing OS versions.  e.g. Only IIS 8+ supports WebSockets, and IIS8 requires Windows 8 - it can’t be installed on lower versions of Windows.

David continues by calling out “http.sys” - the core of servicing web requests on Windows.  It’s a kernel-space driver that handles the request, looks at the host headers, url etc. and then finds the user space process that will then service the request.  It’s also responsible for dealing with the cryptography layer for SSL packets.  Although http.sys is the “core” of IIS, Microsoft has opened up http.sys to allow other people to use it directly without going through IIS.

David mentions how some existing technologies already support “self-hosting”, meaning they can service HTTP requests without requiring IIS.  These technologies include WebAPI, SignalR etc., however, the problem with this self-hosting is that these technologies can’t interoperate this way.  For example, SignalR doesn’t work within WebAPI’s self-hosting.

David continues by introducing OWIN and Katana.  OWIN is the Open Web Interface for .NET and Katana is a Microsoft implementation of OWIN.  Since OWIN is open and anyone can write their own implementation of it, this opens up the entire “web processing” service on Windows and allows us both to remove the dependence on IIS and to have many differing technologies easily interoperate within the OWIN framework.  New versions of IIS will effectively be OWIN “hosts”, as will Katana.  Many other implementations written by independent parties could potentially exist, too.

David asks why we should care about all of this, and states that OWIN just “gets out of your way” - the framework doesn’t hinder you when you’re trying to do things.  He says it simply “does what you want” and that it does this due to its rich ecosystem and community providing many custom developments for hosts, middleware, servers and adapters (middleware is the layer that provides a web development framework, e.g. ASP.NET MVC or NancyFX, whilst an adapter is something like System.Web, which serves to pass the raw request data coming through http.sys up to the middleware layer).

The 2nd half of David’s talk is a demo of writing a simple web application (using VS 2013) that runs on top of OWIN/Katana.  David creates a standard “Web Application” in VS2013, but immediately pulls in the NuGet package OwinHost (this is actually Katana!).  To use Katana, we need a class with the “magic” name of “Startup”, which Katana looks for at startup and runs.  The Startup class has a single void method called Configuration that takes an IAppBuilder argument; this method runs once per application run and exists to configure the OWIN middleware layer.  This can include such calls as:

app.UseWelcomePage("/");
app.UseWebApi(new HttpConfiguration());   // configure routes etc. on the HttpConfiguration
app.Use<MyCustomMiddleware>();            // a custom class that inherits from OwinMiddleware

David starts with writing a test that checks for access to a non-existent page and ensure it returns a 404 error.  In order to perform this test, we can use a WebApp.Start method (which is part of the Microsoft.Owin.Hosting – This is the Katana implementation of an OWIN Host) and allows the test method to effectively start the web processing “process” in code.  The test can then perform things like:

var httpClient = new HttpClient();
var result = await httpClient.GetAsync("http://localhost:5555");
Assert.Equal(HttpStatusCode.NotFound, result.StatusCode);

Using OWIN in this way, though, can lead to flaky tests due to how TCP ports work within Windows and the fact that, even when the code has finished executing, it can be a while before Windows will “tear down” the TCP port, allowing other code to re-use it.  To get around this, we can use another NuGet package, Microsoft.Owin.Testing, which allows us to effectively bypass sending the HTTP request to an actual TCP port and process it directly in memory.  This means our tests don’t even need to use an actual URL!
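With that package in place, the earlier test can be rewritten to run entirely in memory, something like this (a sketch assuming the xUnit test framework and the same Startup class Katana discovers at runtime):

[Fact]
public async Task Returns404ForUnknownPage()
{
     using (var server = TestServer.Create<Startup>())
     {
          // No TCP port is involved - the request is processed entirely in memory.
          var result = await server.HttpClient.GetAsync("/does-not-exist");
          Assert.Equal(HttpStatusCode.NotFound, result.StatusCode);
     }
}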

David shows how easy it is to write your own middleware, which in his demo consists of a custom class (inheriting from OwinMiddleware) containing a single method that invokes the next “task” in the middleware processing chain and then, once that has completed, checks that it didn’t take too long to run.  This is easily done as each piece of middleware processing is an async Task, allowing us to do things like:

Next.Invoke(context).ContinueWith(_ => LogIfWeTookTooLong(context));
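Fleshed out into a complete (if hypothetical) middleware class, that timing check might look something like the following, which would then be registered with app.Use<TimingMiddleware>(); the 500ms threshold and the Trace logging are just illustrative choices:

public class TimingMiddleware : OwinMiddleware
{
     public TimingMiddleware(OwinMiddleware next) : base(next)
     {
     }

     public override async Task Invoke(IOwinContext context)
     {
          var stopwatch = Stopwatch.StartNew();

          // Let the rest of the pipeline process the request...
          await Next.Invoke(context);

          // ...then log if the downstream processing took too long.
          stopwatch.Stop();
          if (stopwatch.ElapsedMilliseconds > 500)
          {
               Trace.WriteLine(string.Format("{0} took {1}ms", context.Request.Path, stopwatch.ElapsedMilliseconds));
          }
     }
}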

Ultimately, the aim with OWIN and Katana is to make EVERYTHING x-copy-able.  Literally no more installing or separately configuring things like IIS.  It can all be done within code to configure your application, which can then simply be x-copy’d from one place to another.

 

The next session up is Pete Smith’s “Beyond Responsive Design – UI for the Modern Web Application”.

Pete starts by reminding us how we first built web applications for the desktop, then the mobile phone market exploded and we had to make our web apps work well on mobile phones, each of which had their own screen sizes/resolutions etc.  Pete talks about how web apps designed for the desktop don’t really work well on constrained mobile phone screens.  We first tried to solve it with responsive design, but that often leads to having to support multiple code bases, one for desktop and one for mobile.  Pete says that there are many problems with web apps.  What do we do with all the screen space on a big desktop screen?  There are no real design guidelines or principles.

Pete starts to look at design paradigms in mobile apps and shows how menus work on Android using the hamburger button that allows a menu to slide out from the side of the screen.  This is doable due to Android devices often having fairly large screens for a mobile device.  However, on iPhones (for example), where the screen is much narrower, menus don’t slide out from the side of the screen but rather slide up from the bottom.  Pete continues through other UI design patterns like dialogs, header bars and property sheets and how they exist for the same reasons, but are implemented entirely differently on desktops and on each different mobile device.  Pete states that some of these design patterns work well, such as hamburger menus and flyout property sheets (notifications), however, some don’t work so well, such as dialogs that purposely don’t fill the entire mobile device screen, but keep a small border around the dialog.  Pete says that screen real estate is at a premium on a mobile device, so why intentionally reserve a section of the screen that’s not used?

The homogenous approach to modern web app development is to use design patterns that work well on both desktop devices and mobile devices.  Pete uses the new Azure portal as an example, with its concept of “blades” of information that fly out and stack horizontally, but scroll vertically independently of each other.  This is a design paradigm that works well on the desktop as well as translating well to mobile device “pages” (think of how Android “pages” have header bars with back and forward buttons).

Pete then shows us a demo of a fairly simple mock-up of the DDD East Anglia website and shows how the exact same design patterns of a hamburger menu (that flies in from the left) and “property sheets” that fly in from the right (used for speaker bios etc.) work exactly the same (with responsive design for the widths etc.) on both a desktop web app and on mobile devices such as an iPad.

Pete shows us the code for his sample application, showing some LESS stylesheets, which he says are invaluable for laying out an application like this as the actual page layout is best achieved by absolutely positioning many of the page elements (the hamburger menu, the header bar, the left-hand menu etc.) using LESS mixins.  The main page uses HTML5 semantic markup and simply includes the headerbar and the menu icons on it, the left-hand menu (that by default is visible on devices with an appropriate width) and an empty <main> section that will contain the individual pages that will be loaded dynamically with JavaScript.

Pete finishes by showing a “full-blown” application that he’s currently writing for his client company to show that this set of design paradigms does indeed scale to a complete large application!  Pete is so passionate about bringing a comprehensive set of working design guidelines and paradigms to the wider masses that he’s started his own open working group to do this, called OWAG – The Open Web Apps Group.  They can be found at: http://www.github.com/owag

 

The next session is Matt Warren’s “Performance is a feature!” which tells us that performance of our applications is a first-class feature which should be treated the same as usability and all other basic functionality of our application.  Performance can be applied at every layer of our application from the UI right down to the database or even the “raw metal” of our servers, however, Matt’s talk will focus on extracting the best performance of the .NET CLR (Common Language Runtime) – Matt does briefly touch upon the raw metal, which he calls the “Mechanical Sympathy” layer and mentions to look into the Disruptor pattern which allows certain systems (for example, high frequency trading applications) to scale to processing many millions of messages per second!

Matt uses Stack Overflow as a good example of a company taking performance very seriously, and cites Jeff Atwood’s blog post, “Performance is a feature”, as well as some humorous quotations, as something that can provide inspiration for improvement.

Matt starts by asking Why does performance matter?, What do we need to know? and When do we need to optimize performance?

The Why starts by stating that it can save us money.  If we’re hosting in the cloud where we pay per hour, we can save money by extracting more performance from fewer resources.  Matt continues to say that we can also save power by increasing performance (and money too as a result) and furthermore, bad performance can lead to broken applications or lost customers if our applications are slow.

Matt does suggest that we need to be careful and land somewhere in the middle of the spectrum between “optimizing everything all the time” (which can back us into a corner) and “don’t optimize anything” (the extreme end of the “premature optimization is the root of all evil” approach).  Matt mentions various quotes by famous software architects, such as Rico Mariani from Microsoft who states “Never give up your performance accidentally”.

Matt continues with the “What”.  He starts by saying that “averages are bad” (such as “average response time”); we need to look at the edge cases and the outlier values.  We also need useful and meaningful metrics and numbers around how we can measure our performance.  For web site response times, we can say that most users should see pages load in 0.5 to 1.5 seconds, and that almost no-one should wait longer than 3 seconds, however, how do we define “almost no-one”?  We need absolute numbers to ensure we can accurately measure and profile our performance.  Matt also states that it’s a known fact that if only 1% of pages take (for example) more than 3 seconds to load, much more than 1% of users will be affected by this!

Matt continues with the When?  He says that we absolutely need to measure our performance within our production environment.  This is totally necessary to ensure that we’re measuring based upon “real-world” usage of our applications and everything that entails. 

Matt talks about the How? of performance.  It’s all about measuring.  Measure, measure, measure!  Matt mentions the Stack Overflow-developed “MiniProfiler” for measuring where the time is spent when rendering a complete webpage, as well as Opserver, which will profile and measure the actual servers that serve up and process our application.  Matt talks about micro-benchmarking, which is profiling small individual parts of our code, often just a single method.  He warns us to be careful of the GC (Garbage Collector) as this can and will interfere with our measurements, and shows some code involving forcing a GC.Collect() before timing the code (usually using a Stopwatch instance) which can help.  He states that allocating memory is cheap, but cleaning up after that memory is released isn’t.  Another tool that can help with this is Microsoft’s “PerfView” tool which can be run on the server and will show (amongst lots of other useful information) how and where the Garbage Collector is being called to clean up after you.
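As a rough sketch of the kind of micro-benchmark Matt describes (the iteration count and the code being measured are purely illustrative, and real measurements should also account for JIT warm-up and run many repetitions):

// Force a full collection up front so the GC is less likely to
// interfere with the measurement itself.
GC.Collect();
GC.WaitForPendingFinalizers();
GC.Collect();

var stopwatch = Stopwatch.StartNew();

for (var i = 0; i < 1000000; i++)
{
     // the code being measured goes here; calling ToString() is just a stand-in
     var text = i.ToString();
}

stopwatch.Stop();
Console.WriteLine("Total: {0}ms over 1,000,000 iterations", stopwatch.ElapsedMilliseconds);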

Matt finishes up by saying that static classes, although often frowned upon for other reasons, can really help with performance improvements.  He says to not be afraid to write your own tools, citing Stack Overflow’s “Dapper” and “Jil” libraries, written to perform their own database access and JSON processing, which have been, performance-wise, far better for them than other similar tools that are available.  He says the main thing, though, is to “know your platform”.  For us .NET developers, this is the CLR, and understanding its internals on a fundamental and deep level is essential for really maximizing the performance of our own code that runs on top of it.  Matt talks, finally, about how the team at Microsoft learned a lot of performance lessons when building the Roslyn compiler and how some seemingly unnecessary code can greatly help performance.  One example was a method writing to a log file, where adding .ToString() to int values before passing them to the logger can prevent boxing of the values, thus having a beneficial knock-on effect on the Garbage Collector.

 

After Matt’s talk it was time for lunch.  As is the custom at these events, lunch was the usual brown-bag affair with a sandwich, a packet of crisps, some fruit and a bottle of water.  There were some grok talks happening over lunch in the main concert hall, and I managed to catch one given by Iris Classon on Windows Universal application development which is developing XAML based applications for both Windows desktop and Windows Phone.

 

 

After lunch is Mark Rendle’s “The vNext Big Thing – ASP.NET shrinks down and grows up”.  Mark’s talk is all about the next version of ASP.NET that is currently in development at Microsoft.  The entire redevelopment is based around slimming down ASP.NET and making the entire framework as modular and composable as possible.  This is largely as a response to other web frameworks that already offer this kind of platform, such as NodeJS.  Mark even calls it NodeCS!

Mark states that they’re making a minimalist framework and runtime and that it’s all being developed as fully open source.  It’s built so that everything is shippable as a NuGet package, and it’s all being written to use runtime compilation using the new Roslyn compiler.  One of the many benefits that this will bring is the ability to “hot-swap” components and assemblies that make up a web application without ever having to stop and re-start the application!  Mark gives the answer to “Why are Microsoft doing this?” by stating that it’s all about helping versioning of .NET frameworks, making the ASP.NET framework modular, so you only need to install the bits you need, and improving the overall performance of the framework.

The redevelopment of ASP.NET starts with a new CLR.  This is the “CoreCLR”.  This is a cut-down version of the existing .NET CLR and strips out everything that isn’t entirely necessary for the most “core” functions.  There’s no “System.Web” in the ASP.NET vNext version.  This means that there’s no longer any integrated pipeline and it also means that there’s no longer any ASP.NET WebForms!

As part of this complete re-development effort, we’ll get a brand new version of ASP.NET MVC.  This will be ASP.NET MVC 6.  The major new element of MVC 6 will be the “merging” of MVC and WebAPI: they’ll now be one and the same thing.  They’ll also be built to be very modular, and MVC will finally become fully asynchronous, just as WebAPI has been for some time already.  Due to this, one interesting thing to note is that the ubiquitous “Controller” base class that all of our MVC controllers have always inherited from is now entirely optional!
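
As an illustration of that last point, a plain “POCO” controller can look something like the sketch below.  This is my own sketch rather than Mark’s demo code, and the attribute and namespace names reflect what the MVC 6 packages eventually settled on, so they may differ slightly from the alpha bits shown in the talk.

using Microsoft.AspNet.Mvc;  // pre-release MVC 6 namespace (later Microsoft.AspNetCore.Mvc)

// No Controller base class: the "Controller" suffix and the routing attribute
// are enough for the framework to discover the class and dispatch to it.
[Route("api/[controller]")]
public class GreetingsController
{
    [HttpGet]
    public string Get()
    {
        return "Hello from a plain class";
    }
}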

Mark continues by taking a look at another part of the complete ASP.NET re-boot.  Along with the new MVC and WebAPI, we’ll also get a brand new version of the Entity Framework ORM.  This is Entity Framework 7, and most notable is that the notion of database-first (or designer-driven) database mapping is going away entirely!  It’s code-first only!  There’ll also be no ADO.NET, and Entity Framework will now finally feature first-class support for non-SQL databases (e.g. NoSQL/document databases and Azure Tables).
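
Code-first only means the model is defined purely in classes, along the lines of the sketch below.  This is my own illustration rather than anything shown in the talk, and the exact class and method names shifted between the EF7 pre-releases; the shape shown here is what the framework eventually settled on.

using Microsoft.Data.Entity;  // namespace used by the early EF7 pre-releases

// The model lives entirely in code; there is no EDMX designer or
// database-first workflow any more.
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class ShopContext : DbContext
{
    public DbSet<Customer> Customers { get; set; }

    protected override void OnConfiguring(DbContextOptionsBuilder options)
    {
        // SQL Server here, but the new provider model is also intended to
        // cover non-relational stores such as Azure Table Storage.
        options.UseSqlServer("Server=.;Database=Shop;Trusted_Connection=True;");
    }
}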

The new version of ASP.NET will bring with it lots of command line tooling, and there’s also going to be first class support for both Mac and Linux.  The goal, à la NodeJS, is to be able to write your entire application in something as simple as a text editor, with all of the application and configuration code in simple text-based code files.  Of course, the next version of Visual Studio (codenamed “Visual Studio 14”) will have full support for the new ASP.NET platform.  Mark also details how the configuration of ASP.NET vNext applications will no longer use XML (or even a web.config); they’ll use the currently popular JSON format instead, inside a new “config.json” file.

Mark proceeds by showing us a quick demo of the various new command line tools, which are all named starting with the letter K.  There’s KVM, the K Version Manager, which is used for managing different versions of the .NET runtime and framework.  Then there’s KPM, the K Package Manager, which operates much like other package managers, such as NodeJS’s “npm”, and allows you to install packages and individual components of the ASP.NET stack.  The final tool is the K Runtime itself, whose command line executable is simply called “K”; it’s a small, lightweight process that forms the runtime core of ASP.NET vNext.

Mark then shows us a very quick sample website that consists of nothing more than 2-3 lines of JSON configuration, only one line of real actual code (a call to app.UseStaticFiles() within the Startup class’s “Configure” method) and a single file of static HTML, and the thing is up and running, writing the word “Hurrah” to the page.  The Startup.cs class is effectively a single-class replacement for the entire web.config and the contents of the App_Start folder!  The Configure method of the Startup class is effectively a series of calls to various .UseXXX methods on the app object:

app.UseStaticFiles(); 
app.UseEntityFramework().AddSqlServer(); 
app.UseBrowserLink(); 
etc.
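
Put together, the whole startup for a demo app of this kind is roughly the following.  This is my own sketch of the shape shown, not Mark’s exact code; interface and namespace names changed between the vNext pre-releases.

using Microsoft.AspNet.Builder;  // pre-release ASP.NET vNext namespace

public class Startup
{
    // Replaces web.config and the App_Start folder: the request pipeline is
    // composed entirely in code by chaining .UseXXX() calls on the app builder.
    public void Configure(IApplicationBuilder app)
    {
        app.UseStaticFiles();
    }
}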

Mark shows us where all the source code is: it’s all right there in public GitHub repositories, and the current compiled binaries and packages can be found on myget.org.  Mark closes the talk by showing the same simple web app from before, but now demonstrating that this web app, written using the “alpha” bits of ASP.NET vNext, can be run on an Azure website instance quite easily.  He commits his sample code to a GitHub repository that is linked to auto-deploy to a newly created Azure website, and lets us watch as Azure pulls down all the required NuGet packages, compiles his simple web application in real time and spins up the website in his browser!

20140913_155842_LLS The final talk of the day is Barbara Fusinska’s “Architecture – Why so serious?” talk.  This talk is about Barbara’s belief that all software developers should be architects too.  She starts by asking “What is architecture?”.  There are a number of answers to this question, depending upon who you ask: network distribution, software components, services, APIs, infrastructure, domain design.  All of these and more can be a part of architecture.

Barbara says her talk will be given by showing a demo application called “Let’s go out”, which is a simple scheduler, and demonstrating how architecture permeates all the different parts of it.  Barbara starts with the “basics”.  She broaches the subject of application configuration and says it’s best to start as you mean to go on by using an IoC container to manage the relationships and dependencies between objects within the application.
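
Barbara didn’t tie the idea to one particular container, but the registration step looks much the same with any of them.  Here’s a small sketch using Autofac, with a hypothetical IScheduleReader/SqlScheduleReader pair standing in for the demo application’s own types:

using Autofac;

public interface IScheduleReader { }
public class SqlScheduleReader : IScheduleReader { }

public static class CompositionRoot
{
    public static IContainer BuildContainer()
    {
        var builder = new ContainerBuilder();

        // Register the application's components once, up front, so that object
        // graphs are wired together by the container rather than by hand.
        builder.RegisterType<SqlScheduleReader>().As<IScheduleReader>();

        return builder.Build();
    }
}

// Usage: var reader = CompositionRoot.BuildContainer().Resolve<IScheduleReader>();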

She continues by saying that one of the biggest and most fundamental problems of virtually all applications is how to pass data between the code of our application and the database, and vice-versa.  She mentions ORMs and suggests that the traditional large ORMs are often far too complicated and can frequently bog us down with complexity.  She suggests that the more modern micro-ORMs (such as Dapper, PetaPoco and Massive, amongst others) offer a better approach and provide a much more lightweight layer between the code and the data.  Micro-ORMs “bring SQL to the front”, which is, after all, what we use to talk to our database.  Barbara suggests that it’s often better not to attempt to entirely abstract the SQL away or hide it too much, as can often happen with a larger, more fully-featured ORM tool.  On the flip-side, Barbara says that full-blown ORMs will provide us with an implicit unit-of-work implementation and are better suited to domain-driven design within the database layer.  For Barbara’s demo application, she uses Mark Rendle’s Simple.Data micro-ORM.
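
The “SQL to the front” idea is easiest to see with a micro-ORM example.  Barbara’s demo used Simple.Data, but the flavour is much the same with any micro-ORM; here’s an illustrative sketch using Dapper, where the Customer class and the query are made up for the example:

using System.Collections.Generic;
using System.Data.SqlClient;
using Dapper;

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class CustomerQueries
{
    private readonly string _connectionString;

    public CustomerQueries(string connectionString)
    {
        _connectionString = connectionString;
    }

    public IEnumerable<Customer> FindByName(string name)
    {
        using (var connection = new SqlConnection(_connectionString))
        {
            // The SQL is right there in front of you; Dapper simply maps the
            // result set onto the Customer class.
            return connection.Query<Customer>(
                "SELECT Id, Name FROM Customers WHERE Name = @name",
                new { name });
        }
    }
}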

Barbara says that the Repository pattern is really an anti-pattern and that it doesn’t really do much for your application.  She talks about how repositories often will end up with many, many methods that are effectively doing very similar things, and are used in only one place within our application.  For example, we often end up with “FindCustomersByID”, “FindCustomersByName”, “FindCustomerByCategory” etc. that all effectively select data from the customers database table and only differ by how we filter the customers.

Barbara shows how her own “read model” is a single class that deals only with reading data from the database and lives very close to the code that uses it, often an MVC controller action.  This is similar to the CQRS pattern, with the read model kept very separate and distinct from the domain model.  Barbara shows how she uses a “command pattern” to provide the unit of work and the identity pattern for the ORM.

Barbara then talks about the services within her application and how these are very much based upon the domain model.  She talks about only exposing a method to perform some functionality, rather than exposing properties, for example.  This applies not just to the end user, but to other programmers who might have access to our classes.  She makes the property accessors private to the class and only allows access to them via a public method.  She shows how her application allows moving a schedule entry, but the business rules should only allow it to be moved forward in time.  Exposing DateTime properties would allow setting any dates and times, including those in the past, thus violating the domain rules.  By only allowing these properties to be set via a public method, which performs this domain validation, the setting of the dates and times can be better controlled.
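
In code, that encapsulation looks roughly like the following (a hypothetical ScheduleEntry class of my own devising, not Barbara’s actual demo code):

using System;

public class ScheduleEntry
{
    // Private setters: callers can't put the entry into an invalid state by
    // assigning arbitrary dates directly.
    public DateTime Start { get; private set; }
    public DateTime End { get; private set; }

    public ScheduleEntry(DateTime start, DateTime end)
    {
        Start = start;
        End = end;
    }

    // The only way to move the entry is through this method, which enforces
    // the domain rule that entries may only be moved forward in time.
    public void MoveTo(DateTime newStart)
    {
        if (newStart <= Start)
        {
            throw new InvalidOperationException(
                "A schedule entry can only be moved forward in time.");
        }

        var duration = End - Start;
        Start = newStart;
        End = newStart + duration;
    }
}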

Barbara says that the Command pattern is actually a better approach than using services directly, as commands can greatly reduce dependencies within things like MVC controllers.  Rather than having dependencies on multiple services like this:

public MyCustomerOrderController(ICustomerService customerService, IOrderService orderService, IActivityService activityService)
{
   ...
}

This controller’s purpose is to provide a mechanism to work with customers, the orders placed by those customers and the activity on those orders.  We can, instead, “wrap” these services up into commands.  These commands will, internally, use multiple services to implement a single domain “command”, like so:

public MyCustomerOrderController(IAddActivityToCustomerOrderCommand addActivityCommand)
{
   ...
}

This provides a single domain command to perform the specific domain action, and means that the MVC controller behind the UI for adding activities to customer orders has only one dependency: the command class itself.
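
Internally, such a command might look something like the sketch below.  The service interfaces are the same hypothetical ones from the earlier snippet, and the methods called on them are made up for the sake of illustration:

public class AddActivityToCustomerOrderCommand : IAddActivityToCustomerOrderCommand
{
    private readonly ICustomerService _customerService;
    private readonly IOrderService _orderService;
    private readonly IActivityService _activityService;

    public AddActivityToCustomerOrderCommand(
        ICustomerService customerService,
        IOrderService orderService,
        IActivityService activityService)
    {
        _customerService = customerService;
        _orderService = orderService;
        _activityService = activityService;
    }

    // One domain operation, composed from the lower-level services; the
    // controller only ever sees the single IAddActivityToCustomerOrderCommand
    // interface.
    public void Execute(int customerId, int orderId, string activityDescription)
    {
        var customer = _customerService.GetCustomer(customerId);      // hypothetical method
        var order = _orderService.GetOrderFor(customer, orderId);     // hypothetical method
        _activityService.AddActivity(order, activityDescription);     // hypothetical method
    }
}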

20140913_163951_LLS With the final session over, it was time to head back to the main concert hall to wrap up the day’s proceedings, thank all those who were involved in the event and distribute the prizes, generously donated by the various event sponsors.  No prizes for me this time around, although some very lucky attendees won quite a few prizes each!

After the wrap-up there was a drinks reception in the same concert hall building; however, I wasn’t able to attend this as I had to set off on the long journey back home.  It was another very successful DDD event, and I can’t wait until they do it all over again next year!