DDD North 2014 In Review

This past Saturday, 18th October 2014, saw another DDD (Developer, Developer, Developer) event.  This one was the 4th annual DDD North event, this year held at the University of Leeds.

After arriving and signing in, I proceeded through the corridors to the communal area where we were all greeted with a cup of coffee (or tea) and a nice Danish pastry!  It’s always a nice surprise to get a cake with your morning coffee, so although I wasn’t really hungry, having recently eaten a large breakfast, I decided that a Danish pastry covered in sweet, sweet icing was too much of a temptation to refuse!  After this delightful breakfast, I headed down the corridor for the first of the day’s sessions.

The first session of the day is Liam Westley’s “An Actor’s Life For Me”, which talks about parallel processing with multiple threads using the Task Parallel Library and utilising the Actor model.  Liam introduces the Actor model and states it was first described by Carl Hewitt as early as 1973.  The dilemma we have with parallel processing is shared state, which forces us to lock around areas of memory where multiple threads may try to access that state.  The Actor model solves this by having no shared state within the system; instead, each process takes data that is not shared and outputs its results to the next process in the processing pipeline.  Liam uses an analogy of making a cup of tea, and the steps involved in that, whilst also getting an itch that needs scratching along the way.  The itch (and thus the scratch) can happen during any of the tea-making steps, so the combinations of how you alternate between making tea and scratching grow exponentially.

Liam talks about how CPUs have been multi-threaded and multi-core for many years now, first arriving around the same time as .NET v1.0, whilst in the same time frame our developer tools haven’t really kept up.  .NET 1.0 pretty much gave us raw access to how Windows handles threads via the ThreadPool, which meant managing multiple threads and sharing state between them was very difficult.  .NET 2.0 gave us a SynchronizationContext, but multi-threaded programming was still very hard.  Eventually we got the much simplified async & await keywords, and now we also have the Task Parallel Library’s Dataflow blocks (TPL Dataflow), which provide us with the Actor pattern.  This basically allows us to write our code in individual “blocks” which are essentially black boxes sharing no state with any other block.  We can then chain these blocks together into a processing pipeline, giving us the ability to perform some computational process without sharing state.

Liam then shows us a demo of a console application which produces an MD5 hash for a number of large files in a folder.  The first iteration of the demo doesn’t use the Task Parallel Library (TPL) at all, so performs no parallel processing and simply processes each file, one at a time, on a single thread, taking some time to complete.  The second iteration Liam shows us uses the TPL, but still only works in a single-threaded manner, wrapping the hash calculation function in a TPL ActionBlock.  This does the same as the single-threaded version as, again, no parallel processing is occurring.  The final iteration runs in a multi-threaded manner simply by setting the MaxDegreeOfParallelism property of the block’s configuration (ExecutionDataflowBlockOptions).  What’s really amazing about these ActionBlocks is that they inherently and implicitly handle all input and output buffering and queuing by themselves.  This means we can post work into the processing pipeline at a faster rate than it can be executed, and the TPL will handle the queuing for us.
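To give a flavour of what that looks like in code, here’s a minimal sketch of an ActionBlock doing the same kind of work (the folder path, output format and degree of parallelism are my own illustrative choices, not Liam’s actual demo code):

using System;
using System.IO;
using System.Security.Cryptography;
using System.Threading.Tasks.Dataflow;

class HashingDemo
{
    static void Main()
    {
        // Each posted file path is hashed on one of up to four threads;
        // the block queues any backlog of paths internally for us.
        var hashBlock = new ActionBlock<string>(path =>
        {
            using (var md5 = MD5.Create())
            using (var stream = File.OpenRead(path))
            {
                var hash = BitConverter.ToString(md5.ComputeHash(stream));
                Console.WriteLine("{0}: {1}", Path.GetFileName(path), hash);
            }
        },
        new ExecutionDataflowBlockOptions { MaxDegreeOfParallelism = 4 });

        foreach (var file in Directory.EnumerateFiles(@"C:\LargeFiles"))
            hashBlock.Post(file);

        hashBlock.Complete();        // signal that no more input is coming
        hashBlock.Completion.Wait(); // wait for the internal queue to drain
    }
}

Leaving MaxDegreeOfParallelism at its default of 1 gives exactly the single-threaded behaviour of the second iteration of the demo.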

Liam next talks about separating the calculation of the file hashes from the output of the results by performing the hashing in a TransformBlock rather than an ActionBlock, and only using an ActionBlock to print the hash value to the UI.  The output of the TransformBlock (the hash value and the filename) is passed to the ActionBlock in the processing pipeline.

Liam then introduces the BufferBlock.  This acts as a propagator between other blocks, providing a FIFO queue of data.  Liam talks about how, in our example, we can add a BufferBlock in front of all of the TransformBlocks, which will effectively distribute the “load” evenly between the TransformBlocks as we provide the files to be hashed.

Next, Liam shows how we can use the LinkTo method to filter which messages pass along the processing pipeline, as LinkTo allows us to supply a predicate to perform the filtering.  This could be used (for example) to hash files of different types with different TransformBlocks (i.e. an MP3 file is processed differently than an MP4 file etc.).  Liam also introduces the TransformManyBlock which takes an IEnumerable of things to process.  This means we no longer have to write our own loop over each of the files to be processed; instead, we can simply pass in the contents of the folder as a complete IEnumerable collection.
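A rough sketch of how those pieces fit together (the file types and hashing logic are placeholders of my own, not Liam’s code):

using System;
using System.IO;
using System.Security.Cryptography;
using System.Threading.Tasks.Dataflow;

class PipelineSketch
{
    static void Main()
    {
        // Expand a folder path into the individual files it contains.
        var expandFolder = new TransformManyBlock<string, string>(folder =>
            Directory.EnumerateFiles(folder));

        // Transform a file path into a "name: hash" string.
        var hashFile = new TransformBlock<string, string>(path =>
        {
            using (var md5 = MD5.Create())
            using (var stream = File.OpenRead(path))
                return Path.GetFileName(path) + ": " +
                       BitConverter.ToString(md5.ComputeHash(stream));
        });

        // The final stage just prints the result.
        var printResult = new ActionBlock<string>(Console.WriteLine);

        // The predicate on LinkTo means only MP3 files reach this hasher;
        // everything else is discarded via a null target so it doesn't
        // sit in the output buffer forever.
        expandFolder.LinkTo(hashFile,
            path => path.EndsWith(".mp3", StringComparison.OrdinalIgnoreCase));
        expandFolder.LinkTo(DataflowBlock.NullTarget<string>());
        hashFile.LinkTo(printResult);

        expandFolder.Post(@"C:\Music");
        expandFolder.Complete(); // (completion propagation omitted for brevity)
    }
}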

Finally, Liam mentions both the BroadcastBlock and the BatchBlock.  The BroadcastBlock is effectively a pub/sub mechanism, as used in message buses etc., which allows fanning-out of messages and broadcasting them to other blocks.  The BatchBlock allows batching of inputs before passing the messages along the processing pipeline.

All in all, Liam’s talk was very informative and shows just how far we’ve come in our ability to relatively easily and simply perform parallel processing in a multi-threaded manner, taking advantage of all of the cores available to us on a modern-day machine.  Liam’s demo code has been made available on GitHub for those interested in learning more.

 

The next talk is Ian Cooper’s “Not Just Layers! – What can pipelines and events do for you?”, which is a talk about data flow architectures, specifically pipelines and events.  Ian first talks about general software architecture and how a practice evolves from the basic application of a skill through to genuine craftsmanship and best practices.  Software architecture has many styles, but a single style can be explained as a series of components and connectors.  Components are the individual parts of an architecture that do something, and the connectors are how multiple components talk to each other.

Ian states that Data Flow architectures are more driven by behaviour rather than state, and says that functional languages (such as F#) are better suited to behaviourally modelled architecture, whereas object oriented (OO) languages like C# are better suited to solve state driven processes and architectures.

Ian uses the KWIC (Keyword in context) algorithm, which is how Unix indexes text in its man pages, as the reference for the session.

Ian talks about pipes and filters, describing a flow of data processing along a pipeline of specific stages.  A push pipeline “pushes” data along the pipeline: it usually consists of a pump at the front which pushes data into the pipeline, a series of filters which are the processing tasks, with each preceding filter responsible for pushing data to the succeeding filter, and usually a sink at the end that receives the final result.  There are also pull pipelines, of which .NET’s LINQ is an example, where each filter further along the pipeline pulls the data from the previous filter, rather than the previous filter pushing the data on.
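As a small illustration of the pull style (my own example, not Ian’s KWIC code), each filter can be written as an iterator that only does work when the next stage asks for an item:

using System;
using System.Collections.Generic;
using System.Linq;

static class PullPipelineSketch
{
    // A filter: pulls lines from its source one at a time and yields
    // cleaned-up lines only when the next stage (or the sink) asks.
    public static IEnumerable<string> RemoveNoise(this IEnumerable<string> lines,
                                                  ISet<string> noiseWords)
    {
        foreach (var line in lines)
        {
            var kept = line.Split(' ').Where(word => !noiseWords.Contains(word));
            yield return string.Join(" ", kept);
        }
    }

    static void Main()
    {
        var noise = new HashSet<string> { "the", "a", "of" };
        var source = new[] { "the keyword in a context", "a pipeline of filters" };

        // Nothing executes until the foreach (the sink) starts pulling.
        foreach (var line in source.RemoveNoise(noise).OrderBy(l => l))
            Console.WriteLine(line);
    }
}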

Ian mentions how a pipes and filters architecture is similar to a batch sequence architecture (see below for the subtle difference between them).  He talks about how errors that may happen in a long-running sequence, and that need the entire processing stream to be undone, are better suited to a batch sequence architecture than a pipes and filters architecture, due to the more disconnected nature of pipes and filters.

Ian talks about parallel execution and the potential pub/sub problem of consumers awaiting data and not knowing when the entire workload is completed.  If individual steps are either faster or slower than the preceding or succeeding steps in the chain, this can cause problems with either no data, or too much data, to process.  The solution to this problem is to introduce a “buffer” in between steps within the chain.  Such things as message queues (e.g. MSMQ, RabbitMQ) or in-memory caching mechanisms (such as those provided by tools like Redis) can offer this.

Ian then shows us an in-memory demo of a program using the pipes and filters architecture, again using the KWIC algorithm for the demo code.  Ian states that, ideally, filters in a pipeline shouldn’t really know about other filters; it’s okay for a filter to be aware of an abstraction of the next filter in the pipeline, but not the concrete instance of that filter.  Ian shows the same demo using the manual pipeline and filters, and also a LINQ implementation.  The LINQ example has its filters implemented as fluent method calls simply chained together (i.e. TextLines.Shift(x => x).RemoveNoise(x => x).Sort() etc.).  Ian then shows the same example written in F#.  Using F#’s pipeline operator “|>”, the pipeline is even easier to see in the code that implements it.

Ian shows us the demo code using a message queue (MSMQ behind the scenes).  This is a pull-based pipeline, where each filter down the chain pulls messages from a message queue to which messages are posted by the preceding filter in the pipeline chain.  Ian also shows us the pipeline running in a parallel manner, using the Task Parallel Library.  Each filter has distinct inputs and outputs defined as BlockingCollection<T>, allowing the data to flow in and out, but blocking the individual thread if the next filter in the pipeline isn’t ready to receive that data.
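A sketch of a single filter in that style (the upper-casing “work” here is just a stand-in, not Ian’s implementation):

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class BlockingCollectionFilterSketch
{
    // One filter stage: pulls lines from its input collection, pushes results
    // to its output collection, and blocks automatically if either side is busy
    // or full (the bounded capacity provides the back-pressure).
    static Task RunUpperCaseFilter(BlockingCollection<string> input,
                                   BlockingCollection<string> output)
    {
        return Task.Run(() =>
        {
            foreach (var line in input.GetConsumingEnumerable())
                output.Add(line.ToUpperInvariant());

            output.CompleteAdding(); // tell the next filter we're finished
        });
    }

    static void Main()
    {
        var input = new BlockingCollection<string>(boundedCapacity: 10);
        var output = new BlockingCollection<string>(boundedCapacity: 10);

        var filter = RunUpperCaseFilter(input, output);

        input.Add("keyword in context");
        input.CompleteAdding();

        foreach (var result in output.GetConsumingEnumerable())
            Console.WriteLine(result);

        filter.Wait();
    }
}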

Finally, Ian talks about batch sequences and how they differ slightly from a pipes and filters architecture.  He talks about how batch sequencing was done many years ago, with magnetic tapes being passed from one reel-to-reel processing machine to the next!  The main difference is that in a batch sequence, each filter has to complete the entire workload of data before passing everything as output to the next filter in the chain.  By contrast, in pipes and filters each filter only processes one small piece of work, or one individual piece of data, before passing it down the processing chain.  This means that true pipes and filters is much better suited to being parallelized than a batch sequence architecture.

 

The next session is Richard Tasker’s “BDD and why you should be doing it”.  Richard starts by introducing BDD (Behaviour Driven Development) and where it originated.  It was first proposed by Dan North as a “solution” to some of the failings of TDD, such as: Where do you start with TDD?  What should you test, and what shouldn’t you?  And how much do you test in one go?

Richard starts by talking about his first exposure to understanding BDD.  This started with writing expressive names for standard unit tests.  This helps us understand what the test is testing and thus what the code is doing, i.e. it expresses a behaviour of the code.  It’s from here that we can make the mental leap from testing and exercising small methods of code to testing the more user-centric behaviour of the overall application.

Richard shows a series of database entity relationship diagrams as the first mechanism he used to design an application for modelling car parts and their relationships to vehicles.  This had to go through a number of iterations to fully realise the entities involved and their relationships to each other, and it wasn’t the most effective way to achieve the overall design.  Using a series of user stories which could be turned into BDD tests was the way forward.

Richard next introduces the MoSCoW method as the way in which he started writing his BDD tests.  Using this method, combined with a new style of user story template, emphasises the behaviour and business function.  Instead of writing “As a <type of user> I want <some functionality> so that <some benefit>”, we instead write “In order to <achieve some value>, as a <type of user>, I should have <some functionality>”.  The last part of the user story gets the relevant must/should/could/won’t wording in order to help achieve effective prioritization with the customer.

Richard then introduces SpecFlow as his BDD tool of choice.  He shows a simple demo of a single SpecFlow acceptance test, backed by a number of standard unit tests.  Richard says that you probably don’t want to do this for every individual tiny part of your application, as this can lead to an abundance of unit tests and a test maintenance burden.  To help solve this, Richard talks about decision frameworks, of which a popular one is called “Cynefin”.  It defines the states of Obvious, Chaotic, Complex and Complicated.  Each area of the application, and each discrete piece of functionality, can be assessed to see which of the four Cynefin states it falls into.  From here, we can decide how many or how few BDD acceptance tests are best utilised for that feature to deliver the best return on investment.  Richard says that acceptance tests are often best used in the Complicated & Complex states, but are often less useful in the Obvious & Chaotic states.
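For anyone who hasn’t seen SpecFlow, the shape of such an acceptance test looks roughly like the sketch below; the feature wording and the PartsCatalogue class are my own invention for illustration, not Richard’s demo.

using System.Collections.Generic;
using TechTalk.SpecFlow;
using Xunit;

// The Gherkin feature this class binds to might read:
//   In order to sell compatible parts, as a parts manager,
//   I should be able to see which vehicles a part fits.
//
//   Scenario: Part is listed against a vehicle
//     Given a part called "Oil Filter"
//     When I assign it to the vehicle "Ford Focus"
//     Then the "Ford Focus" parts list should contain "Oil Filter"

[Binding]
public class PartAssignmentSteps
{
    // Hypothetical class standing in for the real application code under test.
    private readonly PartsCatalogue _catalogue = new PartsCatalogue();

    [Given("a part called \"(.*)\"")]
    public void GivenAPartCalled(string partName)
    {
        _catalogue.AddPart(partName);
    }

    [When("I assign it to the vehicle \"(.*)\"")]
    public void WhenIAssignItToTheVehicle(string vehicle)
    {
        _catalogue.AssignLastPartTo(vehicle);
    }

    [Then("the \"(.*)\" parts list should contain \"(.*)\"")]
    public void ThenThePartsListShouldContain(string vehicle, string partName)
    {
        Assert.Contains(partName, _catalogue.PartsFor(vehicle));
    }
}

public class PartsCatalogue
{
    private readonly Dictionary<string, List<string>> _partsByVehicle =
        new Dictionary<string, List<string>>();
    private string _lastPart;

    public void AddPart(string partName) { _lastPart = partName; }

    public void AssignLastPartTo(string vehicle)
    {
        if (!_partsByVehicle.ContainsKey(vehicle))
            _partsByVehicle[vehicle] = new List<string>();
        _partsByVehicle[vehicle].Add(_lastPart);
    }

    public IEnumerable<string> PartsFor(string vehicle)
    {
        return _partsByVehicle.ContainsKey(vehicle)
            ? _partsByVehicle[vehicle]
            : new List<string>();
    }
}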

Richard closes his session with “why” we should be doing BDD.  He talks about many of the benefits of adopting BDD and says that it is a great helper for teams that are new to TDD.  Richard says that BDD helps to reduce communication barriers between the developers and other technical professionals and the perhaps less technical business stakeholders and that BDD also helps with prioritizing which features should be implemented before others.  BDD also helps with naming things and defining the specific behaviours of our application in a more user-oriented way and also helps to define the meaning of “done”. 

 

After Richard’s talk, it was lunchtime.  Lunch was served in the same communal area where we’d all gathered earlier at breakfast time and consisted of a rather nice sandwich, a bag of crisps and a drink.  It was nice that all three items could be chosen by each individual attendee from a selection available.

After enjoying this very nice lunch, I decided to skip the grok talks (these are short, 10 minute talks that generally happen over lunchtime at the various DDD conferences) and get some fresh air outside.  That didn’t last too long, as I found the Pack Horse pub just down the road from the area of the university used for the conference.  This is a pub belonging to a small local microbrewery called The Burley Street Brewhouse, so I decided I had to go in and sneak a cheeky pint of bitter as a lunchtime treat.  It was indeed a lovely pint, and afterwards I headed back to the university and the DDD North conference.  I went back in via an entrance close to the communal area, which still housed some conference attendees, and realised that a number of sandwiches and crisps were still available for any attendee that wanted second helpings!  Since I was still a bit peckish after my liquid refreshment (and knowing that I wouldn’t be eating until quite late in the evening at the after-conference Geek Dinner), I decided to go for seconds!  After enjoying my second helpings, I headed off for the first session of the afternoon.

 

The first afternoon session is Andrew MacDonald’s “CQRS & Event Sourcing”.  Andrew first talks about the how and why of starting development on a brand new project.  Andrew has his own development project, treevue.com, for which he decided to try out CQRS and event sourcing, as they were two new and interesting techniques that Andrew believed could help with the development of his software.  treevue.com is a web product which offers virtual data rooms.  Andrew talks about the benefits of CQRS and event sourcing, such as allowing a truly abstracted data storage model, providing domain driven design without noise, and the new possibilities that separating reads and writes to the data model via CQRS could open up for the software.  Andrew states that it’s not appropriate for everything and quotes Udi Dahan, who said that most people who have used CQRS shouldn’t have done so!

CQRS is Command Query Responsibility Segregation and keeps commands (processes that alter our data) entirely separate and distinct from queries (processes that only read our data but don’t change it).  The models behind each of these can be entirely different, even when referring to the same domain entities, so the model used when reading (for example) a Customer can have a different design from the model used when writing one.
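A bare-bones sketch of that separation (the types and names are illustrative, not from Andrew’s treevue code):

using System;

// Write side: a command carries the intent to change state and is handled
// by code that validates and updates the domain, returning nothing to read.
public class RenameCustomerCommand
{
    public Guid CustomerId { get; set; }
    public string NewName { get; set; }
}

public interface ICommandHandler<TCommand>
{
    void Handle(TCommand command);
}

// Read side: a query returns a model shaped purely for display, which need
// not match the write-side domain model at all.
public class CustomerSummary
{
    public Guid Id { get; set; }
    public string Name { get; set; }
    public int OpenOrderCount { get; set; }
}

public interface IQueryHandler<TQuery, TResult>
{
    TResult Handle(TQuery query);
}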

Andrew talks about the overall architecture of a system that employs CQRS vs. one that doesn’t.  Without CQRS, reads and writes flow through the same layers of our application.  With CQRS, we can have entirely different architectures for reading vs. writing.  Usually the writing architecture is similar to the full non-CQRS architecture, flowing through many layers including data access, validation layers etc., but often the reading architecture uses a much flatter set of layers to read the data, as concerns such as validation are generally not required in this context.  The two separate reading and writing stacks can often even connect to separate databases which provide “eventual consistency” with each other.  This also means reading and writing can scale independently of each other, and given that many apps read far more than they write, this can be invaluable.

Andrew then introduces event sourcing which, whilst separate and different from CQRS, does play well with it.  Andrew shows a typical relational model of a purchase order with multiple purchase order line item types related to it and a separate shipping info type attached.  This model only allows us to see the state of the order and its data as it stands right now.  Event sourcing instead stores the timeline of events against the purchase order, as each alteration to the entity is stored separately in an event queue/database.  I.e. a line item is added with an (incorrect) quantity of 4, but is corrected with a later event deducting 2 from the line item, leaving a line item with a correct quantity of 2.  This provides us with the ability not only to see how the data looks “right now”, but to be able to recreate the entire state of the entity model at any given point in time.
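In code, rebuilding state from events looks something like the following sketch, which mirrors the quantity example above (the event and entity names are my own, not Andrew’s):

using System;
using System.Collections.Generic;

// Events are immutable facts, stored in the order they happened.
public abstract class OrderEvent { }

public class LineItemAdded : OrderEvent
{
    public string Sku { get; set; }
    public int Quantity { get; set; }
}

public class LineItemQuantityReduced : OrderEvent
{
    public string Sku { get; set; }
    public int ReducedBy { get; set; }
}

public class PurchaseOrder
{
    private readonly Dictionary<string, int> _lineQuantities = new Dictionary<string, int>();

    // Current state is never stored directly; it is rebuilt by replaying
    // the events (or only those up to a chosen point in time).
    public static PurchaseOrder Replay(IEnumerable<OrderEvent> history)
    {
        var order = new PurchaseOrder();
        foreach (var evt in history)
            order.Apply(evt);
        return order;
    }

    private void Apply(OrderEvent evt)
    {
        var added = evt as LineItemAdded;
        if (added != null)
        {
            int current;
            _lineQuantities.TryGetValue(added.Sku, out current);
            _lineQuantities[added.Sku] = current + added.Quantity;
            return;
        }

        var reduced = evt as LineItemQuantityReduced;
        if (reduced != null)
            _lineQuantities[reduced.Sku] -= reduced.ReducedBy;
    }

    public int QuantityOf(string sku)
    {
        int quantity;
        _lineQuantities.TryGetValue(sku, out quantity);
        return quantity;
    }
}

Replaying a LineItemAdded with a quantity of 4 followed by a LineItemQuantityReduced of 2 yields the corrected quantity of 2, exactly as described above.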

Andrew then proceeds to talk about Azure’s role in his treevue app and how he’s utilised Azure’s Table Storage as a first class citizen.  He then shows us a quick demo and some code using EventProcessors and CommandProcessors which effectively implement the CQRS pattern. 

Finally, Andrew shows how he uses something called a “snapshot” when reading domain aggregates, which is effectively a caching layer used to improve performance around building the domain aggregate models from the various events that make up a specific state of the model as at a certain point in time.  This is particularly important when running applications in the cloud and using such technology as Azure Table Storage, as this will only serve back a maximum of 1000 rows per query before you, as the developer, have to make further requests for more data.  Andrew points out that the demo code is available on GitHub for those interested in diving deeper and learning more from his own implementation.

 

The final session for today is David Whitney’s “Lessons Learnt running a public API”.  David is a freelance consultant who has worked for many companies writing large public APIs.  The company used for reference during David’s talk is Just Giving, for whom he did much of this work.  David states how the project to build the Just Giving API grew so large that the API eventually became the company’s biggest revenue stream.

David’s talk is a fast-paced set of tips, tricks and lessons that he has personally learned over many years working with clients developing their large public-facing APIs.

David starts by stating that your API is your public-facing contract with the world, and that it will live or die by the strength of its documentation.  If the documentation is bad, people will write bad implementations, and you can’t blame them when that happens.  Documentation for APIs can either be created first, driving the design of the API, or it can be done the other way around, where you write the API first and document it afterwards.  Either approach is viable, so long as documentation does indeed exist and is sufficiently comprehensive to allow your consumers to build quality implementations of your API.  David says it’s often best to host the docs with the API itself, so that if you hit the API endpoint with a web browser as a human user, you’ll be served the API documentation.

David states that the DTOs returned from API calls should provide “examples” of themselves.  This is a simple mechanism that lets users “discover” your API and helps them to understand just how they should use it.  Code such as this:

public interface IProvideAnExampleOf<TMyself>
{
    ExampleOf<TMyself>[] BuildExample();
}

public class ExampleOf<T>
{
    public string Description { get; set; }
    public T Example { get; set; }

    public ExampleOf(string description, T example)
    {
        Description = description;
        Example = example;
    }
}

will enable your API to provide examples of itself to your users.  David states that anything you can do to help your API consumers will greatly cut down the inevitable avalanche of help requests that will hit you.

Following on from individual examples, it’s good to have your API and its documentation provide “recipes” for how to use large sections of your API, and how to call discrete service endpoints in a coherent chain in order to achieve a specific outcome.  Recipes help your users to “fall into the pit of success”.  Providing things like a complete sample web application, ideally written in multiple languages, that exercises various parts of your API is even better.

David next talks about versioning of your API, and says that it’s something you have to ensure you have a policy on from Day 1.  Retrofitting versioning is very hard and often leads to broken or awkward implementations.  Adding version numbers to the URI is perhaps the easiest to achieve, but it’s not really the best approach.  It’s far better to add the API version in the HTTP header.
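As an illustration of header-based versioning from the consumer’s side (the header name and media type below are made-up conventions of mine, not Just Giving’s actual API):

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class VersionedApiClient
{
    static async Task Main()
    {
        var client = new HttpClient();

        // One convention: request a version via a vendor media type in the Accept header.
        client.DefaultRequestHeaders.Accept.Add(
            new MediaTypeWithQualityHeaderValue("application/vnd.example.v2+json"));

        // Another: a custom header, keeping the URI itself version-free.
        client.DefaultRequestHeaders.Add("X-Api-Version", "2");

        var response = await client.GetAsync("https://api.example.com/donations/123");
        Console.WriteLine((int)response.StatusCode);
    }
}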

He continues by talking about modifying existing API calls.  Don’t.  Just don’t do it, at any cost!  If you really must, you can add additional data to the return values of your API endpoints, but you must never change or remove anything that’s already there.  You must also never rename anything.  If you need to do any of this, use a new version.  This leads into content types, and here David states that you’ll really need to provide all the different content types that people will realistically use.  Whilst many web developers today see JSON as the de-facto standard, many companies – especially large enterprises – are still using XML as their standard, so your API is going to have to support both.  David also mentions that JSONP is another, growing, standard that you may well have to support, but be careful if you do, as you’ll need to be mindful of possible errors related to CORS (Cross Origin Resource Sharing), which allows resources such as JavaScript to be called from domains other than the one where the resource is hosted.

David talks about the importance of making statistics for your API available and public.  You need to ensure you’re gathering performance and other statistics on every method call.  One possibility is returning some statistics back to the consumer directly in the HTTP response headers of every request to your API, such as the name of the server that serviced the consumer’s request.  This is especially useful if you’ve got a large server farm and need help debugging service call issues.  You should also ensure you publicly expose your statistics in a dashboard via status updates, uptime pages and more.  For one, it’ll help you deflect any criticism that your performance is broken, and it’ll provide consumers with confidence that your API is up, that it stays up, and that you’re on top of maintaining it.  (Unless, of course, your performance really is broken, in which case that same fancy dashboard will help give you visibility into diagnosing and correcting the issue!)  David next mentions the importance of a good staging server for user testing.  Don’t simply expose an internal “test” server that you may have cobbled together.  David relates first-hand experience of just how difficult it can be getting users to stop using your “test” server after you’ve allowed them access!

The next part of the session focuses on the overall approach to the design of your API.  David stresses that it’s good to go back and read the original documentation on RESTful architecture, written by Roy Fielding as a doctoral dissertation back in the year 2000.  Further, it’s important to lean on existing conventions – always return canonical URIs rather than relative ones, and always supply IDs and URIs when returning data that refers to any domain or service entity.  As well as ensuring you follow existing standards, it’s also important to investigate new, emerging standards.  Keeping an eye on standards such as HAL (Hypertext Application Language) and JSON API means that, should such standards quickly become mainstream, you can adapt your API to support them.

David continues his session by talking about the cardinal sins of API design.  The first thing you must never do is this:

{
    "PageType": 1,
    "SomeText": "This is some text"
}

What, exactly, is PageType 1?  We’re talking, of course, about magic numbers.  Don’t do it.  This forces your consumers to go off and look the value up in the documentation, and whilst that documentation should definitely exist, there’s no reason why you can’t provide a more meaningful value to your consumer.  You have to think like a consumer at all times and try to imagine the applications they’re going to build using your API.  Also, don’t ever ask a user for data that your API itself can’t supply – i.e. don’t ever request some specific identifier for a resource if you don’t provide that identifier when returning that resource in other requests.  Build your services RESTfully; don’t build XML-RPC with SOAP envelopes.  Be resource oriented, and always ensure you use the correct HTTP verbs for all of your services’ actions – especially understand the difference between POST & PUT.
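One simple way to avoid the magic number, sketched here with Json.NET (my choice of serializer for illustration; the enum values are hypothetical):

using Newtonsoft.Json;
using Newtonsoft.Json.Converters;

public enum PageType
{
    Unknown = 0,
    FundraisingPage = 1,
    CharityProfile = 2
}

public class PageDto
{
    // Serialises as "FundraisingPage" rather than the bare number 1, so a
    // consumer can read the response without reaching for the documentation.
    [JsonConverter(typeof(StringEnumConverter))]
    public PageType PageType { get; set; }

    public string SomeText { get; set; }
}

Serializing new PageDto { PageType = PageType.FundraisingPage, SomeText = "This is some text" } then produces {"PageType":"FundraisingPage","SomeText":"This is some text"}, which is self-describing.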

Make sure you understand multi-tenancy and how that will impact the design and implementation of your API.  Good load balancers and proxies can balance based on request headers, so it’s really easy and useful to provide multi-tenancy in this manner.  Also ensure you use a good sandbox environment for testing and don’t forget to implement good rate limiting!   Users and consumers will make mistakes in their code and you don’t want them to take down your service when they do.

David talks about error handling and says you should validate everything you can when requests are made to your API.  Try to return errors in batches if possible, and always make sure that error messages are useful and readable.  Similar to the magic numbers above, don’t return only an arcane error code to your consumers, forcing them to cross-reference it from deep within your documentation.

David moves onto authentication for your API and states that this is an area that can get a bit painful.  Basic HTTP auth will get you going, and can be sufficient if your API is (and will remain) fairly small scale; however, if your API is large or likely to grow to a larger scale – and especially if your API will be used by users via third parties – you’ll quickly grow out of Basic auth and need something more robust.  He says that OAuth is the “best worst” alternative: it provides good security but can be painful to implement.  Fortunately, there are many third-party providers out there to whom you can outsource your authorisation concerns.

David then discusses providing support for your API to your users.  He says the best approach is to simply put it all out there in the public domain.  This provides transparency which is a good thing, but can also encourage a “self-service” model where people within the community will start to help provide answers and solutions to other community members.  Something as simple as a Google Group or a tag on Stack Overflow can get you started.

David closes his session by stating that, as your API grows over time, you should always ensure that you’re never attempting to serve only a single customer.  Keep your API clean and generic and it will remain useful to all consumers, rather than compromising that usefulness for just a minority of users.  And finally, if your API is or will become a first-class product for your business, just as the Just Giving API became for them, make sure you have a full product team within your business to deal with its day-to-day operation and its ongoing maintenance and development.  It’s all too easy to think that the API isn’t strictly a “product” due to its highly technical and slightly opaque nature; however, that would be a mistake.

 

After David’s session, we all congregated in the main lecture theatre for the wrap-up presentation from Andy Westgarth, one of the conference organisers.  This involved thanking the very generous sponsors of the event, as without them there simply wouldn’t be a DDD conference, and it also involved a prize giving session – the prizes consisting of books, T-shirts, some Visual Studio headphones and a main prize of a Surface Pro 3!

After the excellent day, I headed to the pub which was very conveniently located immediately across the road from the venue entrance.  I had a few hours to kill until the Geek Dinner, which was to be held later that evening at Pizza Express in the Leeds Corn Exchange.  I enjoyed a couple of pints of Leeds Pale Ale before heading off to the Pizza Express venue for my dinner.

The Geek Dinner was attended by approximately 40 people and a fantastic time was had by all.  I was sat close to one of the day’s earlier speakers, Andrew MacDonald, and we had a good old chinwag about past projects, work, and life as a software developer in general.

Overall, the DDD North 2014 event and the Geek Dinner afterwards was a fantastic success, and a great time was had by all.  Andy promised that there’d be another one in 2015, which will be held back up in the North-East of England due to the alternating location of DDD North, so here’s looking forward to another wonderful DDD North conference in 2015.

DDD East Anglia 2014 Review

Well, it’s that time of year again when a few DDD events come around.  This past Saturday saw the 2nd ever DDD East Anglia, bigger and better than last year’s inaugural event.

I’d set off the previous night and stayed over on the Friday night in Kettering.  I availed myself of what seemed to be Kettering town centre’s only remaining open pub, The Old Market Inn (the Cherry Tree two doors down was closed for refurbishment), and enjoyed a few pints before heading back to my B&B.  The following morning, after a hearty breakfast, I set off on the approximately one hour journey into Cambridge and to the West Road Concert Hall, the venue for this year’s DDD East Anglia.

After arriving at the venue and registering, I quickly grabbed a cup of water before heading off across the campus to the lecture rooms and the first session of the day.

The first session is David Simner’s “OWIN, Katana and ASP.NET vNext – Eliminating the pain of IIS”.  David starts by summing up the existing problems with Microsoft’s IIS server, from its cryptic error messages when simply trying to create or add a new website, through to differing versions with differing support for features on different OS versions.  E.g. only IIS 8+ supports WebSockets, and IIS 8 requires Windows 8 – it can’t be installed on lower versions of Windows.

David continues by calling out “http.sys” – the core of servicing web requests on Windows.  It’s a kernel-space driver that handles the request, looks at the host headers, URL etc., and then finds the user-space process that will service the request.  It’s also responsible for dealing with the cryptography layer for SSL packets.  Although http.sys is the “core” of IIS, Microsoft has opened it up to allow other people to use it directly without going through IIS.

David mentions how some existing technologies already support “self-hosting”, meaning they can service HTTP requests without requiring IIS.  These technologies include WebAPI, SignalR etc.; however, the problem with this self-hosting is that these technologies can’t interoperate with each other this way.  E.g. SignalR doesn’t work within WebAPI’s self-hosting.

David continues by introducing OWIN and Katana.  OWIN is the Open Web Interface for .NET, and Katana is Microsoft’s implementation of OWIN.  Since OWIN is open and anyone can write their own implementation of it, this opens up the entire “web processing” service on Windows and allows us both to remove the dependence on IIS and to have many differing technologies easily interoperate within the OWIN framework.  New versions of IIS will effectively be OWIN “hosts”, as is Katana, and many other implementations written by independent parties could potentially exist too.

David asks why we should care about all of this, and states that OWIN just “gets out of your way” – the framework doesn’t hinder you when you’re trying to do things.  He says it simply “does what you want”, and that it does this due to its rich ecosystem and community providing many custom developments for hosts, middleware, servers and adapters (middleware is the layer that provides a web development framework, i.e. ASP.NET MVC, NancyFX etc., and an adapter is something like System.Web, which serves to pass the raw data from the request coming through http.sys to the middleware layer).

The second half of David’s talk is a demo of writing a simple web application (using VS 2013) that runs on top of OWIN/Katana.  David creates a standard “Web Application” in VS 2013, but immediately pulls in the NuGet package OwinHost (this is actually Katana!).  To use Katana, we need a class with the “magic” name of “Startup”, which Katana looks for at startup and runs.  The Startup class has a single void method called Configuration that takes an IAppBuilder argument; this method runs once per application run and exists to configure the OWIN middleware layer.  This can include such calls as:

app.UseWelcomePage("/");
app.UseWebApi(new HttpConfiguration());  // configure Web API routes etc. on the HttpConfiguration
app.Use<MyCustomMiddleware>();           // any custom class that inherits from OwinMiddleware

David starts by writing a test that checks for access to a non-existent page and ensures it returns a 404 error.  In order to perform this test, we can use the WebApp.Start method (which is part of Microsoft.Owin.Hosting – the Katana implementation of an OWIN host), which allows the test method to effectively start the web processing “process” in code.  The test can then perform things like:

var httpClient = new HttpClient();
var result = await httpClient.GetAsync("http://localhost:5555");
Assert.Equal(HttpStatusCode.NotFound, result.StatusCode);

Using OWIN in this way, though, can lead to flaky tests due to how TCP ports work within Windows: even when the code has finished executing, it can be a while before Windows will “tear down” the TCP port, allowing other code to re-use it.  To get around this, we can use another NuGet package, Microsoft.Owin.Testing, which allows us to bypass sending the HTTP request to an actual TCP port and process it directly in memory.  This means our tests don’t even need to use an actual URL!
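A test using the in-memory host looks something like this sketch (assuming the Startup class described above and an xUnit test project):

using System.Net;
using System.Threading.Tasks;
using Microsoft.Owin.Testing;
using Xunit;

public class NotFoundTests
{
    [Fact]
    public async Task Requesting_a_missing_page_returns_404()
    {
        // TestServer hosts the OWIN pipeline entirely in memory:
        // no TCP port, no URL reservation, no flaky port teardown.
        using (var server = TestServer.Create<Startup>())
        {
            var response = await server.HttpClient.GetAsync("/no-such-page");

            Assert.Equal(HttpStatusCode.NotFound, response.StatusCode);
        }
    }
}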

David shows how easy it is to write your own middleware layer.  This consists of a custom class (inheriting from OwinMiddleware) containing a single method that invokes the next “task” in the middleware processing chain, but then returns to the same method to check that we didn’t take too long to process that next step.  This is easily done, as each piece of middleware processing is an async Task, allowing us to do things like:

Next.Invoke(context).ContinueWith(_ => LogIfWeTookTooLong(context));
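Fleshed out, such a middleware class might look like the sketch below (the 500ms threshold and console logging are my own illustration of the idea, not David’s actual code):

using System;
using System.Diagnostics;
using System.Threading.Tasks;
using Microsoft.Owin;

public class TimingMiddleware : OwinMiddleware
{
    public TimingMiddleware(OwinMiddleware next) : base(next) { }

    public override async Task Invoke(IOwinContext context)
    {
        var stopwatch = Stopwatch.StartNew();

        // Hand off to the rest of the pipeline...
        await Next.Invoke(context);

        // ...then check how long the downstream processing took.
        if (stopwatch.ElapsedMilliseconds > 500)
            Console.WriteLine("Slow request: {0} took {1}ms",
                context.Request.Path, stopwatch.ElapsedMilliseconds);
    }
}

// Registered in the Startup class with: app.Use<TimingMiddleware>();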

Ultimately, the aim with OWIN and Katana is to make EVERYTHING xcopy-able.  Literally no more installing or separately configuring things like IIS.  It can all be done within code to configure your application, which can then simply be xcopy’d from one place to another.

 

The next session up is Pete Smith’s “Beyond Responsive Design – UI for the Modern Web Application”.

Pete starts by reminding us how we first built web applications for the desktop, then the mobile phone market exploded and we had to make our web apps work well on mobile phones, each with their own screen sizes and resolutions.  Pete talks about how web apps designed for the desktop don’t really look good on constrained mobile phone screens.  We first tried to solve this with responsive design, but that often leads to having to support multiple code bases, one for desktop and one for mobile.  Pete says that there are many problems with web apps.  What do we do with all the screen space on a big desktop screen?  There are no real design guidelines or principles.

Pete starts to look at design paradigms in mobile apps and shows how menus work on Android using the hamburger button that allows a menu to slide out from the side of the screen.  This is doable due to Android devices often having fairly large screens for a mobile device.  However, menus on iPhones (for example), where the screen is much narrower, don’t slide out from the side of the screen but rather slide up from the bottom.  Pete continues through other UI design patterns like dialogs, header bars and property sheets, and how they exist for the same reasons but are implemented entirely differently on desktops and on each different mobile device.  Pete states that some of these design patterns work well, such as hamburger menus and flyout property sheets (notifications); however, some don’t work so well, such as dialogs that purposely don’t fill the entire mobile device screen but keep a small border around the dialog.  Pete says that screen real estate is at a premium on a mobile device, so why intentionally reserve a section of the screen that’s not used?

The homogeneous approach to modern web app development is to use design patterns that work well on both desktop and mobile devices.  Pete uses the new Azure portal as an example, with its concept of “blades” of information that fly out and stack horizontally but scroll vertically, independently of each other.  This is a design paradigm that works well on the desktop and also translates well to mobile device “pages” (think of how Android “pages” have header bars with back and forward buttons).

Pete then shows us a demo of a fairly simple mock-up of the DDD East Anglia website and shows how the exact same design patterns of a hamburger menu (that flies in from the left) and “property sheets” that fly in from the right (used for speaker bios etc.) work exactly the same (with responsive design for the widths etc.) on both a desktop web app and on mobile devices such as an iPad.

Pete shows us the code for his sample application, including some LESS stylesheets, which he says are invaluable for laying out an application like this, as the page layout is best achieved by absolutely positioning many of the page elements (the hamburger menu, the header bar, the left-hand menu etc.) using LESS mixins.  The main page uses HTML5 semantic markup and simply includes the header bar and the menu icons on it, the left-hand menu (which by default is visible on devices with an appropriate width) and an empty <main> section that will contain the individual pages loaded dynamically with JavaScript.

Pete finishes by showing a “full-blown” application that he’s currently writing for his client company, to show that this set of design paradigms does indeed scale to a complete, large application!  Pete is so passionate about bringing a comprehensive set of working design guidelines and paradigms to the wider masses that he’s started his own open working group to do this, called OWAG – The Open Web Apps Group.  They can be found at:  http://www.github.com/owag

 

The next session is Matt Warren’s “Performance is a feature!”, which tells us that the performance of our applications is a first-class feature which should be treated the same as usability and all other basic functionality.  Performance can be applied at every layer of our application, from the UI right down to the database, or even the “raw metal” of our servers; however, Matt’s talk focuses on extracting the best performance from the .NET CLR (Common Language Runtime).  Matt does briefly touch upon the raw metal, which he calls the “mechanical sympathy” layer, and mentions the Disruptor pattern, which allows certain systems (for example, high-frequency trading applications) to scale to processing many millions of messages per second!

Matt uses Stack Overflow as a good example of a company taking performance very seriously, and cites Jeff Atwood’s blog post, “Performance is a feature”, as well as some humorous quotations (see images) as something that can provide inspiration for improvement.

Matt starts by asking Why does performance matter?, What do we need to know? and When do we need to optimize performance?

The Why starts by stating that it can save us money.  If we’re hosting in the cloud where we pay per hour, we can save money by extracting more performance from fewer resources.  Matt continues to say that we can also save power by increasing performance (and money too as a result) and furthermore, bad performance can lead to broken applications or lost customers if our applications are slow.

Matt does suggest that we need to be careful and land somewhere in the middle of the spectrum between “optimizing everything all the time” (which can back us into a corner) and “don’t optimize anything” (the extreme end of the “premature optimization is the root of all evil” approach).  Matt mentions various quotes by famous software architects, such as Rico Mariani from Microsoft, who states “Never give up your performance accidentally”.

Matt continues with the What.  He starts by saying that “averages are bad” (such as “average response time”); we need to look at the edge cases and the outlier values.  We also need useful and meaningful metrics and numbers around how we measure our performance.  For web site response times, we can say that most users should see pages load in 0.5 to 1.5 seconds, and that almost no-one should wait longer than 3 seconds; however, how do we define “almost no-one”?  We need absolute numbers to ensure we can accurately measure and profile our performance.  Matt also states that it’s a known fact that if only 1% of pages take (for example) more than 3 seconds to load, far more than 1% of users will be affected by this!

Matt continues with the When?  He says that we absolutely need to measure our performance within our production environment.  This is totally necessary to ensure that we’re measuring based upon “real-world” usage of our applications and everything that entails. 

Matt talks about the How of performance.  It’s all about measuring.  Measure, measure, measure!  Matt mentions the Stack Overflow-developed “MiniProfiler” for measuring where the time is spent when rendering a complete webpage, as well as Opserver, which will profile and measure the actual servers that serve up and process our application.  Matt talks about micro-benchmarking, which is profiling small individual parts of our code, often just a single method.  He warns us to be careful of the GC (garbage collector), as this can and will interfere with our measurements, and shows some code that forces a GC.Collect() before timing the code (usually using a Stopwatch instance), which can help.  He states that allocating memory is cheap, but cleaning up after that memory is released isn’t.  Another tool that can help here is Microsoft’s “PerfView”, which can be run on the server and will show (amongst lots of other useful information) how and where the garbage collector is being called to clean up after you.
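A bare-bones version of that measuring approach looks something like this (the workload being timed is just a placeholder of mine):

using System;
using System.Diagnostics;

class MicroBenchmarkSketch
{
    static void Measure(string name, Action action, int iterations)
    {
        // Give the garbage collector a clean slate so a collection triggered
        // by earlier allocations doesn't pollute this particular timing.
        GC.Collect();
        GC.WaitForPendingFinalizers();
        GC.Collect();

        int gen0Before = GC.CollectionCount(0);
        var stopwatch = Stopwatch.StartNew();

        for (int i = 0; i < iterations; i++)
            action();

        stopwatch.Stop();
        Console.WriteLine("{0}: {1}ms over {2:N0} iterations ({3} gen0 collections)",
            name, stopwatch.ElapsedMilliseconds, iterations,
            GC.CollectionCount(0) - gen0Before);
    }

    static void Main()
    {
        Measure("string concatenation", () => { var s = "value: " + Guid.NewGuid(); }, 100000);
    }
}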

Matt finishes up by saying that static classes, although often frowned upon for other reasons, can really help with performance improvements.  He says not to be afraid to write your own tools, citing Stack Overflow’s “Dapper” and “Jil”, written to perform their own database access and JSON processing, which have been, performance-wise, far better for them than other similar tools that are available.  The main thing, though, is to “know your platform”.  For us .NET developers, this is the CLR, and understanding its internals on a fundamental and deep level is essential for really maximizing the performance of our own code that runs on top of it.  Matt talks, finally, about how the team at Microsoft learned a lot of performance lessons when building the Roslyn compiler, and how some seemingly unnecessary code can greatly help performance.  One example was a method writing to a log file: adding .ToString() to int values before passing them to the logger can prevent boxing of the values, which has a beneficial knock-on effect on the garbage collector.

 

After Matt’s talk it was time for lunch.  As is the custom at these events, lunch was the usual brown-bag affair with a sandwich, a packet of crisps, some fruit and a bottle of water.  There were some grok talks happening over lunch in the main concert hall, and I managed to catch one given by Iris Classon on Windows Universal application development, which is developing XAML-based applications for both Windows desktop and Windows Phone.

 

 

After lunch is Mark Rendle’s “The vNext Big Thing – ASP.NET shrinks down and grows up”.  Mark’s talk is all about the next version of ASP.NET, currently in development at Microsoft.  The entire redevelopment is based around slimming down ASP.NET and making the entire framework as modular and composable as possible.  This is largely a response to other web frameworks that already offer this kind of platform, such as NodeJS.  Mark even calls it NodeCS!

Mark states that they’re making a minimalist framework and runtime, and that it’s all being developed as fully open source.  It’s built so that everything is shippable as a NuGet package, and it’s all being written to use runtime compilation using the new Roslyn compiler.  One of the many benefits that this will bring is the ability to “hot-swap” components and assemblies that make up a web application without ever having to stop and re-start the application!  Mark answers “Why are Microsoft doing this?” by stating that it’s all about helping the versioning of .NET frameworks, making the ASP.NET framework modular, so you only need to install the bits you need, and improving the overall performance of the framework.

The redevelopment of ASP.NET starts with a new CLR.  This is the “CoreCLR”.  This is a cut-down version of the existing .NET CLR and strips out everything that isn’t entirely necessary for the most “core” functions.  There’s no “System.Web” in the ASP.NET vNext version.  This means that there’s no longer any integrated pipeline and it also means that there’s no longer any ASP.NET WebForms!

As part of this complete re-development effort, we’ll get a brand new version of ASP.NET MVC.  This will be ASP.NET MVC 6.  The major new element to MVC 6 will be the “merging” of MVC and WebAPI.  They’ll now be both one and the same thing.  They’ll also be built to be very modular and MVC will finally become fully asynchronous just as WebAPI has been for some time already.  Due to this, one interesting thing to note is that the ubiquitous “Controller” base class that all of our MVC controllers have always inherited from is now entirely optional!

Mark continues by taking a look at another part of the complete ASP.NET re-boot.  Along with new versions of MVC and WebAPI, we’ll also get a brand new version of the Entity Framework ORM.  This is Entity Framework 7, and most notable is that the entire notion of database-first (or designer-driven) database mapping is going away!  It’s code-first only!  There’ll also be no ADO.NET, and Entity Framework will now finally feature first-class support for non-SQL databases (e.g. NoSQL/document databases, Azure Tables).

The new version of ASP.NET will bring with it lots of command line tooling, and there’s also going to be first-class support for both Mac and Linux.  The goal, à la NodeJS, is to be able to write your entire application in something as simple as a text editor, with all of the application and configuration code in simple text-based code files.  Of course, the next version of Visual Studio (codenamed Visual Studio “14”) will have full support for the new ASP.NET platform.  Mark also details how the configuration of applications developed with ASP.NET vNext will no longer use XML (or even a web.config).  They’ll use the currently popular JSON format instead, inside a new “config.json” file.

Mark proceeds by showing us a quick demo of the various new command line tools which are all named starting with the letter K.  There’s KVM, which is the K Version Manager and is used for managing different versions of the .NET runtime and framework.  Then there is KPM which is the K Package Manager, and operates similar to many other package managers, such as NodeJS’s “npm”, and allows you to install packages and individual components of the ASP.NET stack.  The final command line tool is K itself.  This is the K Runtime, and its command line executable is simply called “K”.  It is a small, lightweight process that is the runtime core of ASP.NET vNext itself. 

Mark then shows us a very quick sample website that consists of nothing more than 2-3 lines of JSON configuration, only one line of real actual code (a call to app.UseStaticFiles() within the Startup class’s “Configure” method) and a single file of static HTML, and the thing is up and running, writing the word “Hurrah” to the page.  The Startup.cs class is effectively a single-class replacement for the entire web.config and the entire contents of the App_Start folder!  The Configure method of the Startup class is effectively a series of calls to various .UseXXX methods on the app object:

app.UseStaticFiles(); 
app.UseEntityFramework().AddSqlServer(); 
app.UseBrowserLink(); 
etc.

Mark shows us where all the source code is: it’s all right there in public GitHub repositories, and the currently compiled binaries and packages can be found on myget.org.  Mark closes the talk by showing the same simple web app from before, but now demonstrating that this web app, written using the “alpha” bits of ASP.NET vNext, can be run on an Azure website instance quite easily.  He commits his sample code to a GitHub repository that is linked to auto-deploy to a newly created Azure website and lets us watch as Azure pulls down all the required NuGet packages, compiles his simple web application in real time and spins up the website in his browser!

 

The final talk of the day is Barbara Fusinska’s “Architecture – Why so serious?”.  This talk is about Barbara’s belief that all software developers should be architects too.  She starts by asking “What is architecture?”.  There are a number of answers to this question, depending upon who you ask.  Network distribution, software components, services, APIs, infrastructure, domain design – all of these and more can be a part of architecture.

Barbara says her talk will be given by showing a simple demo application called “Let’s go out”, which is a simple scheduler application.  She will show how architecture has permeated all the different parts of the application.  Barbara starts with the “basics”.  She broaches the subject of application configuration and says how it’s best to start as you mean to go on, by using an IoC container to manage the relationships and dependencies between objects within the application.

She continues by saying that one of the biggest and most fundamental problems of virtually all applications is how to pass data between the code of our application and the database, and vice-versa.  She mentions ORMs and suggests that the traditional large ORMs are often far too complicated and can frequently bog us down with complexity.  She suggests that the more modern micro-ORMs (of which Dapper, PetaPoco & Massive are examples) offer a better approach and are a much more lightweight layer between the code and the data.  Micro-ORMs “bring SQL to the front”, which is, after all, what we use to talk to our database.  Barbara suggests that it’s often better not to attempt to entirely abstract the SQL away or to hide it too much, as can often happen with a larger, more fully-featured ORM tool.  On the flip-side, Barbara says that full-blown ORMs will provide us with an implicit unit of work pattern implementation and are better suited to domain driven design within the database layer.  For Barbara’s demo application, she uses Mark Rendle’s Simple.Data micro-ORM.
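Barbara’s demo uses Simple.Data, but the flavour of any micro-ORM is much the same; here’s a rough sketch using Dapper (one of the micro-ORMs she mentions), with made-up table and column names rather than anything from her demo:

using System;
using System.Collections.Generic;
using System.Data.SqlClient;
using System.Linq;
using Dapper;

public class ScheduleEntry
{
    public int Id { get; set; }
    public string Title { get; set; }
    public DateTime Start { get; set; }
}

public class ScheduleReader
{
    private readonly string _connectionString;

    public ScheduleReader(string connectionString)
    {
        _connectionString = connectionString;
    }

    public IList<ScheduleEntry> EntriesAfter(DateTime fromDate)
    {
        // The SQL is right there in front of you: no mapping files and no
        // change tracking, just a parameterised query mapped onto POCOs.
        using (var connection = new SqlConnection(_connectionString))
        {
            connection.Open();
            return connection.Query<ScheduleEntry>(
                "SELECT Id, Title, Start FROM ScheduleEntries WHERE Start > @fromDate",
                new { fromDate }).ToList();
        }
    }
}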

Barbara says that the Repository pattern is really an anti-pattern and that it doesn’t really do much for your application.  She talks about how repositories often will end up with many, many methods that are effectively doing very similar things, and are used in only one place within our application.  For example, we often end up with “FindCustomersByID”, “FindCustomersByName”, “FindCustomerByCategory” etc. that all effectively select data from the customers database table and only differ by how we filter the customers.

Barbara shows how her own “read model” is a single class that deals only with reading data from the database and actually lives very close to the code that will use it, often an MVC controller action.  This is similar to a CQRS pattern, and the read model is very separate and distinct from the domain model.  Barbara shows how she uses a “command pattern” to provide the unit of work and the identity pattern for the ORM.  Barbara talks about the services within her application and how these are very much all based upon the domain model.  She talks about only exposing a method to perform some functionality, rather than exposing properties, for example.  This applies not just to the user, but to other programmers who might have access to our classes.  She makes the property accessors private to the class and only allows access to them via a public method.  She shows how her application allows moving a schedule entry, but the business rules should only allow it to be moved forward in time.  Exposing DateTime properties would allow setting any dates and times, including those in the past, thus violating the domain rules.  By only allowing these properties to be set via a public method, which performs this domain validation, the setting of the dates and times can be better controlled.

Barbara says that the Command pattern is actually a better approach than using Services as they can greatly reduce dependencies within things like MVC Controllers.  Rather than having dependencies on multiple services like this:

public MyCustomerOrderController(ICustomerService customerService, IOrderService orderService, IActivityService activityService)
{
   ...
}

This controller’s purpose is to provide a mechanism to work with customers, the orders placed by those customers, and the activity on those orders.  We can, instead, “wrap” these services up into commands.  These commands will, internally, use multiple services to implement a single domain “command”, like so:

public MyCustomerOrderController(IAddActivityToCustomerOrderCommand addActivityCommand)
{
   ...
}

This provides a single domain command to perform the specific domain action, and means that the MVC controller used for the UI that allows activities to be added to customer orders has only one dependency: the command class itself.

 

With the final session over, it was time to head back to the main concert hall to wrap up the day’s proceedings, thank all those who were involved in the event and distribute the prizes, generously donated by the various event sponsors.  No prizes for me this time around, although some very lucky attendees won quite a few prizes each!

After the wrap up there was a drinks reception in the same concert hall building, however, I wasn’t able to attend this as I had to set off on the long journey back home.  It was another very successful DDD event, and I can’t wait until they do it all over again next year!

I’m now an MCSD in Application Lifecycle Management!

Well, after previously saying that I’d give the pursuit of further certifications a bit of a rest, I’ve gone and acquired yet another Microsoft certification.  This one is Microsoft Certified Solutions Developer – Application Lifecycle Management.

It all started around the beginning of January this year, when Microsoft sent out an email with a very special offer.  Register via Microsoft’s Virtual Academy and you would be sent a 3-for-1 voucher for selected Microsoft exams.  Since the three exams required to achieve the Microsoft Certified Solutions Developer – Application Lifecycle Management certification were included within this offer, I decided to go for it.  I’d pay for only the first exam and get the other two for free!

So, having acquired my voucher code, I proceeded to book myself in for the first of the three exams.  “Administering Visual Studio Team Foundation Server 2012” was the first exam, which I’d scheduled for the beginning of February.  Although I’d had some previous experience of setting up, configuring and administering Team Foundation Server, that was with the 2010 version of the product, so I realised I needed to both refresh and update my skills.  Working on a local copy of TFS 2012 and following along with the “Applying ALM with Visual Studio 2012 Jumpstart” course on Microsoft’s Virtual Academy site, as well as studying with the excellent book “Professional Scrum Development with Microsoft Visual Studio 2012” that is recommended as a companion/study guide for the MCSD ALM exams, I quickly got to work.

I sat and passed the first exam in early February this year.  Feeling energised by this, I quickly returned to the Prometric website to book the second of the three exams, “Software Testing with Visual Studio 2012”, which was scheduled for March of this year.  I’d mistakenly thought this was all about unit testing within Visual Studio, and whilst some of that was included, the course is really all about Visual Studio’s “Test Manager” product.  The aforementioned Virtual Academy course and the book covered all of this course’s content, however, so continued study with those resources, along with my own personal tinkering, helped me tremendously.  When the time came I sat the exam and, amazingly, passed with full marks!

So, with two exams down and only one to go, I decided to plough on and scheduled my third and final exam for late April.  This final exam was “Delivering Continuous Value with Visual Studio 2012 Application Lifecycle Management” and was perhaps the most abstract of all of the exams, focusing on agility, project management and best practices around the “softer” side of software development.  Continued study with the aforementioned resources was still helpful; however, when the time came to sit the exam, I admit that I felt somewhat underprepared for this one.  But sit the exam I did, and whilst I ended up with my lowest score of all three exams, I still managed to score enough to pass quite comfortably.

So, with all three exams sat and passed, I was awarded the “Microsoft Certified Solutions Developer – Application Lifecycle Management” certification.  I’ll definitely slow down with my quest for further certifications now… well, unless Microsoft send me another tempting email with a very “special” offer included!