DDD East Anglia Conference 2013 Write-Up


Sunday 30 Jun 2013 at 21:00
Conferences  |  conferences development dotnet ddd

DDDEA Logo

This past Saturday, 29th June 2013, saw the inaugural DDD East Anglia conference.  This is the latest addition to the DeveloperDeveloperDeveloper events that take place all over the UK and sometimes around the world!  I was there, this being only the second DDD event I’d ever attended.

DDD East Anglia was held on the extensive grounds of Cambridge University, in a building called “The Hauser Forum”.  For attendees like myself who were unfamiliar with the area or the university campus, it was a little tricky to find exactly where to go.  The DDDEA team did make a map available on the website, but the designated car park that we were to use was cordoned off when I arrived!  After some driving around the campus, and with the help of a fellow attendee who was in the same situation as myself, I managed to find another car park that could be used.  I must admit that better signposting for both the car parking and the Hauser Forum building itself would have helped tremendously here.  As you can imagine, Cambridge University is not a small place and it’s fairly easy to get lost amongst the myriad of buildings on campus.

The event itself had 3 parallel tracks of sessions, with 5 sessions throughout the day in each track.  As is often the case with most DDD events, once the agenda was announced, I found myself in the difficult position of having to choose one particular session over another, as there were a number of timeslots where multiple sessions that I’d really like to attend were running concurrently.  As annoying as this can sometimes be, it’s testament to the quality and diversity of the sessions available at the various DDD events.  DDD East Anglia was no different.

As I’d arrived slightly late (due to the car parking shenanigans) I quickly signed in and went along to Seminar Room 1, where the introduction was taking place.  After a brief round of introductions, we were off and running with the first session.  There had been some last-minute changes to the published agenda, so, not quite knowing where I wanted to be, I stayed put in Seminar Room 1 and found myself in Dave Sussman’s session entitled, “SignalR: Ready For Real-Time”.

Dave started out by talking about how SignalR is built upon the notions of persistent connections and hubs on the server-side, with hubs simply being a higher level of abstraction built on top of persistent connections.  SignalR, at its most basic, is a server-side hub (or persistent connection) through which all messages and data flow, and this hub then broadcasts that data back to each connected client.  On the client-side of the equation, Dave tells us that SignalR is effectively one big jQuery module!
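
As a rough illustration (the class and method names here are my own, not taken from Dave’s demo), a server-side hub in C# can be as small as this:

using Microsoft.AspNet.SignalR;

// A minimal hub: clients call Send, and the hub broadcasts the message
// back to every connected client via a dynamically-named client callback.
public class ChatHub : Hub
{
    public void Send(string user, string message)
    {
        // "broadcastMessage" is simply whatever client-side function name we choose.
        Clients.All.broadcastMessage(user, message);
    }
}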

One of the complexities that SignalR wraps up and abstracts away from the developer is the requirement to determine the best communication protocol to use in a given situation.  SignalR uses WebSockets as the default method of communication, if the client supports the protocol.  WebSockets are a relatively new protocol that provides true duplex communication between client and server.  This facilitates cool functionality such as server-side push to the client; however, if WebSockets are not available, SignalR will seamlessly “downgrade” the connection to HTTP long-polling – which uses a standard HTTP connection that is kept alive for a long time in order to receive the response from the server.

Dave starts to show us a demo, which fails to work the first time.  Dave had planned this, however, and proceeded to tell us about one of the most common problems in getting a simple SignalR demo application up and running: adding the call to .MapHubs (a requirement to register the routes of the Hubs that have been defined on the server-side) after all of the other route registration has been done.  This causes SignalR to fail to generate some dynamic JavaScript code that is required by the client.  The resolution is simply to place the call to .MapHubs before any calls to the other MVC route registrations.
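
In Global.asax, that fix looks roughly like this (assuming the SignalR 1.x API that was current at the time):

protected void Application_Start()
{
    // Register the SignalR hub routes *before* the MVC routes, otherwise the
    // dynamically generated /signalr/hubs JavaScript proxy won't be served.
    RouteTable.Routes.MapHubs();

    AreaRegistration.RegisterAllAreas();
    RouteConfig.RegisterRoutes(RouteTable.Routes);
}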

Dave tells us that SignalR doesn’t have to be in the browser.  We can create many other types of application (console apps, WinForms apps etc.) that use the HubConnection class to connect to the server over HTTP, and then create a proxy in the client application that can send and receive the required messages to and from the SignalR hub on the server!  Also, although it’s seen as an ASP.NET addition, SignalR isn’t dependent upon the ASP.NET runtime.  You can use it, via JavaScript, in a simple standalone HTML page with just the inclusion of a few JavaScript files and a bit of your own JavaScript to create and interact with a jQuery $.hubConnection(), which allows sending and receiving messages and data to the SignalR server-side Hub.
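
A minimal console client (hub and method names assumed to match the sketch above) might look something like this, using the SignalR .NET client library:

using System;
using Microsoft.AspNet.SignalR.Client;

class Program
{
    static void Main()
    {
        // Connect to the SignalR server and create a proxy for the hub.
        var connection = new HubConnection("http://localhost:8080/");
        var chat = connection.CreateHubProxy("ChatHub");

        // Subscribe to messages pushed from the server.
        chat.On<string, string>("broadcastMessage",
            (user, message) => Console.WriteLine("{0}: {1}", user, message));

        connection.Start().Wait();

        // Invoke a method on the server-side hub.
        chat.Invoke("Send", "console-client", "Hello from a console app!").Wait();
        Console.ReadLine();
    }
}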

Although SignalR’s most used and default function on the Hub is the ability to broadcast a received message back to all connected clients (as in the ubiquitous “chat” sample application), SignalR has the ability to have the server send messages to only one or more specific clients.  This is done by sending a given message to a specific client based upon that client’s unique connection ID, which the Hub keeps in an internal static list of connected clients.  Clients can also be “grouped” if needed, so that multiple clients (but not all connected clients) can receive a certain message.  There’s no inherent state in the server-side Hub – the Hub class itself is re-instantiated for each message that needs processing – so things like a server-side collection of connected clients are declared as static to ensure state is maintained.  Methods of the Hub class can return Tasks, so clients can send a message to the server hub and be notified some time later when the task is completed (for example, if performing some long-running operation such as a database backup).
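
For illustration, targeting a single client and grouping clients looks roughly like this (the method and group names here are invented):

using System.Threading.Tasks;
using Microsoft.AspNet.SignalR;

public class NotificationHub : Hub
{
    public void NotifyOne(string connectionId, string message)
    {
        // Send to a single client, identified by its unique connection ID.
        Clients.Client(connectionId).notify(message);
    }

    public Task JoinRoom(string room)
    {
        // Add the calling client to a named group...
        return Groups.Add(Context.ConnectionId, room);
    }

    public void NotifyRoom(string room, string message)
    {
        // ...and broadcast to the members of that group only.
        Clients.Group(room).notify(message);
    }
}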

We’re told that there’s no built-in persistence with SignalR, and we should be aware that messages can (and sometimes do!) get lost in transit.  SignalR can, however, be configured to run over a message bus (for example, the Windows Azure Service Bus) and this can provide the persistence and improved guarantee of delivery of messages.

Finally, although the “classic” demo application for SignalR is that of a simple “chat” application with simple text being passed back and forth between client and server, SignalR is not restricted to sending text.  You can send any object!  The only requirement here is that the object can be serialized for communication over the wire.  Objects passed across the wire are serialized, by default, as JSON (internally using Newtonsoft’s Json.NET library).
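
So, sticking with a hypothetical to-do list example, a hub method can happily accept and broadcast a plain C# object:

using Microsoft.AspNet.SignalR;

public class TodoItem
{
    public int Id { get; set; }
    public string Title { get; set; }
    public bool Done { get; set; }
}

public class TodoHub : Hub
{
    public void Add(TodoItem item)
    {
        // The TodoItem is serialized to and from JSON on the wire automatically.
        Clients.All.todoAdded(item);
    }
}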

After a quick break, during which there was tea, coffee and some lovely Danish pastries available, I was back in the same Seminar Room 1 to attend Mark Rendle’s session entitled, “The densest, fastest-moving talk ever”. Mark’s session wasn’t really about any single topic, but consisted of Mark writing and developing a simple browser-based TODO List application that utilised quite a number of interesting technologies.  These were Mark’s own Simple.Web and Simple.Data, along with some TypeScript, AngularJS and Bootstrap!

As a result of the very nature of this session, which contained no slides, demos or monologue regarding a specific technology, it was incredibly difficult to take notes on this particular talk.  It was simply Mark, doing his developer thing on a big screen.  All code.  Of course, throughout the session, Mark would talk about what he was doing and give some background as to the what and the why.  As I’m personally unfamiliar with TypeScript and AngularJS, it was at times difficult to follow along with why Mark was making the choices he did when utilising one or more of these technologies.  Mark’s usage of his own Simple.Web and Simple.Data frameworks was easier to understand, and although I’ve not used either of these frameworks before, they both looked incredibly useful and lightweight for getting basic database reading and writing up and running quite quickly within a simple web-based application.

After 30 minutes of intense coding, including what appeared to be an immense amount of set-up and configuration of AngularJS routing, Mark is able to show us his application displaying his TODO items (from his previously prepared SQL Server database) in a lovely Bootstrap-styled webpage.  We’re only reading data at this point, with no persistence back to the DB, but Mark spends the next 30 minutes plugging that in and putting it all together (with even more insane AngularJS configuration!).  By the end of the session, we do indeed have a rudimentary TODO List application!

I must admit that I feel I would have got a lot more from this session if I already knew more about the frameworks that Mark was using, specifically AngularJS which appears to be a rather extensive framework that can do everything you’d want to do in client-side JavaScript/HTML when building a web application.  Nonetheless, it was fun and enjoyable to watch Mark pounding out code.  Also, Mark’s inimitable and very humorous style of delivery made this session a whirlwind of information but really fun to attend.

Another break followed Mark’s session, with more tea, coffee and a smorgasbord of chocolate-based snacks positioned conveniently on tables just outside each seminar room (more on the food later!).  Once the break was over, it was time for the final session of the morning before the lunch break.  This one was Rob Ashton’s “Outside-In Testing of MVC”.

Rob’s opening gambit in this session is to tell us that his talk isn’t really about testing MVC, but is about testing web applications in general, with some MVC included within it.  A slight bait and switch, and clearly Rob’s style of humour.  He’s mostly a Ruby developer these days, so it’s little wonder there’s only a small amount of MVC within the session!  That said, the general tone of the talk is to explore ways of testing web applications from the outermost layer – the user interface – and how to achieve that in a way that’s fast, scalable and non-brittle.  To that end, it doesn’t really matter what language the web application under test is written in!

Rob talks about TDD and how people trying to get started with TDD often get it wrong.  This is very similar to what Ian Cooper talked about in his “TDD, where did it all go wrong?” talk, which he’s given recently in a number of places; I attended Ian’s talk at a recent local user group.  Rob says he doesn’t focus so much on “traditional” TDD, and that having complete tests that start at the UI layer and test a discrete piece of functionality in an end-to-end way is very often the best of all testing worlds.  Of course, the real key to being able to do this is to keep those tests fast.

Rob says he’s specifically avoiding definitions in his talk.  He just wants to talk about what he does and how it really helps him when he does these very things.  To demonstrate, he starts with the example of starting a brand new project.  He tells us that if we’re working in brownfield application development, we may as well give up all hope!  :(

Rob says that we start with a test.  This is a BDD style test, and follows the standard “Given, When, Then” format.  Rob uses CoffeeScript to write his tests as its strict handling of white-space forces him to keep his tests short, to the point, and easily readable, but we can use any language we like for the tests, even C#.

Rob says he’s fairly dismissive of the current set of tools often used for running BDD tests, such as Cucumber.  He says it can add a lot of noise to the test script that’s really unnecessary and often causes the test script wording to become more detached and abstracted from what it is the test should actually be doing in relation to the application itself.  So we are asked the question, “What do we need to run our tests?” – merely a web browser and a web server!

In order to keep the tests fast, we must use a “headless” browser.  These are implementations of browser functionality but without the actual UI and chrome of a real web browser.  One such headless browser is PhantomJS.  Using such a tool allows us to run a test that hits our webpage, performs some behaviour – adds text to a textbox, clicks a button etc. – and verifies the result of those actions, all from the command line.  Rob is quick to suggest that we shouldn’t use PhantomJS directly, as our tests will then be tightly coupled to the framework we’re running them within.  Rob suggests using WebDriver (part of the Selenium suite of web browser automation tools) in conjunction with PhantomJS, as that provides a level of abstraction and thereby avoids coupling the tests tightly to the browser (or headless browser) being used.  This level of abstraction is what allows the actual test scripts themselves to be written in any language of our choosing.  It just needs to be a language that can communicate with the WebDriver API.
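
Although Rob writes his tests in CoffeeScript, the same idea in C# might look something like this (the URL, element names and NUnit assertion are mine, purely for illustration):

using NUnit.Framework;
using OpenQA.Selenium;
using OpenQA.Selenium.PhantomJS;

[TestFixture]
public class AddTodoTests
{
    [Test]
    public void Adding_a_todo_shows_it_in_the_list()
    {
        // PhantomJSDriver speaks the WebDriver protocol, so swapping in a
        // real browser later means changing only this one line.
        using (IWebDriver driver = new PhantomJSDriver())
        {
            driver.Navigate().GoToUrl("http://localhost:5000/todos");

            driver.FindElement(By.Name("title")).SendKeys("Buy milk");
            driver.FindElement(By.CssSelector("button[type='submit']")).Click();

            StringAssert.Contains("Buy milk", driver.PageSource);
        }
    }
}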

Rob then proceeds to show us a demo of running multiple UI tests in a console window.  These tests are loading a real webpage, interacting with that page – often involving submission of data to the server to be persisted in some way – and asserting that some action has happened as a result of that interaction.  They’re testing the complete end-to-end process.  The first thing to note is that these tests are fast, very fast!  Rob is spitting out some simple diagnostic timings with each test, and each test is completing in approximately 500ms!

Rob goes on to suggest ways of ensuring that, when we write our tests, they’re not brittle and too closely tied to the specific IDs or layout of the elements within the page that we’re testing.  He mentions one of the best tools to come from the Ruby world, Capybara.  Rob says that there’s a .NET version of Capybara called Coypu, although it’s not quite as feature-complete as Capybara.  Both of these tools aim to allow intelligent automation of the browser testing process and help make tests readable, robust, fast to write, with less duplication and less tightly coupled to the UI.  They help to prevent brittle tests that are heavily bound to UI elements.  For example, when instructed to fill in a “username” textbox, the tools try multiple ways to find it: first looking for the specific ID, then intelligently looking for a <label for="username"> if the ID is not found and using the textbox associated with that label.  If that’s not found, the tool will then intelligently try to find a textbox that happens to be “near” to where some static text saying “Username” may be on the page.
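
A Coypu-based test reads roughly like this (I’ve invented the page URL, field names and expected text purely for illustration):

using Coypu;
using NUnit.Framework;

[TestFixture]
public class RegistrationTests
{
    [Test]
    public void Registering_shows_a_welcome_message()
    {
        using (var browser = new BrowserSession())
        {
            browser.Visit("http://localhost:5000/register");

            // Coypu finds the field by id, name, label text or placeholder,
            // so the test isn't bound to one specific selector.
            browser.FillIn("Username").With("alice");
            browser.ClickButton("Sign up");

            Assert.IsTrue(browser.HasContent("Welcome, alice"));
        }
    }
}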

Rob suggests not bothering with “fast” unit tests.  He says to make your UI tests faster!  You’ll run them more frequently, and a fast UI test means a fast UI.  If we achieve this, we’re not just testing the functionality: by ensuring we have a suite of UI tests that run very fast, we will, by virtue of that, have an actual application and UI that runs very fast.  This is a win-win situation!

Rob proceeds to build up a demo application that we can run some tests against.  He does this to show us that he’s not going to concern himself with databases or persistence at this point – he’s only storing things in an in-memory collection.  Decisions about persistence and storage should come at the very end of development, as by then we’ll have a lot more information about what that persistence layer needs to be (e.g. a document database, or SQL Server for more complex queries).  This also helps to keep the UI tests fast!

Rob then proceeds to give us a few tips on MVC-specific development, and also on how to compose our unit tests when we have to step down to that level.  He says that our controllers should be very lightweight and that we shouldn’t bother testing them – you’ve got UI tests that cover that anyway.  He states that, “If your controller has more than one IF statement, then it shouldn’t!”.  Controllers should be performing the minimal amount of work.  Rob says that if a certain part of the UI or page (say a register/signup form) has complex validation logic, we should test that validation in isolation in its own test(s).  Rob says that ActionFilters are bad.  They’re hard to test properly (usually needing horrible mocking of HttpContext etc.) and they often hide complexity and business logic.  This logic is better placed in the model.  We should also endeavour to have our unit-level tests not touch any part of the MVC framework.  If we do need to do that, we should have a helper method that abstracts it away and allows the test code to not touch MVC directly at all.
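
As a sketch of the kind of controller Rob is describing (the service and view model types here are hypothetical), all of the interesting logic lives behind the model, where it can be unit tested in isolation without touching MVC:

using System.Web.Mvc;

public interface IRegistrationService { RegistrationResult Register(RegistrationForm form); }
public class RegistrationForm { public string Username { get; set; } }
public class RegistrationResult { public bool Succeeded { get; set; } }

public class RegistrationController : Controller
{
    private readonly IRegistrationService registrations;

    public RegistrationController(IRegistrationService registrations)
    {
        this.registrations = registrations;
    }

    [HttpPost]
    public ActionResult Register(RegistrationForm form)
    {
        // No branching or business logic here: take the input, hand it to the
        // model, and return a result based on what the model decided.
        var result = registrations.Register(form);
        return result.Succeeded
            ? (ActionResult)RedirectToAction("Welcome")
            : View(form);
    }
}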

To close, Rob gives us the “key takeaways” from his talk:  Follow your nose, focus on the pain and keep the feedback loop fast.  Slides for Rob’s talk are available here.

Lunch

After Rob’s talk, it was time for lunch.  This was provided by the DDDEA team, and consisted of sandwiches, crisps, a drink and even more chocolate-based confectionery.  There was also the token gesture of a piece of fruit, I suppose to give the impression that there were some healthy items in there!

There was even the ability to sit outside of the main room in the Hauser Forum and eat lunch in an al-fresco style.  It was a beautiful day, so many attendees did just that.  The view from the tables on this balcony was lovely.

View

As is often the case at events such as these, there were a number of informal “grok” talks that took place over the lunchtime period.  These are usually 10-minute talks from any member of the audience who cares to get up and talk about a subject that interests them or that they’re passionate about.

Since I was so busy stuffing my face with the lovely lunch that was so kindly provided, I missed the first of the grok talks.  I managed to miss most of the second grok talk, too, which was given by Dan Maharry about his experiences of writing technical books.  As I only caught the very end of Dan’s talk, I saw only one slide, upon which were the wise words, "Copy editors are good. Ghost-writers are bad."  Dan did conclude that whilst writing a technical manual can be very challenging at times, it is worth it when, three months after completing your book, you receive a large package from the publishers with 20-30 copies of your book in there, with your own name in print!

The last grok talk, which I did catch, was given by Richard Dutton on life as a Software Team Lead at Red Bull Racing.  Richard spoke about the team itself and the kind of software they produce to help the Formula 1 team build a better and faster car.  Richard answered the question of “What’s it like to work in F1?”.  He said there are long hours and high pressure, but it’s great to travel and to see first hand how the software you write affects the cars and the race.

The Red Bull Racing development team is about 40 people strong.  About half of these have a MATLAB background rather than .NET/C#.  Richard’s main role is developing software for data distribution and analysis.  He writes software that is used on the race pit walls as well as back at HQ.  They can get a data reading from the car back to the IT systems at HQ within 4 seconds from anywhere in the world!  The main data they capture is GPS data, telemetry data and timing data.  Within each of these categories, there can be thousands of individual data points captured.

Richard spoke about the team’s development methodology and said that they do “sort-of” agile, but not true agile.  It’s short sprints that align with the F1 race calendar.  There are debriefs after each race.  When 2 races are back-to-back on consecutive weekends, there’s only around 6 hours of development time between these two races!

The main language used is C# on .NET 4.5 (with VS2010 & VS2012, TFS and a centralised build system), and they mostly develop WPF applications (with some legacy WinForms stuff in there as well).  There’s also a lot of MATLAB.  They still have to support Windows XP as an OS, as well as more modern platforms like Windows Phone 8.

After the lunchtime and grok talk sessions were over, it was back to the scheduled agenda.  There were two more sessions left for the day, and my first talk of the afternoon was Ashic Mahtab’s “Why use DDD, CQRS and Event Sourcing?”.

Ashic starts by ensuring that everyone is familiar with the terminology of DDD, CQRS and Event Sourcing.  He gives us the 10-second elevator pitch of what each acronym/technology is to ensure we know.  He says that he’s not going to go into detail about what these things are, but rather why and when you should use them.  For the record, DDD is Domain-Driven Design; CQRS is Command Query Responsibility Segregation, which is about having two different models for reading vs. writing of data; and Event Sourcing is about not overwriting a database record with its latest state in one go, but instead writing out the individual changes made to the record over time.  The current state of the record is then derived by combining all of the changes (like delta differences).
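
To make the Event Sourcing idea concrete, here’s a minimal, hypothetical C# sketch (the event and entity names are mine): the current state of a customer is never stored directly, but is rebuilt by replaying its events.

using System.Collections.Generic;

// Each event records one change as it happened, and is immutable once written.
public abstract class CustomerEvent { }
public class CustomerCreated : CustomerEvent { public string Name { get; set; } }
public class EmailChanged : CustomerEvent { public string Email { get; set; } }

public class Customer
{
    public string Name { get; private set; }
    public string Email { get; private set; }

    // Replay the stream of events, in order, to derive the current state.
    public static Customer FromHistory(IEnumerable<CustomerEvent> history)
    {
        var customer = new Customer();
        foreach (var e in history)
            customer.Apply(e);
        return customer;
    }

    private void Apply(CustomerEvent e)
    {
        if (e is CustomerCreated) Name = ((CustomerCreated)e).Name;
        else if (e is EmailChanged) Email = ((EmailChanged)e).Email;
    }
}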

He says that very often applications and systems are designed as “one big model”.  This usually doesn’t work out so well in the end.  Ashic talks about the traditional layered top-down N-Tier architecture and suggests that this is a bad model to follow these days.  Going through many layers makes no sense, and this is demonstrated especially well when looking at reading vs. writing data – something directly addressed by CQRS.  Having your code go through a layer that (for example) enforces referential integrity when only needing to read data rather than writing it is unnecessary as referential integrity can never be violated when reading data, only when data is being written.

Ashic continues his talk by discussing the notion of a ubiquitous language, and that very often a ubiquitous language isn’t actually ubiquitous.  Different people call things by different names.  This often manifests itself between disparate areas of an enterprise: the business analysts may call something by one name, whilst the IT staff may call the same thing by a different name.  We need to use a ubiquitous language, but we also need to understand that it’s often only ubiquitous within a “bounded context”.  A bounded context is a ring-fenced area where a single “language” can be used by everyone within that area of the enterprise and is ubiquitous within that context.  This provides a “delimited applicability of a particular model, gives team members a clear and shared understanding of what has to be consistent and what can develop independently”.

Ashic goes on to talk about the choice of a specific technology stack and how that choice can impact many areas of a project.  A single technology stack, such as the common WISA Microsoft-based stack for web applications (the Windows/Microsoft equivalent of the even more common LAMP stack), can often reduce IT expenditure within an enterprise, but those cost savings can be offset by the complexity of developing part of a complete system using a technology that’s not an ideal fit for the purpose.  An example may be using SQL Server to store documents or binary data, when a document-oriented database would be a much more appropriate solution.

Ashic tells a tale of his current client, who have only one big model for their solution, comprising around 238 individual projects in a single Visual Studio solution.  A simple feature change that was only 3 lines of code required the entire solution to be redeployed.  This in turn required testing/QA, compliance verification and other related disciplines to be re-performed across the entire solution, even though only a tiny portion had actually changed.  The “one big model” had forced them into this situation, whereas multiple, separate models communicating with each other by passing messages in a service-oriented approach would have facilitated a much smaller deployment footprint, and thus a smaller application that needed testing and verification.

Ashic tells us that event sourcing over a RESTful API is a good thing.  Although there’s the possibility of the client application dealing with slightly stale data, it’ll never be “wrong” data, as the messages passed over this architecture are immutable.  Also, if you’re using event sourcing, there’s no need to concern yourself with auditing and logging; all individual changes are effectively audited anyway by virtue of the event sourcing mechanism!  Ashic advises caution when applying event sourcing, though, and consideration should be given to where not to apply it.  If all you’re doing in a certain piece of functionality is retrieving a simple list of countries from a database table that perhaps contains only one or two columns, it’s overkill and will only cause you further headaches if applied.

He states that versioning of data is difficult in a relational database model.  You can achieve rudimentary versioning with a version column on your database tables, or a separate audit table, however this is rarely the best approach in terms of either design or performance.  Event sourcing can significantly help in this regard, too, as rather than versioning an entire record (say a “customer” record, which may consist of dozens of fields), you’re versioning a very small and specific amount of data (perhaps only a single field of the customer record).  The event sourcing message that communicates the change to that one field (or small number of fields) effectively becomes the version itself, as multiple changes to the field(s) will be sent as several different immutable messages.

The talk continues with an examination of many of the tools and technologies that we use today: dependency injection, object mapping (with AutoMapper, for example) and aspect-oriented programming.  Ashic ponders whether these things are really good practice – not so much the techniques that (for example) a dependency-injection container performs, but whether we need the container itself at all.  He says that before DI containers came along, we simply used the factory pattern and wrote our own classes to perform the very same functionality.  Perhaps where such techniques can be written by ourselves, we should avoid leaning upon third-party libraries.  After all, dependency injection can often be accomplished in as little as 15 lines of code!  For something a little more complicated, such as aspect-oriented programming, Ashic uses the decorator pattern instead.  It’s tried and trusted, and doesn’t re-write the IL of your compiled binaries – something which makes debugging very difficult – like many AOP frameworks do.
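
As a rough, hypothetical illustration of both points – a hand-rolled “container” that is really just a dictionary of factories, and a cross-cutting concern handled with a decorator rather than IL-weaving:

using System;
using System.Collections.Generic;

// A tiny hand-rolled "container": a map from an interface type to a factory function.
public class TinyContainer
{
    private readonly Dictionary<Type, Func<object>> factories = new Dictionary<Type, Func<object>>();

    public void Register<T>(Func<T> factory) where T : class
    {
        factories[typeof(T)] = () => factory();
    }

    public T Resolve<T>() where T : class
    {
        return (T)factories[typeof(T)]();
    }
}

// Logging applied via the decorator pattern instead of an AOP framework.
public interface IOrderService { void Place(string orderId); }

public class LoggingOrderService : IOrderService
{
    private readonly IOrderService inner;

    public LoggingOrderService(IOrderService inner) { this.inner = inner; }

    public void Place(string orderId)
    {
        Console.WriteLine("Placing order " + orderId);
        inner.Place(orderId);      // the wrapped service does the real work
    }
}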

Ashic concludes his talk by restating the general theme: don’t use “one big model” to design your system.  Create bounded contexts, each with their own model, and use a service-oriented architecture to pass messages between these model “islands”.  The major drawback to this approach is that there’s a fair amount of modelling work to do upfront to ensure that you properly map the domain and can correctly break it down into multiple discrete models that make sense.

Cheese and Grapes

After Ashic’s talk, there was the first afternoon break.  Each of the three seminar rooms shuffled out into the main hall area to be greeted with a lovely surprise: a table full of delicious local cheeses, pork pies, grapes and artisan bread, lovingly laid on by Rachel Hawley and the generous folks at Gibraltar Software (thanks guys – I think you can safely say that the spread went down very well with the attendees!)

So after we had all graciously stuffed our faces with the marvellous ploughman’s platter, and wet our whistles with more tea and coffee, it was time for the final session of the day.

The final session for me was Tomas Petricek’s “F# Domain-Specific Languages”.

Tomas starts out by mentioning that F# is a growing language with a growing community across the world – user groups, open source projects etc.  It’s also increasingly being used in a wide variety of companies across many different areas.  Credit Suisse use F# for financial processing (perhaps the most obvious market for F#), but the language is also used by companies like Kaggle for machine learning, and by companies like GameSys for developing the server-side components behind software such as Facebook games.

Tomas then demos a sample 3D domain-specific language (or DSL) that combines multiple 3D cylinders, cones and blocks to build ever more elaborate structures from the component parts.  He shows how a 3D “castle” structure can be built from these parts by combining multiple functions from the domain-specific language, each underpinned by F# functions.  He shows that the syntax of the DSL contains very little F# code, only requiring a small number of F# constructs when we come to combine the functions to create larger, more complex ones.

After this demo, Tomas moves on to show us how a DSL for European Call and Put stock options may look.  He explains what call and put options are (Call options are an agreement to buy something at a specific price in the future and Put options are an agreement to sell something at a specific price in the future) and he then shows some F# that wraps functions that model these two options.

Whilst writing this code, Tomas reminds us that everything in F# is statically typed.  Also that everything is immutable.  He talks about how we would proceed to define a domain-specific language for any domain that we may wish to model and create a language for.  He says that we should always start by examining the data that we’ll be working with in the domain.  It’s important to identify the primitives that we’ll be using.  In the case of Tomas’ stock option DSL, his primitives are the call and put options.  It’s from these two primitive functions that further, more complex functions can be created by simply combining these functions in certain ways.  The call and put functions calculate the possible gains and/or losses for each of the two options (call or put) based upon a specific current actual price. Tomas is then able to “pipeline” the data that is output from these functions into a “plot” function to generate a graph that allows us to visualize the data.  He then composes a new function which effectively “merges” the two existing functions before pipelining the result again to create another graph that shows the combined result set on a single graph.  From this we can visualize data points that represent the best possible price at which to either buy or sell our options.

Tomas tells us that F# is great for prototyping, as you’re not constrained by any systems or frameworks: you can simply write functions that accept simple primitive input data, process that data, then output the result.  Further functions are then simply composed of those more basic functions, and this allows for very quick testing of a given hypothesis or theory.

For some sample F# code, Tomas models the domain first like so:

type Option =
| EuropeanPut of decimal
| EuropeanCall of decimal
| Combine of Option * Option

This is simply modelling the domain and the business language used within that domain.  The actual functionality to implement this business language is defined later.

Tomas then decides that the definition can actually be rewritten to something equivalent but slightly better like so:

type OptionKind = Put | Call

type Option =
| European of OptionKind * decimal
| Combine of Option * Option

He can then combine these two put/call options/functions like so:

let Strangle lowPrice highPrice =
    Combine
       (  European(Put, lowPrice),
          European(Call, highPrice)  )

Strangle is the name of a specific type of option in the real world of stock options, one which combines a call option and a put option in a very specific way.  The function called Strangle is now defined and is representative of the domain within which it is used.  This makes it a perfect part of the domain-specific language.

Tomas eventually moves on to briefly showing us a DSL for pattern detection.  He shows a plotted graph that moves up and down as it progresses along the x axis, and how we can use F#-defined, DSL-specific functions to detect that movement up or down.  We start by defining the “primitives”.  That could be the amount of the movement (say, expressed in pixels or some other arbitrary unit we decide to use), and then a “classifier”.  The classifier tells us in which direction the movement is (i.e. up or down).  With these primitives defined, we can create functions that detect this movement based upon a certain number of points plotted on our graph.  Although Tomas didn’t have time to write the code for this as we watched (we were fairly deep into the talk at this point with only a few minutes left), he showed the code he had prepared earlier running live on the monitor in front of us.  He showed how he could create multiple DSL functions, all derived from the primitives, that could determine trends in the movement of the plotted graph over time.  These included detection of: movement of the graph upwards, movement of the graph downwards, and even movement of the graph in a specific bell curve style (i.e. a small downwards movement immediately followed by an upwards movement).  For each of these combined functions, Tomas was able to apply them to the graph in real-time, simply by “wiring up” the graph output – itself a DSL function, in this case a recursive one that simply returned itself with new data on every invocation – to the detection functions of the DSL.

At this point, Tomas was out of time, however what we had just seen was an incredibly impressive display of the expressiveness, the terseness, and the power of F# and how domain-specific languages created using F# can be both very feature rich and functional (pardon the pun!) with the minimum of code.

At this point, the conference was almost over.  We all left our final sessions and re-gathered in the main hall area to finish off the lovely cheese (amazingly there was still some left over from earlier on!) and wait whilst the conference organisers and Hauser Forum staff rearranged the seminar rooms into one big room in which the closing talk from the conference organisers would be given.

After a few minutes, we all shuffled inside the large room and listened as the DDD organisers thanked the conference sponsors, the speakers, the university staff and finally the attendees for making the conference the great success that it was.  And it was indeed a splendid event.  There were then some prizes and various items of swag to be given away (I didn’t win anything :( ), but I’d had a fantastic day at a very well organised and well run event, and I’d learned a lot, too.  Thanks to everyone involved in DDDEA, and I hope it’s just as good next year!