DDD Scotland 2018 In Review


Thursday 15 Feb 2018 at 21:00
Conferences  |  conferences development dotnet ddd


This past Saturday 10th February 2018, the first of a new era of DDD events in Scotland took place.  This was DDD Scotland, held at the University of the West of Scotland in Paisley just west of Glasgow.

Previous DDD events in Scotland have been the DunDDD events in Dundee, but I believe those particular events are no more.  A new team has now assembled and taken on the mantle of representing Scottish developers with a new event on the DDD calendar.

This was a long drive for me, so I set off on the Friday evening after work to travel north to Paisley.  After a long, but thankfully uneventful, journey I arrived at my accommodation for the evening.  I checked in and being quite tired from the drive, I crashed out in my room and quickly fell asleep after attempting to read for a while, despite it only being about 9pm!

The following morning I arose bright and early, gathered my things and headed off towards the university campus to hopefully get one of the limited parking spaces at the DDD Scotland event.  It was only a 10-minute drive to the campus, so I arrived early and was lucky enough to get a parking space.  After parking the car, I grabbed my bag and headed towards the entrance, helpfully guided by a friendly gentleman who had parked next to me and happened to be one of the university's IT staff.


Once inside, we quickly registered for the event by having our tickets scanned.  There were no name badges for this DDD event, which was unusual.  After registration it was time to head upstairs to the mezzanine area where tea, coffee and a rather fine selection of pastries awaited the attendees.  DDD Scotland had 4 separate tracks of talks, with the 4th track largely dedicated to lightning talks and various other community-oriented talks and events taking place in a specific community room.  This was something different from other DDD events and was an interesting approach.

After a little while, further attendees arrived and soon the mezzanine level was very busy.  The time was fast approaching for the first session, so I made my way back downstairs and headed off to the main lecture hall for my first session, Filip W.'s Interactive Development With Roslyn.


Filip starts by saying that this will be a session about ways of running C# interactively, without requiring full programs or a full compile / build step.  He says that many current dynamic or scripting languages have this workflow of writing some code and quickly running it, and that C# / .NET can learn a lot from it.  We start by looking at .NET Core, which has this ability available using the "Watcher".  We can run dotnet watch run from the command line and this will cause the compiler to re-compile any files that get modified and saved within the folder that is being watched.  As well as invoking the dotnet watch command with run, we can invoke it with the test parameter instead, dotnet watch test, which will cause .NET Core to run all unit tests within the watched folder.  This is effectively a poor man's continuous testing.  Filip shows us how we can exclude certain files from the watcher process by adding <ItemGroup> entries into the csproj file.

Next, Filip talks about "Edit & Continue". It was first announced for C# back in 2004; however, Edit & Continue frequently doesn't work, and the VS IDE doesn't help to identify the things that are or are not supported by the Edit & Continue functionality. The introduction of Roslyn helped greatly with Edit & Continue amongst other things. For example, prior to Roslyn, you couldn't edit lambda expressions during an Edit & Continue session, but with Roslyn you can.


Visual Studio 2017 (version 15.3) has finally implemented Edit & Continue for C# 7.0 language features.  Filip shows some sample C# code that will load in a memory stream of already compiled code, perform some small changes to that code and then send the changed stream to the Roslyn compiler to re-compile on the fly!
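To give a flavour of how that works, here's a minimal sketch of my own (not Filip's actual demo code), assuming the Microsoft.CodeAnalysis.CSharp NuGet package, that compiles a source string straight into a MemoryStream and executes the result:

// A minimal sketch: compile C# source on the fly with Roslyn and run the
// result from an in-memory stream.  Assumes the Microsoft.CodeAnalysis.CSharp
// NuGet package is referenced.
using System;
using System.IO;
using System.Reflection;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;

class OnTheFlyCompiler
{
    static void Main()
    {
        var source = "public static class Greeter { public static string Greet() => \"Hello from Roslyn\"; }";

        var compilation = CSharpCompilation.Create(
            assemblyName: "InMemoryAssembly",
            syntaxTrees: new[] { CSharpSyntaxTree.ParseText(source) },
            references: new[] { MetadataReference.CreateFromFile(typeof(object).Assembly.Location) },
            options: new CSharpCompilationOptions(OutputKind.DynamicallyLinkedLibrary));

        using (var stream = new MemoryStream())
        {
            var result = compilation.Emit(stream);           // compile directly into the stream
            if (!result.Success) throw new InvalidOperationException("Compilation failed");

            var assembly = Assembly.Load(stream.ToArray());  // load the freshly compiled bytes
            var greet = assembly.GetType("Greeter").GetMethod("Greet");
            Console.WriteLine(greet.Invoke(null, null));     // "Hello from Roslyn"
        }
    }
}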

From here, we move on to look at the C# REPL.  A REPL is a Read-Eval-Print-Loop.  This existed before C# introduced the Roslyn compiler platform, but it was somewhat clunky to use and had many limitations.  Since the introduction of Roslyn, C# does indeed now have a first class REPL as part of the platform which is built right into, and ships with, the Roslyn package itself, called "CSI".  CSI is the "official" C# REPL, but there are also scriptcs, OmniSharp and CS-REPL, all of which are open source.

Filip says how Roslyn actually introduced a new "mode" in which C# can be executed, specifically to facilitate running individual lines of C# code from a REPL.  This allows you to (for example) declare and initialise a variable without requiring a class and a Main method to serve as the execution context.  Roslyn also supports directives such as #r "System.IO" as a way of introducing references.  Filip also states that it's the only place where valid C# can be written that uses the await keyword without a corresponding async keyword.  We're told that C# REPL compilation works by "chaining" multiple compilations together.  So we can declare a variable on one line, compile it, then use that variable on the next REPL loop, which is compiled separately and "chained" to the previous compilation in order to reference it.  Within Visual Studio, we have the "C# Interactive Window", which is really just the CSI terminal with a nice WPF front-end on top of it, providing such niceties as syntax colouring and Intellisense.
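That "chained compilation" model is also exposed programmatically through the Roslyn scripting APIs, so a rough illustration of my own (assuming the Microsoft.CodeAnalysis.CSharp.Scripting package) looks something like this:

// Each submission is compiled separately, but is chained to the previous
// ScriptState so it can see variables declared earlier.
using System;
using System.Threading.Tasks;
using Microsoft.CodeAnalysis.CSharp.Scripting;

class ReplChaining
{
    static async Task Main()
    {
        // First submission: declare a variable.
        var state = await CSharpScript.RunAsync("int x = 40;");

        // Second submission: compiled separately, chained to the first, so it can see x.
        state = await state.ContinueWithAsync("int y = x + 2;");

        // Third submission: an expression whose value we can read back.
        state = await state.ContinueWithAsync("x + y");
        Console.WriteLine(state.ReturnValue);   // 82
    }
}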

Filip shows us some code that highlights the differences between code that is legal in the REPL and "normal" C# code that exists as part of a complete C# program.  There are a few surprises in there, so it's worth understanding the differences.
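For instance (these are my own illustrations rather than Filip's slides), the following is perfectly legal when typed into CSI or the C# Interactive window, but wouldn't compile inside an ordinary class-based C# source file:

// Top-level declarations and statements, with no class or Main method required.
int counter = 40;
System.Console.WriteLine(counter + 2);

// 'await' used with no enclosing 'async' method.
await System.Threading.Tasks.Task.Delay(100);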


Filip goes on to talk about a product called Xamarin Workbooks.  This is an open source piece of software that fuses documentation together with interactive code.  It allows the writing of documentation files, usually in a tutorial style, in Markdown format, with the ability to embed some C# (or other language) code inside.  When the markdown file is rendered by the Xamarin Workbooks application, the included C# code can be compiled and executed from within the application rendering the file.  It's this kind of functionality that powers many of the online sites offering the ability to "try X" for different programming languages (e.g. Try F#, the Go playground etc.).

After Filip's talk, it was time to head back to the mezzanine level for further tea and coffee refreshments as well as helping to finish off some of the delicious pastries that had amazingly been left over from the earlier morning breakfast.  After a quick refreshment, it was time for the next session which, for me, was in the same main hall that I'd previously been in and this one was Jonathan Channon's Writing Simpler ASP.NET Core.


Jonathan started by talking about SOLID.  These are the principles of Single Responsibility, Open/Closed, Liskov Substitution, Interface Segregation and Dependency Inversion.  We've probably all used these principles to guide the code that we write, but Jonathan asks whether strict adherence to SOLID is actually the best approach.  He shows us a code sample with a large number of parameters on the constructor.  Of course, all of the parameters are there to inject the dependencies into the class:

public class FooService
{
    public FooService(ICustomerRepository customerRepository,
                      IBarRepository barRepository,
                      ICarRepository carRepository,
                      IUserRepository userRepository,
                      IPermissionsRepository permissionsRepository,
                      ITemplateRepository templateRepository,
                      IEndpointRepository endpointRepository)
    {
        // ...
    }
}

Jonathan says how one of the codebases that he works with has many classes like this, with many constructor parameters.  After profiling the application with JetBrains' dotTrace, a number of performance issues were found, the major one being the extensive use of reflection by the IoC framework in order to provide those dependencies to the class.

Jonathan proceeds with his talk and mentions that it'll be rather code heavy.  He's going to show us a sample application written in a fairly typical style using the SOLID principles and then "morph" that project through a series of refactorings into something that's perhaps a bit less SOLID, but more readable and easier to reason about.  He shows us some more code for a sample application and tells us that we can get this code for ourselves from his GitHub repository.  We see how the application is initially constructed with the usual set of interface parameters on the constructors of the MVC controller classes.  This can be vastly improved by the use of a mediator pattern, which can be provided by such libraries as MediatR.  Using such a pattern means that the controller class only needs a single injection of an IMediator instance.  Controller action methods simply create a command instance which is handed off, via the mediator, to a handler class, thus removing that long list of dependencies from the controller class.  So we can turn code like this:

public class MyController
{
    private IFoo foo;
    private IBar bar;
    private IBaz baz;
    private ICar car;
    private IDoo doo;

    public MyController(IFoo fooDependency,
                        IBar barDependency,
                        IBaz bazDependency,
                        ICar carDependency,
                        IDoo dooDependency)
    {
        // ...
    }

    public IEnumerable<Foo> Get()
    {
        var foos = foo.GetFoos();
        return MapMyFoos(foos);
    }

    // Other methods that use the other dependencies.
}

Into code a bit more like this:

using System.Collections.Generic;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;
using MediatR;

public class MyMediatedController
{
    private readonly IMediator mediator;

    public MyMediatedController(IMediator mediator)
    {
        this.mediator = mediator;
    }

    public Task<IEnumerable<Foo>> Get()
    {
        var message = new GetFoosMessage();
        return this.mediator.Send(message);
    }

    // Other methods that send messages to the same Mediator.
}


// The message is a simple request object declaring what it returns.
public class GetFoosMessage : IRequest<IEnumerable<Foo>> { }

public class GetFoosMessageHandler : IRequestHandler<GetFoosMessage, IEnumerable<Foo>>
{
    public Task<IEnumerable<Foo>> Handle(GetFoosMessage message, CancellationToken cancellationToken)
    {
        // Code to return a collection of Foos.
        // This may use the IFoo repository and the GetFoosMessage
        // can contain any other data that the task of Getting Foos
        // might require.
        return Task.FromResult(Enumerable.Empty<Foo>());
    }
}

The new MyMediatedController now has only one dependency, and that's on the mediator itself.  The mediator is responsible for sending "messages", in the shape of object instances, to the class that "handles" each message.  It's the responsibility of that handler class to perform the task required (getting Foos in our example above), relieving the controller class of having to be loaded down with lots of different dependencies.  Instead, the controller focuses on the thing a controller is supposed to do: simply orchestrating the incoming request with the code that actually performs the task requested.  Of course, we now have a dependency on the MediatR framework, but we can remove this by "rolling our own" mediator pattern code, which is fairly simple to implement.  Jonathan mentions his own Botwin framework, which is a web framework that takes the routing techniques of the NancyFX framework and applies them directly on top of ASP.NET Core.  By using this in conjunction with a hand-rolled mediator pattern, we can get code (and especially controller code) that has the same readability and succinctness, but without the external dependencies (apart from Botwin, of course!).
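A hand-rolled mediator really can be very small.  Here's a rough sketch of my own (not Jonathan's or Botwin's code) of the core idea: handlers are registered against message types and the mediator simply looks the right one up and invokes it:

using System;
using System.Collections.Generic;

public interface IHandler<TMessage, TResult>
{
    TResult Handle(TMessage message);
}

public class SimpleMediator
{
    // Maps a message type to the handler registered for it.
    private readonly Dictionary<Type, object> handlers = new Dictionary<Type, object>();

    public void Register<TMessage, TResult>(IHandler<TMessage, TResult> handler)
        => handlers[typeof(TMessage)] = handler;

    public TResult Send<TMessage, TResult>(TMessage message)
    {
        // Look up the handler for this message type and invoke it.
        var handler = (IHandler<TMessage, TResult>)handlers[typeof(TMessage)];
        return handler.Handle(message);
    }
}

A controller would then call something like mediator.Send<GetFoosMessage, IEnumerable<Foo>>(new GetFoosMessage()).  Real implementations add things like async handlers and automatic handler discovery, but the essence is just a dictionary lookup.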

Next, Jonathan talks about the idea of removing all dependency injection.  He cites an interesting blog post by Mike Hadlow that talks about how C# code can be made more "functional" by passing what would have been class constructor dependencies into the individual methods that require them.  From there, we can compose our functions and use techniques such as partial application to supply a method's dependency in advance, leaving us with a function that we can pass around and use without having to supply the dependency each time; callers only supply the other data that the method operates on.  So, instead of code like this:

public interface IFoo
{
    int DoThing(int a);
}

// A class that needs an IFoo has it injected through the constructor
// in the usual way.
public class FooConsumer
{
    private readonly IFoo fooDependency;

    public FooConsumer(IFoo fooDepend)
    {
        fooDependency = fooDepend;
    }

    public int DoThing(int a)
    {
        // implementation of DoThing that makes use of the fooDependency
        return fooDependency.DoThing(a);
    }
}

We can instead write code like this:

public static class FunctionalFoo
{
    public static int DoThing(IFoo fooDependency, int a)
    {
        // some implementation that uses the IFoo.
        return fooDependency.DoThing(a);
    }
}

// Somewhere in the application's composition root:
var dependency = new FooDependency();  // Implements IFoo

// Closes over the dependency variable and provides a function that
// can be called by only passing the int required.
Func<int, int> DoThingWithoutDependency = x => FunctionalFoo.DoThing(dependency, x);

Now, the dependency of the DoThing function has already been composed by some code - this would perhaps be a single bootstrapper-style class that sets up all the dependencies for the application in one central location - and the DoThingWithoutDependency function represents the DoThing function with its dependency partially applied, meaning that other code that needs to call DoThing calls DoThingWithoutDependency instead and no longer needs to supply the dependency.  Despite the use of a static method, this code remains highly testable, as the DoThingWithoutDependency function can be re-defined within a unit test, similar to how we would currently use a mock implementation of our interface, but without requiring a mocking framework.  Another dependency removed!
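As a quick illustration of that testability claim (my own example, not code from the talk), anything that depends on a Func<int, int> can simply be handed a plain lambda in a test, with no mocking framework involved:

using System;

public class ThingUser
{
    private readonly Func<int, int> doThing;

    public ThingUser(Func<int, int> doThing) => this.doThing = doThing;

    public int DoubleTheThing(int a) => doThing(a) * 2;
}

public static class ThingUserTests
{
    public static void Returns_double_of_whatever_DoThing_produces()
    {
        // "Mock" the dependency with a plain lambda - no mocking framework needed.
        var sut = new ThingUser(x => x + 1);

        var result = sut.DoubleTheThing(2);

        if (result != 6) throw new Exception("Expected 6 but got " + result);
    }
}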

Jonathan rounds off his talk by asking if we should still be building applications using SOLID.  Well, ultimately, that's for us to decide.  SOLID has many good ideas behind it, but perhaps it's our current way of applying SOLID within our codebases that needs to be examined.  And as Jonathan has demonstrated for us, we can still have good, readable code without excessive dependency injection that still adheres to many of the SOLID principles of single responsibility, interface segregation etc.

After Jonathan's talk it was time for a short break.  The tea and coffee were dwindling fast, but there would be more for the breaks in the afternoon's sessions.  I'd had quite a fair bit of coffee by this point, so I decided to find my next session.  The room was quite some way across the other side of the building, so I headed off to get myself ready for Robin Minto's Security In Cloud-Native.


Robin starts by introducing himself and talking about his background.  He started many years ago when BBC Micros were in most school classrooms.  He and his friend used to write software to "take over" the school's network of BBC machines and display messages and play sounds.  It was from there that both Robin and his school friend became interested in security.

We start by looking at some numbers around security breaches that have happened recently.  There are currently over 10 billion data records that have been lost or stolen.  This number is always growing, especially nowadays as more and more of our lives are online and our personal information is stored in a database somewhere, so security of that information is more important now than it's ever been.  Robin then talks about "cloud-native" and asks what the definition is.  He says it's not simply "lift-and-shift" - the simple moving of virtual machines that were previously hosted on on-premise hardware onto a "cloud" platform.  We look at the various factors stated in the 12 Factor App documentation that can help us get a clearer understanding of cloud native.  Cloud-native applications are built for the cloud first.  They run in virtual machines, or more likely containers these days, are resilient to failures and downtime, and expect their boundaries to be exposed to attack, so security is a first class consideration when building each and every component of a cloud-native application.  Robin makes reference to a talk by Pivotal's Sam Newman at NDC London 2018 that succinctly defines cloud-native as applications that make heavy use of DevOps, Continuous Delivery, Containers and Micro-Services.

We look at the biggest threats in cloud native, and these can be broadly expressed as Vulnerable Software, Leaked Secrets and Time.  To address the problem of vulnerable software, we must continually address defects, bugs and other issues within our own software.  Continuous repair of our software must be part of our daily software development tasks.  We also address this through continuous repaving.  This means tearing down virtual machines, containers and other infrastructure and rebuilding them.  This allows operating systems and other infrastructure software and configuration to be continually rebuilt, preventing any potential malware from infecting our systems and remaining dormant there over time.

We can address the potential for secrets to be leaked by continually changing and rotating credentials and other secrets that our application relies on.  Good practices around handling and storing credentials and secrets should be part of the development team's processes, to ensure such things as committing credentials into our source code repositories don't happen.  This is important not only for public repositories but for private ones too.  What's private today could become public tomorrow.  There are many options now for using separate credential/secrets stores (for example, HashiCorp's Vault or IdentityServer), which ensure we keep sensitive secrets out of potentially publicly accessible places.  Robin tells us how 81% of breaches are based on stolen or leaked passwords, so it's probably preferable to prevent users from selecting such insecure passwords in the first place by simply prohibiting their use.  The same applies to data such as environment variables.  They're potentially vulnerable when stored on the specific server that needs them, so consider moving them off the server and onto the network to increase security.

Time is the factor that runs through all of this.  If we change things over time, malware and other bad actors seeking to attack our system have a much more difficult time.  Change is hostile to malware, and through repeated use of a Repair, Repave and Rotate approach, we can greatly increase the security of our application.

Robin asks if, ultimately, we can trust the cloud.  There are many companies involved in the cloud these days, but we mostly hear about the "Tier 1" players: Amazon, Microsoft and Google.  They're so big, and can invest so much in their own cloud infrastructure, that they're far more likely to have much better security than anything we could provide ourselves.  Robin gives us some pointers to places we can go to find tools and resources to help us secure our applications.  OWASP is a great general resource for all things security related.  OWASP have their own Zed Attack Proxy project, which is a software tool to help find vulnerabilities in your software.  There's also the Burp Suite, which can help in this regard.  There are also libraries such as Retire.js that can help to identify those external pieces of code that a long-running code base accumulates, and which can, and should, be upgraded over time as new vulnerabilities are discovered and subsequently fixed in newer versions.


After Robin's talk it was time for lunch.  We all headed back towards the mezzanine upper floor in the main reception area to get our food.  As usual, the food was a brown bag containing crisps, a chocolate bar and a piece of fruit, along with a choice of sandwich.  I was most impressed with the sandwich selection at DDD Scotland as there were a large number of options available for vegetarians and vegans (and meat eaters, too!).  Usually, there's perhaps only one non-meat option available, but here we had around 3 or 4 vegetarian options and a further 2 or 3 vegan options!  I chose my sandwich from the vegan options (a lovely falafel, houmous and red pepper tapenade), picked up a brown bag and headed off to one of the tables downstairs to enjoy my lunch.


I wasn't aware of any grok talks going on over the lunch period, and this was possibly due to the separate track of community events that ran throughout the day, concurrently with the "main" talks.  I scanned my agenda printout and realised that we actually had a large number of sessions throughout the entire day.  We'd had 3 sessions in the morning and there were another 4 in the afternoon, for a whopping total of 7 sessions in a single track.  This is far more than the usual 5 sessions (3 before lunch and 2 afterwards) at most other DDD events, and meant that each session was slightly shorter at 45 minutes long rather than 60.

After finishing my lunch, I popped outside for some fresh air and to take a brief look around the area of Paisley where we were located.  After a fairly dull and damp start to the day, the weather had now brightened up and, although still cold, it was now a pleasant day.  I wandered around briefly and took some pictures of local architecture before heading back to the university building for the afternoon's sessions.  The first session I'd selected was Gary Fleming's APIs On The Scale Of Decades.


Gary starts with a quote.   He shares something that Chet Haase had originally said:

"API's are hard. They're basically ship now and regret later".

Gary tells us that today's APIs aren't perfect, but they're based upon a continually evolving understanding of what constitutes a "good" API.  So, what makes a modern API "good"?  They need to be both machine readable and human readable.  They need to be changeable, testable and documented.

Gary talks about an interesting thing called "affordance".  The term was first coined by the psychologist James J. Gibson, and the Merriam-Webster dictionary defines affordance as:

the qualities or properties of an object that define its possible uses or make clear how it can or should be used

Affordance can be seen as "implied documentation".  When we see a door with a handle and perhaps a "Pull" sign, we know that we can use the handle to pull the door open.  The handle and spout on a teapot indicate that we would use the handle to hold the teapot and tilt it to pour the liquid out through the spout.  This is what's known as "perceived affordance".  Gary mentions the floppy disk icon that's become ubiquitous as an icon representing the "Save" action within many pieces of software.  The strange thing is that many users of that software, who implicitly understand that the disk icon means "Save", have never seen an actual floppy disk.

It turns out that affordance is incredibly important, not only in the design of everyday things, but in the design of our APIs.  Roy Fielding was the first to talk about RESTful APIs.  These are APIs that conform to the REST style and are largely self-documenting.  These APIs espouse affordance in their design, delivering not only the data the user requested for a given API request, but also further actions that the user can take based upon the data delivered.  This could be presenting the user with a single "page" of data from a large list and giving the user direct links to navigate to the previous and next pages of data within the list.

This is presenting Information and Controls.  Information + Controls = Better API.  Why is this the case?  Because actions contextualise information, which in turn contextualises actions.
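As a simple illustration of "information plus controls" (my own sketch, not something Gary showed), a paged API response might carry its data alongside links describing where the consumer can go next:

using System.Collections.Generic;

public class OrderSummary
{
    public int Id { get; set; }
    public string Description { get; set; }
}

// The response carries the requested information *and* the controls (links)
// describing the actions the consumer can take from here.
public class PagedOrdersResponse
{
    public IEnumerable<OrderSummary> Items { get; set; }

    public string Self { get; set; }      // e.g. "/orders?page=2"
    public string Previous { get; set; }  // e.g. "/orders?page=1"
    public string Next { get; set; }      // e.g. "/orders?page=3"
}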

We look at nouns and verbs and their usage and importance as part of affordance.  They're often discoverable via context, and having domain knowledge can significantly help this discovery.  We look at change.  Gary mentions the philosophical puzzle, "The Ship Of Theseus", which asks whether a ship that has all of its component parts individually replaced over time is still the same ship.  There's no right or wrong answer to this, but it's an interesting thought experiment.  Gary also mentions something called biomimicry, which is where objects are modelled after a biological object to give the non-biological object better attributes.  Japanese Shinkansen trains (bullet trains) have their noses modelled after a kingfisher's beak to prevent the sonic boom the train would otherwise produce when exiting tunnels.

Gary moves on to talk about testing.  We need lots of tests for our APIs, and it's having extensive tests that allows us to change our API faster and more easily.  This is important as APIs should be built for change, and change should be based upon consumer-driven contracts - the things that people actually use and care about.  As part of that testing, we should use various techniques to ensure that expectations around the API are not based upon fixed structures.  For example, consumers shouldn't rely on the fact that your API may have a URL structure that looks similar to .../car/123.  The consumer should be using the affordance exposed by your API in order to navigate and consume your API.  To this end, you can use "fuzzers" to modify endpoints and parameters as well as data.  This breaks expectations and forces consumers to think about affordance.  Gary says that it's crucial for consumers to use domain knowledge to interact with your API, not fixed structures.  It's for this reason that he dislikes JSON with Swagger etc. as an API delivery mechanism, as it's too easy for consumers to become accustomed to the structure and grow to depend upon exactly that structure.  They, therefore, don't notice when you update the API - and thus the Swagger documentation - and their consumption breaks.

Finally, Gary mentions what he believes is the best delivery mechanism for an API.  One that provides for rich human and machine readable hyperlinks and metadata and exposes affordance within its syntax.  That mechanism is HTML5!  This is a controversial option and had many of the attendees of the session scratching their heads but, thinking about it, there's a method to the madness here.  Gary says how HTML5 has it all - affordance, tags, links, semantic markup.  Gary says how a website such as GitHub IS, in effect, an API.  We may interact with it as a human, but it's still an API, and we use hypermedia links to navigate from one piece of data to another page with alternative context on the same data, e.g. when looking at the content of a file in a repository, there are links to show the history of that file, navigate to other files in the same repository etc.

After Gary's session was over it was time for another short break before the next session of the afternoon.  This one was to be Joe Stead's Learning Kotlin As A C# Developer.


Joe states that Kotlin is a fairly new open source language that takes many of the best bits of existing languages from many different disciplines (i.e. object-oriented, functional etc.) with the aim of creating one overall great language.  Kotlin targets the JVM, so it is in good company with other languages such as Java, Scala, Clojure and many others.  Kotlin is a statically typed, object oriented language that also includes many influences from more functional languages.

As well as compiling to JVM byte code, Kotlin can also compile to JavaScript, and who doesn't want that ability in a modern language?   The JavaScript compilation is experimental at the moment, but does mostly work.  Because it's built on top of the familiar Java toolchain, build tools for Kotlin are largely the same as Java - either Maven or Gradle.

Joe moves on to show us some Kotlin code.  He shows how a simple function can be reduced to a more concise form, similar to how C# has expression-bodied members, so this:

fun add(a : Int, b : Int) : Int
{
    return a+b
}

Can be expressed as:

fun add(a: Int, b : Int) = a + b

In the second example, the return type is inferred, and we can see how the language differs from C# with the data types expressed after the variable name rather than in front of it.  Also, semicolons are entirely optional.  The var keyword in Kotlin is very similar to var in C#, meaning that variables declared as var without an explicit type must be initialised at the time of declaration so that their type can be inferred; they remain strongly typed.  Variables declared with var are still writable after initialisation, albeit only with values of the same type.  Kotlin introduces another way of declaring and initialising variables with the val keyword.  val works similarly to var, strongly typing the variable to its initialisation value's type, but it also makes the variable read only after initialisation.  val can be used within a class's primary constructor too, meaning that the parameters to a class's constructor can be declared with the val keyword and they're read only after construction of an instance of the class.  Kotlin also implicitly makes these parameters available as read only properties, thus code like the following is perfectly valid:

class Person (val FirstName : String, val LastName : String)

fun main(args: Array<String>) {
    var myPerson = Person("John", "Doe")
    println(myPerson.FirstName)
}

The class's parameters could also have been declared with var instead of val, and Kotlin would provide us with both readable and writable properties.  Note also that Kotlin doesn't use the new keyword to instantiate classes, but simply invokes the class name as though it were a function.  Kotlin has the same class access modifiers as C#, so classes can be private, public, protected or internal.

Kotlin has a concept of a specific form of class known as a data class.  These are intended for use by classes whose purpose is to simply hold some data (aka a bag of properties).  For these cases, it's common to want to have some default methods available on those classes, such as equals(), hashCode() etc.  Kotlin's data classes provide this exact functionality without the need for you to explicitly implement these methods on each and every class.

There's a new feature that may be coming in a future version of the C# language that allows interfaces to have default implementations for methods, and Kotlin has this ability built in already:

fun main(args: Array<String>) {
    var myAnimal = Animal()
    myAnimal.speak()
}

interface Dog {
    fun speak() = println("Woof")
}

class Animal : Dog { }

One caveat to this is that if you have a class that implements multiple interfaces that expose the same method, you must be explicit about which interface's method you're calling.  This is done with code such as super<Dog>.speak() or super<Cat>.speak(), and is similar to how C# has explicit implementation of an interface.

Kotlin provides "smart casting", this means that we can use the "is" operator to determine if a variable's type is of a specific subclass:

fun main(args: Array<String>) {
    var myRect = Rectangle(34.34)
    var mySquare = Square(12.12)
    println(getValue(myRect))
    println(getValue(mySquare))
}

fun getValue(shape : Shape) : Double
{
    if(shape is Square)
    {
        return shape.edge
    }
    if(shape is Rectangle)
    {
        return shape.width
    }
    return 0.toDouble()
}

interface Shape {} 

class Square(val edge : Double) : Shape {}
class Rectangle(val width : Double) : Shape {}

This can be extended to perform pattern matching, so that we can re-write the getValue function thusly:

fun getValue(shape : Shape) : Double
{
    when (shape)
    {
        is Square -> return shape.edge
        is Rectangle -> return shape.width
    }
    return 0.toDouble()
}

Kotlin includes extension functions, similar to C#'s extension methods; however, the syntax is different, and they're declared simply by defining a new function whose name is prefixed with the type being extended, i.e. fun MyClass.myExtensionMethod(a : Int) : Int.  We also have lambda expressions in Kotlin and, again, these are similar to C#.  For lambdas that take only a single parameter, the parameter declaration can be omitted and the parameter referred to by the implicit name "it", i.e. ints.map { it * 2 }, and we can write LINQ-style expressive code such as strings.filter { it.length == 5 }.sortedBy { it }.map { it.toUpperCase() } or var longestCityName = addresses.maxBy { it.city.length }.  Kotlin also has a concept of lambdas that have "receivers" attached.  These are similar to extension functions in that they work against a specific type, but have the added ability that they can be stored in properties and passed around to other functions:

fun main(args: Array<String>) {
    println("123".represents(123))
    println(123.represents("123"))
}

// This is an extension method
fun String.represents(another: Int) = toIntOrNull() == another

// This is a Lambda with receiver
val represents: Int.(String) -> Boolean = {this == it.toIntOrNull()}

Joe tells us that there's a lot more to Kotlin and that he's really only able to scratch the surface of what Kotlin can do within his 45-minute talk.  He provides us with a couple of books that he considers good reads if we wish to learn more about Kotlin: Kotlin In Action and Programming Kotlin.  And, of course, there's the excellent online documentation too.

After Joe's session, it was time for another refreshment fuelled break.  We made our way to the mezzanine level once again for tea, coffee and a nice selection of biscuits.  After a quick cup of coffee and some biscuits it was time for the next session in the packed afternoon schedule.  This would be the penultimate session of the day and was to be Kevin Smith's Building APIs with Azure Functions.


Kevin tells us how Azure functions are serverless pieces of code that can operate in Microsoft's Azure cloud infrastructure.  They can be written in a multitude of different languages, but are largely tied to the Azure infrastructure and back-end that they run on.

Kevin looks at how our current APIs are written.  Granular pieces of data are often grouped together into a JSON property that represents the complete object that we're returning, and this allows us to add additional properties to the response payload, such as HAL-style hypermedia links.  This makes it amenable to being a self-documenting API, and if this kind of response is returned from a single Azure Function call, we can make a disparate set of independent Azure Functions appear, to the user, to be a cohesive API.

Kevin shows us how Azure Functions can have HTTP triggers configured against them.  These allow Azure Functions, which are otherwise just simple programmatic functions, to be accessed and invoked via a HTTP request - it's this ability that allows us to build serverless APIs with Azure Functions.  We look at an extension to Visual Studio that allows us to easily build Azure Functions, called "Visual Studio Tools For Azure Functions", funnily enough.  Kevin mentions that by using this extension, you can develop Azure Functions and both run and debug those functions on a local "emulator" of the actual Azure environment.  This means that it's easy to get your function right before you ever need to worry about actually deploying it to Azure.  This is a benefit that Azure Functions has over one of its most popular competitors, AWS Lambda.  Another benefit of Azure Functions over AWS Lambda is that AWS Lambda requires you to use another AWS service, namely API Gateway, in order to expose a serverless function over HTTP.  This has an impact on cost, as you're now paying for both the function itself and the API Gateway configuration that allows that function to be invoked via a HTTP request.  Azure Functions has no equivalent of AWS's API Gateway as it's not required, so you're simply paying for the function alone.
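For a flavour of what an HTTP-triggered function looks like, here's a minimal sketch in the C# class-library style (the route, names and response shape are my own illustrations, not Kevin's demo):

using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class GetFooFunction
{
    [FunctionName("GetFoo")]
    public static IActionResult Run(
        // The HttpTrigger binding exposes this function over HTTP GET at /api/foos/{id}.
        [HttpTrigger(AuthorizationLevel.Function, "get", Route = "foos/{id}")] HttpRequest req,
        string id,
        ILogger log)
    {
        log.LogInformation("Fetching foo {Id}", id);

        // In a real function this would look the item up in a data store.
        return new OkObjectResult(new { id, name = "A foo", _links = new { self = $"/api/foos/{id}" } });
    }
}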

As well as this local development and debugging ability, we can deploy Azure Functions from Visual Studio to the cloud just as easily as we can any other code.  There's the usual "Publish" method which is part of Visual Studio's Build menu, but there's also a new "Zip Deploy" function that will simply create a zip archive of your code and push it to Azure.

Azure Functions has the ability to generate OpenAPI documentation for your functions built right into the platform.  OpenAPI is the new name for Swagger.  It's as simple as enabling the OpenAPI integration within the Azure portal, and the documentation is generated for you.  We also look at how Azure Functions can support Cross-Origin Resource Sharing via a simple additional HTTP header, so long as the 3rd party origin is configured within Azure itself.

There are many different authorisation options for Azure Functions.  There's a number of "Easy Auth" options which leverage other authentication that's available within Azure such as Azure Active Directory, but you can easily use HTTP Headers or URL Query String parameters for your own hand-rolled custom authentication solution.

Kevin shares some of the existing limitations of Azure Functions.  It's currently quite difficult to debug Azure Functions that are running within Azure, and the "hosting" of the API is controlled for you by Azure's own infrastructure, so there's little scope for alterations there.  Another, quite frustrating, limitation is that because Azure Functions is in continual development, it's not unheard of for Microsoft to roll out new versions which introduce breaking changes.  This has affected Kevin on a number of occasions, but he states that Microsoft are usually quite quick to fix any such issues.

After Kevin's session was over it was time for a short break before the final session of the day.  This one was to be Peter Shaw's TypeScript for the C# Developer.


Peter starts by stating that his talk is about why otherwise "back-end" developers using C# should consider moving to "front-end" development.  More and more these days, we're seeing most web-based application code existing on the client-side, with the server-side (or back-end) being merely an API to support the functionality of the front-end code.  All of this front-end code is currently written in JavaScript.  Peter also mentions that many of today's "connected devices", such as home appliances, also run on software, and that this software is frequently written in JavaScript running on Node.js.

TypeScript makes front-end development work with JavaScript much more like C#.  TypeScript is a superset of JavaScript that is 100% compatible with JavaScript.  This means that any existing JavaScript code is, effectively, also TypeScript code, which makes it incredibly easy to start migrating to TypeScript if you already have an investment in JavaScript.  TypeScript is an ECMAScript 6 transpiler and originally started as a way to provide strong typing to the very loosely typed JavaScript language.  Peter shows us how we can decorate our variables in TypeScript with type annotations, allowing the TypeScript compiler to enforce type safety:

// person can only be a string.
var person : string;

// This is valid.
person = "John Doe";

// This will cause a compilation error.
person = [0,1,2];

TypeScript also includes union types, which work hand-in-hand with type guards, "relaxing" the strictness of the typing by allowing you to specify several different types that a variable is allowed to hold:

// person can either be a string or an array of numbers.
var person : string | number[];

// This is valid, it's a string.
person = "John Doe";

// This is also valid, it's an array of numbers.
person = [0,1,2];

// This is invalid.
person = false;

Peter tells us how the TypeScript team are working hard to help us avoid the usage of null or undefined within our TypeScript code, but it's not quite a solved problem just yet.  The current best practice does advocate reducing the use and reliance upon null and undefined, however.

TypeScript has classes, and classes can have constructors.  We're shown that they are defined with the constructor() keyword.  This is not really a TypeScript feature, but is in fact an underlying ECMAScript 6 feature.  Of course, constructor parameters can be strongly typed using TypeScript's typing.  Peter tells us how, in the past, TypeScript forced you to make an explicit call to super() from the constructor of a derived class, but this is no longer required.

TypeScript has modules.  These are largely equivalent to namespaces in C# and help to organise large TypeScript programs - another major aim of using TypeScript over JavaScript.  Peter shares one "gotcha" with module names in TypeScript: unlike namespaces in C#, which can be aliased to a shorter name, module names in TypeScript must be referred to by their full name.  This can get unwieldy if you have very long module, class and method names, as you must explicitly qualify calls to a method via the complete name.  TypeScript also has support for generics.  They work similarly to C#'s generics and are also defined similarly:

function identity<T>(arg: T): T {
    return arg;
}

You can also define interfaces with generic types, then create a class that implements the interface and defines the specific type:

interface MyInterface<T>{
    myValue : T;
}

class myClass implements MyInterface<string>{
    myValue = "some string value";
}

let myThing = new myClass();

// string.
console.log(typeof(myThing.myValue));

The current version of TypeScript will generate ES6-compatible JavaScript upon compilation; however, this can be changed so that ES5-compliant JavaScript is generated, simply by setting a compiler flag on the TypeScript compiler.  Many of the newer features of ES6 are now implemented in TypeScript, such as lambdas (aka arrow functions) and default parameter values.  To provide rich support for externally defined types, TypeScript makes use of definition files.  These are files that have a ".d.ts" extension and provide for rich type support as well as editing improvements (such as Intellisense for those editors that support it).  The canonical reference source for such definition files is the definitelytyped.org website, which currently contains well over 5000 files providing definitions for the types contained in a large number of external JavaScript libraries and frameworks.  Peter tells us how TypeScript is even being adopted by other frameworks and mentions how modern Angular versions are actually written in TypeScript.


After Peter's session was over it was time for a final quick break before the conference wrap-up and prize session, which would be taking place in the main hall.  Because this particular DDD had 7 sessions throughout the day, it ran a little later than other DDDs do, and it was approximately 5:30pm by the time Peter's session finished.  With a long day behind me and a further 3.5+ hour drive home ahead of me that evening, I unfortunately wasn't able to stay around for the closing ceremony and decided to walk back to my car to start that long journey home.  I'd had a brilliant day at the inaugural DDD Scotland organised by the new team.  It was well worth the long journey there and back, and here's hoping it will continue for many more years to come.