Stacked 2015 In Review

On Wednesday 18th November 2015, the third Stacked event was held.  The Stacked events are community events primarily based around Windows development.  The events are free to attend and are organised by a collective group of folks from Mando Group and Microsoft UK, with sponsorship from additional companies.  The last two Stacked events were held in Liverpool in 2013 and, after a year off in 2014, Stacked returned in 2015 with an impressive line-up of speakers and talks, and a new venue at the Comedy Store at Deansgate Locks in Manchester.

Being in Manchester, it was only a short train ride for me to arrive bright and early on the morning of the conference.  Registration was taking place from 8:30am to 9:10am, and I’d arrived just around 9am.  After checking in and receiving my conference lanyard, I proceeded to the bar area where complimentary tea and coffee was on offer.  After only having time for a quick cup of coffee, we were called into the main area of the venue, which was the actual stage area of the comedy club, for the first session of the day.

The first session was Mike Taulty’s Windows 10 and the Universal Windows Platform for Modern Apps.  Mike’s session was dedicated to showing us how simple it is to create applications on the Universal Windows Platform.  Mike starts by defining the Universal Windows Platform (UWP).  The UWP is a way of writing applications using a combination of one of the .NET languages (C# or VB.NET) along with a specific “universal” version of the .NET runtime, known as .NET Core.  Mike explains that, as Windows 10 is available on so many devices and different categories of devices (PCs, laptops, tablets, phones and even tiny IoT devices such as the Raspberry Pi!), the UWP sits “on top” of the different editions of Windows 10 and provides an abstraction layer allowing largely unified development on top of the UWP.  Obviously, not every “family” of devices shares the same functionality, so APIs are grouped into “contracts”, with different contracts being available for different classes of device.

Building a UWP application is similar to how you might build a Windows WPF application.  We use XAML for the mark-up of the user interface, and C#/VB.NET for the code behind.  Similar to WPF applications, a UWP application has an app.xaml start-up class.  Mike decides he’s going to plunge straight into a demo to show us how to create a UWP application.  His demo application connects, via Bluetooth, to a SpheroBall (which is a great toy – it’s a small motorised ball that can be “driven” wirelessly and can light up in various RGB colours).  Mike will show this same application running on a number of different devices.

Mike first explains the make-up and structure of a UWP application.  The files we’ll find inside a UWP app – such as assets, pictures, resources etc. – are separated by “device family” (i.e. PC, tablet, phone etc.), so we’d have different versions of each image for each device family we’re targeting.  Mike explains how UWP (really XAML) applications can be “adaptive” – the equivalent of a “responsive” web site.  Mike builds up his application using some pre-built snippets of code and fills in the blanks, showing us how, using compiler directives, we can have certain code only invoked if we’re running on a specific device.  Mike demos his app first on a laptop PC, then a Windows 10 phone and finally a Raspberry Pi.  Mike shows how we can deploy to, and control, the Raspberry Pi – which is running Windows 10 IoT Core – by either remote PowerShell or, alternatively, via a web UI built into Windows 10 IoT Core on the device.

Mike says that when we’re building an app for IoT devices (such as the Raspberry Pi, Arduino etc.) we will often need a reference to an extension library that is specific to the IoT Core platform.  This extension library, which is referenced from within our UWP project separately, allows access to additional types that wouldn’t ordinarily exist within the UWP platform itself.  By checking such things as Windows.Foundation.Metadata.ApiInformation.IsApiContractPresent, we can write code that only targets, and is only invoked on, specific classes of device.
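As a rough illustration (this is my own minimal sketch rather than Mike’s demo code, with the phone contract used purely as an example), such a contract check might look something like this in C#:

using Windows.Foundation.Metadata;

public static class DeviceFeatures
{
    // Returns true only on device families that ship the phone contract,
    // so phone-specific API calls can be safely guarded behind this check.
    public static bool HasPhoneContract()
    {
        return ApiInformation.IsApiContractPresent("Windows.Phone.PhoneContract", 1);
    }
}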

Mike then shows us his application running on the Raspberry Pi, but being controlled via a Bluetooth-connected Xbox One controller.  After this, Mike explains that Windows 10, on devices equipped with touch-sensitive screens, has built-in handwriting and “ink” recognition, so the demo proceeds to show the SpheroBall being controlled by a stylus writing on the touch-sensitive screen of Mike’s laptop.  Finally, Mike talks about Windows 10’s built-in speech recognition and shows us how, with only a few extra lines of code, we can control the SpheroBall via voice commands!
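To give a flavour of what that speech recognition code might look like (a minimal sketch of my own, not Mike’s actual demo, and the command words are simply made-up examples), the UWP speech APIs can be used roughly as follows:

using System.Threading.Tasks;
using Windows.Media.SpeechRecognition;

public static class VoiceControl
{
    // Listens for a single spoken command from a small, fixed vocabulary.
    public static async Task<string> ListenForCommandAsync()
    {
        var recognizer = new SpeechRecognizer();

        // Constrain recognition to the handful of commands we expect.
        recognizer.Constraints.Add(
            new SpeechRecognitionListConstraint(new[] { "red", "blue", "green", "stop" }));
        await recognizer.CompileConstraintsAsync();

        SpeechRecognitionResult result = await recognizer.RecognizeAsync();
        return result.Text; // e.g. "red"
    }
}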

In rounding up, Mike mentions a new part of Windows 10, an open connectivity technology allowing network discovery of APIs, called “AllJoyn”.  It’s an open, cross-platform technology and Mike says there are even light bulbs you can currently buy that will connect to your home network via AllJoyn, so you can control your home lighting via network commands!

After Mike’s session, we all left the theatre area and went back to the main bar area where there was more tea and coffee available for refreshments.  After a short 15-20 minute break, we headed back to the theatre area to take our seats for the next session, which was Jeff Burtoft’s Windows 10 Web Platform.

Jeff starts by talking about the history of Internet Explorer with its Trident engine and its strict and quirks modes – two rendering modes to render pages either in quirks mode (i.e. old style, IE specific) or in strict mode (more standards compliant).  Jeff says this was OK in the past as lots of sites were written specifically for Internet Explorer, but these days we’re pretty much all standards compliant.  As a result, Microsoft decided to completely abandon the old Internet Explorer browser, giving birth to the fully standards compliant Edge browser.  Jeff then shows a slide from a study about the proliferation of different versions of Chromium-based browsers.  Chromium is used by Google’s Chrome browser, but it’s also the basis for a lot of “stock” browsers that ship on smartphones, and many of these browsers are rarely, if ever, updated.  Jeff states that some features of IE were actually implemented exactly to the HTML specification whilst other browsers’ implementations weren’t exactly compliant with the W3C specification.  These browsers are now far more common, so Jeff states that Microsoft, with the Edge browser, will render things “like other browsers” even if not quite to spec.  This creates better parity between all possible browsers so that developing web apps is more consistent across platforms.

Jeff shows a demo using the new Web Audio API, with three different sound files being played on a web page, perfectly synchronised, each with their own volume controls.  Jeff then shows a demo of an FPS game running in the browser and controlled by an Xbox One controller.  The demo uses three major APIs for this – WebGL, the Web Audio API and the Xbox controller API – and manages a very impressive 40-50 frames per second, even though Jeff’s laptop isn’t the fastest and the demo is running entirely inside the browser.

Next, Jeff talks about how we can write an HTML/JavaScript app (à la Windows 8) that can be "bundled" with the EdgeHTML.dll library (the rendering engine of the Edge browser) and Chakra (the JavaScript engine of the Edge browser).  Apps developed like this can be "packaged" and deployed to run just like a desktop application, or can be "hosted" by using a "WebView" control – this allows a web app on a phone to look and act almost exactly like a native app.

Jeff then talks about a Microsoft developed, but open-source, JavaScript library called ManifoldJS.  This library is the simplest way to create hosted apps across platforms and devices.  It allows the hosted web app to be truly cross-platform across devices and browsers.  For example, packaging up your own HTML/JavaScript application using ManifoldJS would allow the same package to be deployed to the desktop, but also deployed to (for example) an Android-based smartphone where the same app would use the Cordova framework to provide native-like performance as well as allowing access to device specific features, such as GPS and other sensors etc.

Jeff demos packaging an application using ManifoldJS and creates a hosted web app, running as a "desktop" application on Windows 10, which has pulled down the HTML, CSS and JavaScript from a number of pages from the BBC Sport website including all assets (images etc.) and wrapped it up nicely into an application that runs in a desktop window and functions the same as the website itself.

Finally, Jeff also demos another hosted web app that uses his Microsoft Band and its gesture controls to automate sending specific, pre-composed tweets whilst drinking beer!  :)

After Jeff’s session, there was another break.  This time, we were treated to some nice biscuits to go with our tea and coffee!  After another 15 minutes it was time for the final session of the morning.  This one was slightly unusual as it had two presenters and was split into two halves.  The session was by Jonathan Seal & Mike Taulty and was Towards A More Personal Computing Experience.

Jonathan was first to the stage and started by saying that his idea behind making computing “more personal” is largely geared around how we interact with machines and devices.  He notes how interactions are, and have been until recently, very fixed – using a keyboard and mouse to control our computers has been the norm for many years.  Now, though, he says that we’re starting to open up new ways of interaction, namely speech and gesture controls.  Jonathan then talks about something called the “technological teller”.  This is the phenomenon whereby man takes an old way of doing something and applies it to new technology.  He shows a slide which indicates that the very first motorcars used by the US Mail service were steered using a rudder-like device, extended to the front of the vehicle but controlling the rear wheels.  This was done because, at that time, we were used to “steering” something with a rudder – about the only thing that had needed steering before the car was a boat!  He explains how it was many years before the invention of the steering wheel, which placed the steering controls closer to where the user would actually steer the vehicle.

Jonathan shows some videos of new gesture control mechanisms in new cars that are shortly coming onto the market.  He also shows a video of a person controlling a robotic ball (similar to the SpheroBall used earlier by Mike Taulty) using advanced facial recognition, which not only detected faces, but could detect emotional expressions in order to control the robotic ball.  For example, with a “happy” expression, the ball would roll towards the user, whilst with a “sad” or “angry” expression, the ball would roll away from the user.

After these videos, Jonathan invites Mike Taulty to the stage to show some of the facial recognition in action.   Mike first talks about something called Windows Hello, which is an alternative mechanism of authentication rather than having to enter a password.  Windows Hello works primarily on facial recognition.

Mike proceeds to show a demo of some very simple code that targets the facial recognition SDK that exists within Windows 10 and which allows us, using only a dozen or so lines of code, to get the rectangles around faces captured from the webcam.  Mike also shows how that same image can be sent to an online Microsoft Research project called Project Oxford, which further analyses the facial image and can detect all of the elements of the face (eyes, eyebrows, nose, mouth etc.) as well as provide feedback on the detected expression shown on the face (i.e. happy, sad, angry, confused etc.).  Using Project Oxford, you can, in real-time, not only detect things like emotion from facial expressions but also detect the person’s heart rate from the detected facial data!
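To give a flavour of the on-device part of this (again, my own minimal sketch rather than Mike’s code, and it assumes a webcam frame is already available as a SoftwareBitmap), the built-in Windows 10 face detection API can be used along these lines:

using System.Collections.Generic;
using System.Threading.Tasks;
using Windows.Graphics.Imaging;
using Windows.Media.FaceAnalysis;

public static class FaceFinder
{
    // Finds faces in a single captured frame and returns their bounding boxes.
    public static async Task<IList<DetectedFace>> FindFacesAsync(SoftwareBitmap frame)
    {
        // The detector only accepts certain pixel formats, so convert the frame first.
        SoftwareBitmap converted = SoftwareBitmap.Convert(frame, BitmapPixelFormat.Gray8);

        FaceDetector detector = await FaceDetector.CreateAsync();
        IList<DetectedFace> faces = await detector.DetectFacesAsync(converted);

        // Each DetectedFace exposes a FaceBox rectangle that can be drawn over the preview.
        return faces;
    }
}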

Mike says that the best detection requires a “depth camera”.  He has one attached to his laptop – an Intel RealSense camera which costs around £100.  Mike also shows usage of a Kinect camera to detect a full person, with 25 points tracked across all bodily limbs.  The Kinect camera can detect and track the entire skeletal frame of the body.  From this, software can use not only facial expressions, but entire body gestures to control software.

Mike also shows an application that interacts with Cortana – Microsoft’s personal assistant.  Mike has written some simple software that allows him to prefix spoken commands with specific words so that Cortana hands those commands over to his software, where specific logic can be performed.  Mike asks Cortana, "Picture Search - show me pictures of cats".  The “Picture Search” prefix is a specifically coded prefix which instructs Cortana to interact with Mike’s program.  From here, pictures matching “cats” are retrieved from the internet and displayed; however, using the facial and expression detection technology, Mike can narrow his search down to show only “happy cats”!
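For a rough idea of how that Cortana integration hangs together, here is a sketch of my own (not Mike’s code; the “Picture Search” phrase itself would be declared in a separate Voice Command Definition file, which isn’t shown here) of how the app receives the spoken command when Cortana activates it:

using System.Diagnostics;
using Windows.ApplicationModel.Activation;
using Windows.Media.SpeechRecognition;
using Windows.UI.Xaml;

sealed partial class App : Application
{
    // Called when Cortana hands a recognised voice command over to the app.
    protected override void OnActivated(IActivatedEventArgs args)
    {
        if (args.Kind == ActivationKind.VoiceCommand)
        {
            var voiceArgs = (VoiceCommandActivatedEventArgs)args;
            SpeechRecognitionResult result = voiceArgs.Result;

            // result.Text holds the full spoken phrase, e.g. "show me pictures of cats",
            // which the app can then use to drive its own search logic.
            Debug.WriteLine(result.Text);
        }
    }
}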

After this session, it was lunchtime.  In previous years, lunchtime at the Stacked events was not catered and lunch was often acquired at a local sandwich shop.  However, this year, with the event being bigger and better, a lunch was provided.  And it was a lovely lunch, too!  Lunch at conferences such as these is usually a “brown bag” affair with a sandwich, crisps etc., however on this occasion we were treated to a full plate of hot food!  There was a choice of three different curries: a vegetable curry, and both a mild and a spicy chicken curry, all served with pilau rice and a naan bread, along with dips, sides of salad and a poppadum!  After queueing for the food, I took a table downstairs where there was more room and enjoyed a very delicious lunch.

As I was anticipating having to provide my own lunch, I’d brought along some cheese sandwiches and a banana, but after the lovely curry for lunch, I decided that these would make a nice snack on my train ride home at the end of the day!

After our lunch-break, it was time for the first session of the afternoon and the penultimate session of the day.  This was Mary J. Foley’s Microsoft & Developers – Now & Next.

Mary starts by saying that she’s not technical.  She’s a technology journalist, but she’s been following Microsoft for nearly 30 years.  She says that, with Windows 10, she really wanted to talk about 10 things, but try as she might, she could only come up with 3.  She says that, firstly, there have been 3 CEOs of Microsoft.  And today, there are 3 business units – there used to be many more – the Windows Division, the Applications & Services Division and the Cloud & Enterprise Division.  Mary says that previous CEOs of Microsoft have “bet” on numerous things, some of which have not worked out.  With the current CEO, Satya Nadella, Microsoft now has only 3 big bets.  These are: More Personal Computing, Productivity & Business Processes and the Intelligent Cloud.  There are also 3 platforms – Windows, Office 365 and the Cloud.

Mary takes the opportunity to call out some of the technologies and projects that Microsoft is currently working on.  She first mentions the “Microsoft Graph”, which is a grand, unified API that allows access to the other APIs provided by Microsoft (i.e. the Office 365 APIs, Azure etc.).  Developers can use the Microsoft Graph to extend the functionality of Office 365 and its related applications, for example.

Mary mentions she loves codenames.  She says she found out about Project Kratos – new, as-yet-unannounced technology building on top of Office 365 and Azure called "PowerApps".  Not much is known about Project Kratos as yet; however, it appears to be a loose set of micro-services allowing non-programmers to extend and enhance the functionality of Office 365.  It sounds like a very interesting proposition for business power users.

Mary talks about the future for cloud, and something known as PaaS 2.0 (Platform as a Service) which is also called Azure Service Fabric.  This is essentially lots of pre-built micro-services that can be consumed by developers.  Mary then quickly discusses one of her favourite project codenames from the past, “Red Dog”.  She says it was the codename for what eventually became Azure.  She says the codename originally came from some of the team members who were aware of a local strip club called the “Pink Poodle”, and so “Red Dog” was born!

Next, Mary goes on to talk about Bing.  She says that Bing is not just a search engine but is actually a whole developer platform, as there are quite a lot of Bing-related APIs.  Bing has been around for quite some time; however, as a developer platform, it never really took off.  Mary says that Microsoft is now giving the Bing platform another “push”.  She mentions Project Satori, which is an “entity engine” that allows Bing and newer technology such as Cortana to better understand the web and search (i.e. a distributed knowledge graph).

Mary then proceeds to mention that Microsoft has a team known as the "deep tech team" within the Developer Division.  Their job is to go out to companies that have difficult technology problems and to help those companies solve them.  Interestingly, the team are free to solve those problems using non-Microsoft technology as well as Microsoft technologies – whatever is the best solution to the problem.  The team will even help companies who are already committed to non-Microsoft technologies (i.e. pure Linux “shops” or pure Apple shops).  She says they have a series of videos on YouTube and Channel 9 known as the “Decoded” series, and that these videos are well worth checking out.

Mary then talks about another project, codenamed “Red Stone”.  This is the codename for what is effectively Windows 11, but which will be released as a significant update to Windows 10 (similar to Threshold 2, although Red Stone is predicted to be 2 or 3 updates on from Threshold 2).  She also talks about a few rumours within Microsoft.  One is that Microsoft may produce a Surface Phone, whilst the other is that Microsoft, if Windows Phone doesn’t gain significantly more market share, may switch their mobile phone operating system to Android!

Finally, Mary talks about another imminent new technology from Microsoft called “GigJam”.  It’s billed as “a blank canvas you can fill with information and actions from your business systems.”  Mary says it’s one of those technologies that’s very difficult to explain, but once you’ve seen and used it, it’s very impressive.  Another one to watch!

After Mary’s session, there was a final coffee break, after which was the last session of the day.  This session was Martin Beeby’s My Little Edge Case And IoT.  Martin had created something called "Edge Case", built to help him solve one of his own business problems as a developer evangelist: he needed a unique and interesting "call to action" from the events that he attends.  Edge Case is a sort of arcade-cabinet-sized device that allows users to enter a URL which is sent to Microsoft’s SiteScan website in order to test the rendering of that URL.  The device is a steampunk-style machine complete with an old-fashioned typewriter keyboard for input, old pixelated LCD displays and valve-based lightbulbs for output, and even a smoke machine!

Martin outsourced the building of the machine to a specialist company.  He mentions the company name and their Italian domain, which raises a few laughs in the audience.  Martin talks about how, after the full machine was built, he wanted to create a "micro" edge case – essentially a miniaturised version of the real thing, running on a single Raspberry Pi and made so that it could fit inside an old orange juice carton!  Martin mentions that he’s placed the code for his small IoT (Internet of Things) device on GitHub.

Martin demos the final micro edge case on stage.  Firstly, he asks the audience to send an SMS message from their phones to a specific phone number which he puts up on the big screen, asking that the SMS message simply contain a URL in the text.  Next, Martin uses his mini device to connect to the internet and access an API provided by Twilio in order to retrieve, one at a time, the SMS messages previously sent by the audience members.  The little device takes each URL and displays it on a small LCD screen built into the front of the micro edge case.  Martin reads out those URLs and, after a slight delay whilst the device sends each URL to the SiteScan service, Martin finally tells us how those URLs have been rated by SiteScan, again displayed on the small LCD screen of the micro edge case.

After Martin’s session was over, we were at the end of the day.  There was a further session later in the evening whereby Mary J. Foley was recording her Windows Weekly podcast live from the event; however, I had to leave to catch my train back home.  Stacked 2015 was another great event in the IT conference calendar, and here’s hoping the event will return again in 2016!

DDD North 2015 In Review


On Saturday 24th October 2015, DDD North held its 5th annual Developer Developer Developer event.  This time the event was held in the North-East, at the University of Sunderland.

As is customary for me now, I had arrived the evening before the event and stayed with family in nearby Newcastle-Upon-Tyne.  This allowed me to get to the University of Sunderland bright and early for registration on the morning of the event.

After checking in and receiving my badge, I proceeded to the most important part of the communal reception area, the tea and coffee urns!  After grabbing a cup of coffee and waiting patiently whilst further attendees arrived, there was soon a shout that breakfast was ready.  Once again, DDD North and the University of Sunderland provided us all with a lovely breakfast baguette, with a choice of either bacon or sausage.

After enjoying my bacon baguette and washing it down with a second cup of coffee, it was soon time for the first session of the day. The first session slot was a tricky one, as all of the five tracks of sessions appealed to me, however, I could only attend one session, so decided somewhat at the last minute it would be Rik Hepworth’s The ART of Modern Azure Deployments.

The main thrust of Rik’s session is to explain Azure Resource Templates (ARTs).  Rik says he’s going to explain the What, the Why and the How of ARTs.  Rik first reminds us that every resource in Azure (from virtual networks, to storage accounts, to complete virtual machines) is handled by the Azure Resource Manager.  The Resource Manager can be driven in an ad-hoc manner to create resources using numerous fairly arcane PowerShell commands; however, for repeatability in creating entire environments of Azure resources, we need Azure Resource Templates.

Rik first explains the What of ARTs.  They’re quite simply JSON documents that conform to the required ART schema.  They can be split into multiple files: one supplies the “questions” (i.e. the template of the required resource – say, a virtual network) and the other supplies the “answers” to fill in the blanks of the question file (i.e. the parameterised IP address range of the required virtual network).  They are idempotent too, which means that the templates can be run against the Azure Resource Manager multiple times without fear of creating more resources than are required or destroying resources that already exist.

Rik proceeds with the Why of ARTs.  Well, firstly, since they’re just JSON documents and text files, they can be version controlled.  This fits in very nicely with the “DevOps” culture of “configuration as code”, managed and controlled in the same way as our application source code is.  And being JSON documents, they’re much easier to write, use and maintain than large and cumbersome PowerShell scripts composed of many PowerShell commands with difficult-to-remember parameters.  Furthermore, Rik tells us that Azure Resource Templates will eventually be the only way to manage and configure complete environments of resources within Azure.

Finally, we talk about the How of ARTs.  Well, they can be composed with Visual Studio 2013/2015; the only other tooling required is the Azure SDK and PowerShell.  Rik does mention some caveats here, as the Azure Resource API – against which the ARTs run – is currently moving and changing at a very fast pace.  As a result, there are frequent updates to both the Azure SDK and the version of PowerShell needed to support the latest Azure Resource API version.  It’s important to ensure you keep this tooling up-to-date and in sync in order to have it all work correctly.

Rik goes on to talk about how the monitoring of running resource templates has improved vastly.  We can now monitor the progress of a running (or previously run) template deployment from within the Resource Manager in the Azure portal.  This shows the complete JSON of the final template, which may have consisted of a number of “question” and “answer” files that are subsequently merged together to form the final file of configuration data.  From here, we can also inspect each of the individual resources that have been created as part of running the template, for example, virtual machines etc.

Rik then mentions something called DSC – Desired State Configuration.  This is now an engineering requirement for all MS products that will be cloud-based.  DSC effectively means that the “product” can be entirely configured by declarative things such as scripts, command-line commands and parameters etc.  Everything can be set and configured this way without needing to resort to any GUI.

Rik talks about how to start creating your own templates.  He says the best place to start is probably the Azure Quickstart Templates that are available from a GitHub repository.  They contain both very simple templates, to ease you into getting started with something simple, and some quite complex templates which will help should you need to create a template to deploy a complete environment of numerous resources.  Rik also mentions that next year will see the release of something called the “Azure Stack” which will make it even easier to create scripts and templates that automate the creation and management of your entire IT infrastructure, both in the cloud and on-premise, too.

As well as supporting basic parameterization of values within an Azure Resource Template, you can also define entire sections of JSON that define a complete resource (i.e. an entire virtual machine complete with an instance of SQL Server running on it).  This JSON document can then be referenced from within other ART files, allowing individual resources to be scripted once and reused many times.  As part of this, Azure resources support many different types of extensions for extending state configuration into other products.  For example, there is an extension that allows an Azure VM to be created with an Octopus Deploy tentacle pre-installed, as well as an extension that allows a Chef client to be pre-installed on the VM.

Rik shows us a sample layout of a basic Azure Resource Template project within Visual Studio.  It consists of 3 folders: Scripts, Templates and Tools.  There's a blank template in the Templates folder and this defines the basic "shape" of the template document.  To get started with a simple template, for example, a Windows VM needs a storage account (which can be an existing one, or a newly created one) and a virtual network before the VM itself can be created.

We can use the GUI tooling within Visual Studio to create the basic JSON document with the correct properties, and can then manually tweak the values required in order to script our resource creation.  This is often the best way to get started.  Once we’ve used the GUI tooling to generate the basics of the template, we can then remove duplication by "collapsing" lots of the properties and extracting them into separate files to be included within the main template script (i.e. the deploy location is repeated for each and every VM; if we’re deploying multiple VMs, we can remove this duplication by extracting it into a separate file that is referenced by each VM).

One thing to remember when running and deploying ARTs, Rik warns us, is that the default lifetime of an Azure Access Token is only 1 hour.  Azure Access Tokens are required by the template in order to prove authorisation for creating Azure resources.  However, when an ART is deploying a complete environment consisting of numerous resources, this can be a time-consuming process – often taking a few hours.  For this reason, it’s best to extend the lifetime of the Azure Access Tokens, at least during development of the templates, otherwise the tokens will expire during the running of the template, thereby making the resource creation fail.

Rik wraps up with a summary and opens the floor to questions.  One question that is posed is whether existing Azure resources can be “reverse-engineered” into ART scripts.  Rik states that so long as the existing resources are v2 resources (that have been created with the Azure Resource Manager) then you can turn these resources into templates, but if the existing resources are v1 (also known as Classic resources, created using the older Azure Service Management) they can't be reverse-engineered into templates.

After a short coffee break back in the main communal area, it’s time for the second session of the day.  For this session, I decided to go with Gary Short’s Deep Dive into Deep Learning.

Gary’s session was all about the field of data science and of things like neural networks and deep learning.  Gary starts by asking who knows what Neural Networks are, what Deep Learning is, and the difference between them.  Not very many people know the difference here, but Gary assures us that by the end of his talk, we all will.

Gary tells us that his talk’s agenda is to look at Neural Networks, the first real mechanism by which “deep learning” was implemented, and at how today’s “deep learning” has improved upon those early Neural Networks.  We first learn that the phrase “deep learning” is itself far too broad to really mean anything!

So, what is a Neural Network?  It’s a “thing” in data science.  It’s a statistical learning model and can be used to estimate functions that can depend on a large number of inputs.  Well, that’s a rather dry explanation, so Gary gives us an example: the correlation between temperature over the summer months and ice cream sales over the summer months.  We could use a Neural Network to predict the ice cream sales based upon the temperature variance.  This is, of course, a very simplistic example and we could simply guess ourselves that as the temperature rises, ice cream sales would predictably rise too.  It’s a simplistic example as there’s exactly one input and exactly one output, so it’s very easy for us to reason about the outcome without really relying upon a Neural Network.  However, in a more realistic example using “big data”, we’d likely have hundreds if not many thousands of inputs for which we wish to find a meaningful output.

Gary proceeds to explain that a Neural Network is really a weighted directed graph.  This is a graph of nodes and the connections between those nodes.  The connections are in a specific direction, from one node to another, and that same node can have a connection back to the originating node.  Each connection has a “weight”, or a probability.  In the diagram to the left we can see that node A has a connection to node E and also a separate connection to node F.  The “weight” of the connection to node F is 0.9 whilst the weight of the connection to node E is 0.1.  This means there’s a 10% chance that a message or data coming from node A will be directed to node E and a 90% chance that a message coming from node A will be directed to node F.  The combination of the nodes and the connections between them gives us the Neural Network.
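To make the idea concrete (this is purely my own illustrative sketch, not anything Gary showed), such a weighted directed graph could be modelled very simply in C#, with the weights treated as probabilities when choosing the next node:

using System;
using System.Collections.Generic;
using System.Linq;

public class Node
{
    public string Name { get; }
    public List<(Node Target, double Weight)> Connections { get; } = new List<(Node, double)>();

    public Node(string name) => Name = name;

    // Picks the next node at random, respecting the connection weights;
    // a weight of 0.9 means a 90% chance of that connection being followed.
    public Node Next(Random random)
    {
        double roll = random.NextDouble();
        double cumulative = 0.0;
        foreach (var (target, weight) in Connections)
        {
            cumulative += weight;
            if (roll <= cumulative)
                return target;
        }
        return Connections.Last().Target;
    }
}

So node A from the example above would simply hold two connections: one to node E with a weight of 0.1 and one to node F with a weight of 0.9.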

Gary tells us how Neural Networks are not new; they were invented in 1943 by two mathematicians, Warren McCulloch and Walter Pitts.  Back then, they weren’t referred to as Neural Networks, but were known as “Threshold Logic”.  Later on, in the late 1940s, Donald Hebb created a hypothesis based on “neural plasticity”, which is the ability of a Neural Network to “heal itself” around “injuries” or bad connections between nodes.  This is now known as Hebbian Learning.  In 1958, mathematicians Farley and Wesley A. Clark used a calculator to simulate a Hebbian Machine at MIT (Massachusetts Institute of Technology).

So, just how did today’s “Deep Learning” improve upon Neural Networks?  Well, Neural Networks originally had two key limitations.  Firstly, they couldn't process exclusive-or (XOR) logic in a single-layer network, and secondly, computers (or rather calculators) simply weren't powerful enough to perform the extensive processing required.  Eventually, in 1975, a mathematician named Werbos discovered something called “back propagation”, which is the backwards propagation of error states, allowing originating nodes to learn of errors further down a processing chain and perform corrective measures (self-learning) to mitigate further errors.  Back propagation helped to solve the XOR problem.  It was only through the passage of a large amount of time, though, that yesterday’s calculators became today’s computers – which got ever more powerful with every passing year – and allowed Neural Networks to come into their own.  So, although people in academia were teaching the concepts of Neural Networks, they weren’t really being used in practice; people preferred alternative learning mechanisms like “Support Vector Machines” (SVMs), which could work with the level of computing power available at that time.  With the advent of more powerful computers, however, Neural Networks really started to take off after the year 2000.

So, as Neural Networks started to get used, another limitation was found with them: it took a long time to “train” the model with the input data.  Gary tells us of a Neural Network in the USA, used by the USPS (United States Postal Service), that was designed to help recognise hand-written zip codes.  Whilst this model was effective at its job, it took 3 full days to train the model with input data!  This had to be repeated continually as new “styles” of hand-writing needed to be recognised by the Neural Network.

Gary continues by telling us that by the year 2006, the phrase “deep learning” had started to take off.  This arose out of the work of two mathematicians, Geoffrey Hinton and Ruslan Salakhutdinov, which showed that many-layered, feed-forward Neural Networks could be trained far more effectively, thus reducing the time required to train the network.  So, “deep learning” is really just modern-day Neural Networks, but ones that have been vastly improved over the original inventions of the 1940s.

Gary talks about generative models and stochastic models.  Generative models will “generate” things in a random way, whilst stochastic models will generate things in an unpredictable way – very often, this amounts to the same thing.  It’s this random unpredictability that exists in the problem of voice recognition.  Voice recognition has been a largely “solved” problem since around 2010, and it has given rise to Apple’s Siri, Google’s Google Now and, most recently and apparently most advanced, Microsoft’s Cortana.

At this point, Gary shows us a demo of some code that will categorise iris plants based upon a dataset of a number of different criteria.  The demo is implemented using the F# language; however, Gary states that the "go to" language for data science is R.  Gary says that whilst it’s powerful, it’s not a very nice language, and this is primarily put down to the fact that whilst languages like C, C#, F# etc. are designed by computer scientists, R is designed by mathematicians.  Gary’s demo can use F# as it has a “type provider” mechanism which allows it to “wrap”, and therefore largely abstract away, the R language.  The type provider can be downloaded from NuGet, and you’ll also need the FsLab NuGet package.

Gary explains that the categorisation of irises is the canonical example for data science categorisation.  He shows the raw data and how the untrained system initially thinks that there are three classifications of irises when we know there's only really two.  Gary then explains that, in order to train our Neural Network to better understand our data, we need to start by "predicting the past".  This is simply what it says: for example, by looking at the past results of (say) football matches, we can use that data to help predict future results.

Gary continues and shows how, after "predicting the past" and using the resulting data to train the Neural Network, we can once again examine the original data.  The graph this time correctly shows only two different categorisations of irises.  Looking closer at the results, we can see that of a dataset containing numerous metrics for 45 different iris plants, our Neural Network was able to correctly classify 43 out of the 45 irises, with only two failures.  Looking into the specific failures, we see that they couldn’t be classified because their data was very close between the two different classifications.  Gary says we could probably “fine tune” our Neural Network by looking further into the data and could well eradicate the two classification failures.

After Gary’s session, it’s time for another tea and coffee break in the communal area, after which it’s time for the 3rd and final session before lunch.  There had been a couple of last-minute session cancellations due to speaker ill health, and one of those sessions was unfortunately the one I had wanted to attend in this particular time slot, Stephen Turner’s “Be Reactive, Think Reactive”.  That session was replaced with Robert Hogg delivering a presentation on Enterprise IoT (Internet of Things); however, the session I decided to attend was Peter Shaw’s Microservice Architecture, What It Is, Why It Matters And How To Implement It In .NET.

Peter starts his presentation with a look at the talk’s agenda.   He’s going to define what Microservices are and their benefits and drawbacks.  He’ll explain how, within the .NET world, OWIN and Katana help us with building Microservices, and finally he is going to show a demo of some code that uses OWIN running on top of IIS7 to implement a Microservice.

Firstly, Peter tells us that Microservices are not a software design pattern, they’re an architectural pattern.  They represent a 100-foot view of your entire application, not the 10-foot view, and moreover, Microservices provide a set of guidelines for deployment of your project.

Peter then talks about monolithic codebases and how we scale them by duplicating entire systems.  This can be wasteful when you only need to scale up one particular module, as you’ll end up duplicating far more than you need.  Microservices are about being able to scale only what you need, but you need to find the right balance of how much to break down the application into its constituent modules or discrete chunks of functionality.  Break it down too much and you'll get nano-services – a common anti-pattern – and will then have far too much complexity in managing too many small parts.  Break it down too little, and you’re not left with Microservices.  You’ve still got a largely monolithic application, albeit a slightly smaller one.

Next, Peter talks about how Microservices communicate to each other.  He states how there’s two schools of thought to approaching the communication problem.  One school of thought is to use an ESB (Enterprise Service Bus).  The benefits of using an ESB are that it’s a robust communications channel for all of the Microservices, however, a drawback is that it’s also a single point of failure.  The other school of thought is to use simple RESTful/HTTP communications directly between the various Microservices.  This removes the single point of failure, but does add the overhead of requiring the ability of each service to be able to “discover” other services (their availability and location for example) on the network.  This usually involves an additional tool, something like Consul, for example.

Some of the benefits of adopting a Microservices architecture are that software development teams can be formed around each individual service.  These would be full teams with developers, project managers etc., rather than having specific technical silos within one large team.  Other benefits are that applications become far more flexible and modular and can be composed and changed easily by simply swapping out one Microservice for another.

Some of the drawbacks of Microservices are that they have a potentially higher maintenance cost as your application will often be deployed across different and more expansive platforms/servers.  Other drawbacks are the potential for “data islands” to form.  This is where your application’s data becomes disjointed and more distributed due to the nature of the architecture.  Furthermore, Microservices, if they are to be successful, will require extensive monitoring.  Monitoring of every available metric of the applications and the communications between them is essential to enable effective support of the application as a whole.

After this, Peter moves on to show us some demo code, built using OWIN and NancyFX.  OWIN is the Open Web Interface for .NET and is an open standard for decoupling .NET web applications from the underlying web server that powers the application.  Peter tells us that Microsoft’s own implementation of the OWIN standard is called Katana.  NancyFX is a lightweight web framework for .NET, and is built on top of the OWIN standard, thus decoupling the Nancy code from the underlying web server (i.e. there are no direct references to HttpContext or other such objects).

Peter shows us how simple some of Nancy’s code can be:

public dynamic Something()
{
    var result = GetSomeData();
    // Returning an integer is treated by Nancy as an HTTP status code.
    return result == null ? (dynamic)404 : Response.AsJson(result);
}

The return statement is the most interesting part.  Since the method returns a dynamic type, returning an integer that has the same value as an HTTP status code will be inferred by the Nancy framework to mean that the method should actually return that status code!
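To put that handler in context, here is a minimal sketch of my own (rather than Peter’s actual demo; the route and the GetSomeData helper are purely hypothetical) of how it would be wired up inside a Nancy module:

using Nancy;

public class SomethingModule : NancyModule
{
    public SomethingModule()
    {
        // Nancy routes are declared in the module's constructor.
        Get["/something"] = _ => Something();
    }

    public dynamic Something()
    {
        var result = GetSomeData();
        // Returning an integer is treated by Nancy as an HTTP status code.
        return result == null ? (dynamic)404 : Response.AsJson(result);
    }

    private object GetSomeData()
    {
        // Stand-in for whatever data access the real microservice performs.
        return new { Message = "Hello from a microservice" };
    }
}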

Peter shows us some more code, most of which is very simple and tells us that the complete demo example is available to download from GitHub.

After Peter’s talk wrapped up, it was time for lunch.  Lunch at the DDD events is usually a “brown bag” affair with a sandwich, crisps, some fruit and/or chocolate.  The catering at DDD North, and especially at the University of Sunderland, is always excellent and this year was no exception.  There was a large selection of various combinations of crisp flavours, chocolate bars and fruit along with a large selection of very nice sandwiches, including some of the more “basic” sandwich fillings for fusspots like me!  I don’t like mayonnaise, so pre-packed sandwiches are usually a tricky proposition.  This year, though, they had “plain” cheese and ham sandwiches with no additional condiments, which was excellent for me.

The excellent food was accompanied by a drink. I opted for water.  After collecting my lunch, I went off to find somewhere to sit and eat as well as somewhere that would be fairly close to a power point as I needed to charge my laptop.

I duly found a spot that allowed me to eat my lunch, charge my laptop and look out of the window onto the River Wear on what was a very nice day outside in sunny Sunderland!

After fairly quickly eating my lunch, it was time for some lunchtime Grok Talks.  These are the 15-minute, usually fairly informal talks that often take place over the lunch hour at many of these types of conferences, and especially at DDD conferences.  During the last few DDDs that I’d attended, I’d missed most of the Grok Talks for various reasons, but today, having already consumed my delicious lunch, I decided that I’d try to take them in.

By the time I’d reached the auditorium for the Grok Talks, I’d missed the first few minutes of the talk by Jeff Johnson all about Microsoft Azure and the role of Cloud Solution Architect at Microsoft.

Jeff first describes what Azure is, and explains that it’s Microsoft’s cloud platform offering numerous services and resources to individuals and companies of all sizes to be able to host their entire IT infrastructure – should they so choose – in the cloud. 

Next, Jeff shows us some impressive statistics on how Azure has grown in only a few short years.  He says that the biggest problem that Microsoft faces with Azure right now is that they simply can’t scale their infrastructure quickly enough to keep up with demand.  And it’s a demand that is continuing to grow at a very fast rate.  He says that Microsoft’s budget for expanding and growing Azure is around 5-6 billion dollars per annum, and that Azure has a very large number of users even today.

Jeff proceeds by talking about the role of Cloud Solutions Architect within Microsoft.  He explains that the role involves working very closely with customers, or more accurately potential customers to help find projects within the customers’ inventory that can be migrated to the cloud for either increased scalability, general improvement of the application, or to make the application more cost effective.  Customers are not charged for the services of a Cloud Solutions Architect, and the Cloud Solutions Architects themselves seek out and identify potential customers to see if they can be brought onboard with Azure.

Finally, Jeff talks about life at Microsoft.  He states how Microsoft in the UK has a number of “hubs”, one each in Edinburgh, Manchester and London, but that Microsoft UK employees can live anywhere.  They’ll use the “hub” only occasionally, and will often work remotely, either from home or from a customer’s site.

After Jeff’s talk, we had Peter Bull and his In The Groove talk, all about developing for Microsoft’s Groove Music.  Peter explains that Groove Music is Microsoft’s equivalent to Apple’s iTunes and Google’s Google Play Music and was formerly called Xbox Music.  Peter states that Groove Music is very amenable to developers creating new applications around it, as it offers both an API and an SDK, the SDK effectively being a wrapper around the raw API.  Peter then shows us a quick demo of some of the nice touches of the API, which include the retrieval of album artwork.  The API allows retrieving album artwork in varying sizes and he shows us how, when requesting a small version of some album artwork that, for example, contains a face, the Groove API will use face detection algorithms to ensure that when dynamically resizing the artwork, the face remains visible and is not cropped out of the picture.

The next Grok Talk was by John Stovin and was all about a unit testing framework called Fixie.  John starts by asking: why another unit testing framework?  He explains that Fixie is quite different from other unit testing frameworks such as NUnit or xUnit.  The creator of Fixie, Patrick Lioi, stated that he created Fixie as he wanted as much flexibility in his unit testing framework as he had with the other frameworks he was using in his projects.  To this end, Fixie does not ship with any assertion framework, unlike NUnit and xUnit, allowing each Fixie user to choose his or her own assertion framework.  Fixie is also very simple in how you author tests.  There are no [Test]-style attributes and no using Fixie statements at the top of test classes.  Each test class is simply a standard public class and each test method is simply a public method whose name ends in “Test”.  Test setup and teardown is similar to xUnit in that it simply uses the class constructor and Dispose methods to perform these functions.
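So, following the conventions John described, a Fixie test class might look something like the following sketch of my own (not John’s example; the Calculator type is hypothetical, and the assertion is done by hand since Fixie ships with no assertion framework):

using System;

// Hypothetical type under test.
public class Calculator
{
    public int Add(int x, int y) => x + y;
}

// A plain public class; public methods whose names end in "Test" are picked up as tests.
public class CalculatorTests : IDisposable
{
    private readonly Calculator calculator = new Calculator(); // setup, constructor-style

    public void AdditionTest()
    {
        if (calculator.Add(2, 3) != 5)
            throw new Exception("Expected 2 + 3 to equal 5.");
    }

    public void Dispose()
    {
        // Teardown goes here, xUnit-style.
    }
}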

Interestingly, Fixie’s behaviour can be altered by creating a class that inherits from a “Convention” base class.  For example, a custom convention class can be implemented very simply to alter the behaviour of Fixie to be more like that of NUnit, with test classes decorated by a [TestFixture] attribute and test methods decorated by a [Test] attribute.  Conventions can control the discovery of tests, how tests are parameterized, how tests are executed and also how test output is displayed.

Fixie currently has lots of existing test-runners, including a command-line runner and a runner for the Visual Studio test explorer.  There’s currently a plug-in to allow ReSharper 8 to run Fixie tests, and a new plug-in/extension is currently being developed to work with ReSharper 10.  Fixie is open-source and is available on GitHub.

After John’s talk, we had the final Grok Talk of the lunch time, which was Steve Higgs’s ES6 Right Here, Right Now.  Steve’s talk is about how developers can best use and leverage ES6 (ECMAScript 6, aka JavaScript 2015) today.  Steve starts by stating that, contrary to some beliefs, ES6 is no longer the “next” version of JavaScript, but is actually the “current” version.  The standard has been completely ratified, but most browsers don’t yet fully support it.

Steve talks about some of the nice features of ES6, many of which previously had to be implemented with 3rd-party libraries and frameworks.  ES6 has “modules” baked right in, so there’s no longer any need to use a 3rd-party module manager.  However, if we’re targeting today's browsers and writing JavaScript targeting ES5, we can use 3rd-party libraries to emulate these new ES6 features (for example, require.js for module management).

Steve continues by stating that ES6 will now (finally) have built-in classes.  Unfortunately, they’re not “full-featured” classes like we get in many other languages (such as C#, Java etc.) as they only support constructors and public methods, and have no support for things like private methods yet.  Steve does state that private methods can be “faked” in a bit of a nasty, hacky way, but ES6 classes definitely do not have support for private variables.  Steve states that this will come in the future, in ES7.

ES6 gets “arrow functions”, which are effectively lambda functions that we know and love from C#/LINQ, for example:

var a = ["Hydrogen", "Helium", "Lithium", "Beryllium"];

// Old method to return the length of each element.
var a2 = a.map(function(s) { return s.length; });

// New method using the new "arrow functions".
var a3 = a.map(s => s.length);

Steve continues by stating that ES6 introduces the let and const keywords.  let gives a variable block scoping rather than JavaScript’s default function scoping.  This is a welcome addition, and helps those of us who are used to working with languages such as C# etc. where our variable scoping is always block scoped.  const allows JavaScript to declare a constant.

ES6 now also has default parameters which allow us to define a default value for a function’s parameter in the event that code calling the function does not supply a value:

function doAlert(a = 1) {
    alert(a);
}

// Calling doAlert without passing a value will use the
// default value supplied in the function definition.
doAlert();   // alerts 1

Steve also mentions how ES6 now has string interpolation, also known as “template strings”, so that we can finally write code such as this:

// Old way of outputting a variable in a string.
var a = 5;
var b = 10;
console.log("Fifteen is " + (a + b) + " and\nnot " + (2 * a + b) + ".");

// New ES6 way with string interpolation, or "template strings".
var a = 5;
var b = 10;
console.log(`Fifteen is ${a + b} and\nnot ${2 * a + b}.`);

One important point to note with string interpolation is that your string must be quoted using backticks (`) rather than the normal single-quote (‘) or double-quote (“) characters!  This is something that will likely catch a lot of people out when first using this new feature.

Steve rounds off his talk by stating that there’s lots of other features in ES6, and it’s best to simply browse through them all on one of the many sites that detail them.  Steve says that we can get started with ES6 today by using something like Babel, which is a JavaScript compiler (or transpiler) that allows you to transpile JavaScript code written for ES6 into JavaScript code that is compatible with the ES5 fully supported by today’s browsers.

After Steve’s talk, the Grok Talks were over, and with it the lunch break was almost over too.  There were a few minutes left to head back to the communal area and grab a cup of coffee and a bottle of water to keep me going through the two afternoon sessions – the two final sessions of the day.

The first session of the afternoon was another change to the advertised session due to the previously mentioned cancellations.  This session was Pete Smith’s Beyond Responsive Design.  Pete’s session was aimed at design for modern web and mobile applications.  Pete starts by looking at a brief history of web development.  He says that the web started solely on the desktop and was very basic at first, but very quickly grew to become better and better.  Eventually, the smartphone came along and all of these good-looking desktop websites suddenly didn’t look so good anymore.

So then, Responsive Design came along.  This attempted to address the disconnect and inconsistencies between the designs required for the desktop and the designs required for mobile.  However, Responsive Design brought with it its own problems.  Our designs became awash with extensive media queries in order to determine which screen size we were rendering for, and became dependent upon homogeneous (and often large) frameworks such as Zurb’s Foundation and Bootstrap.  Pete says that this is the focus of going “beyond” responsive design.  We can solve these problems by going back to basics and simplifying what we do.

So, how do we know if we've got a problem?  Well, Pete explains that there are some sites that work great on both desktop and mobile, but overall, they’re not as widespread as we would like given where we are in our web evolution.  Pete then shows some of the issues.  Firstly, we have what Pete calls the "teeny tiny" problem.  This is where the entire desktop site is scaled and shrunk down to display on the smaller mobile screen size.  Then there's another problem that Pete calls "Indiana’s phone and the temple of zoom", which is where a desktop site, rendered on a mobile screen, can be zoomed in continuously until it becomes completely unusable.

Pete asks “what is a page on today’s modern web?”  Well, he says there’s no such thing as a single-page application.  There’s really no difference between SPAs and non-SPA sites that use some JavaScript to perform AJAX requests to retrieve data from the server.  Pete states that there are really no good guiding design principles for the web.  When we’re writing apps for Android or iOS, there’s a wealth of design principles that developers are expected to follow, and it’s very easy for them to do so – a shining example of this is Google’s Material Design.  When we’re designing for the web, though, not so much.

So how do we improve?  Pete says we need to “design from the ground up”.  We need to select user-interface patterns that work well on both the desktop and on mobile.  Pete gives examples and states that UI elements like modal pop-ups and alerts work great on the desktop, but often not so well on mobile.  An example of a UI pattern that does work very well on both platforms is the “panes” (sometimes referred to as property sheets) that slide in from the side of the screen.  We see this used extensively on mobile due to the limited screen real estate, but not so much on the desktop, despite the pattern working well there also.  A great example of effective use of this design pattern is the new Microsoft Azure Preview Portal.  Pete states we should avoid using frameworks like Bootstrap or Foundation.  We should do it all ourselves, and we should only revert to “responsive design” when there is a specific pattern that clearly works better on one medium than another and where no other pattern exists that works well on all mediums.

At this point in the talk, Pete moves on to show us some demo code for a website that he’s built to show off the very design patterns and features that he’s been discussing.  The code is freely available from Pete’s GitHub repository.  Pete shows his website first running on a desktop browser, then he shows the same website running on an iPad and then on a Smartphone.  Each time, due to clever use of design patterns that work well across screens of differing form factors, the website looks and feels very similar.  Obviously there are some differences, but overall, the site is very consistent.

Pete shows the code for the site and examines the CSS/LESS styles.  He says that absolute positioning is essential for creating these kinds of sites.  It allows us to ensure that certain page elements (i.e. the left-hand menu bar) are always displayed correctly and in their entirety.  He then shows how he's used CSS3 transforms to implement the slide in/out panels or “property sheets”, simply transforming them by either +100% or -100% of their horizontal position to display them to the left or right of the element’s original, absolute position.  Pete notes how there’s extensive use of HTML5 semantic tags, such as <nav>, <content> and <footer>.  Pete reminds us that there’s no real behaviour attached to using these tags, but that they make things far easier to reason about than simply using <div> tags for everything.

Finally, Pete summarises and says that if there’s only one word to take away from his talk, it’s “Simplify”.  He talks about the future and mentions that the next “big thing” to help with building sites that work well across all of the platforms we use to consume the web is Web Components.  Web Components aid encapsulation and re-usability.  They’re available to use today, however they’re not yet fully supported: currently only the Chrome and Opera browsers support them natively, and other browsers need a third-party JavaScript library, Polymer.js, in order for them to work.

IMG_20151024_155657 The final session of the day was Richard Fennell’s Monitoring and Addressing Technical Debt With SonarQube.

Richard starts his session by defining technical debt.  He says it’s something that builds up very slowly and almost sneaks up on you.  It’s the little “cut corners” of our codebases where we’ve implemented code that seems to do the job, but is sub-optimal.  Richard says that technical debt can grow to become so large that it can stop you in your tracks and prevent you from progressing with a project.

He then discusses the tools we currently have to address technical debt, specifically within the world of Microsoft’s tooling.  Well, firstly we have compiler errors.  These are very easy to fix as we simply can’t ship our software with compiler errors, and they provide immediate feedback to help us fix the problem.  Whilst compiler errors can’t be ignored, Richard says that it’s really not uncommon to come across projects that have many compiler warnings.  Compiler warnings aren’t errors as such, and as they don’t necessarily prevent us from shipping our code, we can often live with them for a long time.  Richard mentions the tools Visual Studio Code Analysis (previously known as FxCop) and StyleCop.  Code Analysis/FxCop works on your compiled code to determine potential problems or maintenance issues with the code, whilst StyleCop works on the raw source code, analysing it for style issues and conformance against a set of coding standards.  Richard says that both of these tools are great, but offer only a simple “snapshot in time” of the state of our source code.  What we really need is a much better “dashboard” to monitor the state of our code over time.
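
To make the distinction a little more concrete, here’s a tiny, purely illustrative C# snippet (the class and member names are my own invention, not from Richard’s talk) showing the sort of thing each tool tends to pick up:

// Purely illustrative example.
public class OrderProcessor
{
    // Code Analysis (FxCop) would typically flag a publicly visible mutable field like this,
    // suggesting a property be used instead.
    public int retryCount;

    // StyleCop would typically complain about the missing XML documentation and naming style here,
    // whilst Code Analysis would typically flag the unused 'discount' parameter.
    public decimal CalculateTotal(decimal price, int quantity, decimal discount)
    {
        return price * quantity;
    }
}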

Richard asks, “So what would Microsoft do?”.  He continues to explain that the “old” Microsoft would go off and create their own solution to the problem, however, the “new” Microsoft, being far more amenable to adopting already-existing open source solutions, has decided to adopt the existing de-facto standard solution for analysing technical debt, a product called SonarQube by SonarSource.

Richard introduces SonarQube and states that, firstly, we must understand that it’s a Java-based product.  This brings some interesting “gotchas” for .NET developers when trying to set up a SonarQube solution, as we’ll see shortly.  Richard states that SonarQube’s architecture is based upon a backend database that stores the results of its analysis, plus plug-in analyzers that analyze source code.  Of course, being a Java-based product, SonarQube’s analyzers are written in Java too.  The analyzers examine our source code and write their findings into the SonarQube database, and a web-based front-end then renders a dashboard of this data in ways that help us to "visualise" our technical debt.  Richard points out that analyzers exist for many different languages and technologies, but he also offers a word of caution: not all analyzers are free and open source.  He states that the .NET ones currently are, but (for example) the COBOL and C++ analyzers have a cost associated with them.

Richard then talks about getting an installation of SonarQube up and running.  As it’s a Java product, there’s very little in the way of nice wizards during the installation process to help us; lots of the configuration is performed via manual editing of configuration files.  Due to this, Microsoft’s ALM Rangers group have produced a very helpful guide to installing the product.  The system requirements for SonarQube are a server running either Windows or Linux with a minimum of 1GB of RAM.  The server will also need the .NET Framework 4.5.2 installed, as this is required by the MSBuild runner which is used to run the .NET analyzer.  And as it’s a Java product, Java is obviously required on the server too – either Oracle’s JRE 7 (or higher) or OpenJDK 7 (or higher).  For the required backend database, SonarQube will, by default, install a database called H2, however this can (and probably should) be changed to something more suited to .NET folks, such as Microsoft’s SQL Server.  It’s worth noting that the free SQL Server Express will work just fine too.  Richard points out that there are some “gotchas” around the setup of the database as well.  As a Java-based product, SonarQube uses JDBC drivers to connect to the database, and these place some restrictions on the database itself: the database must have its collation set to Case Sensitive (CS) and Accent Sensitive (AS).  Without this, it simply won’t work!
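
As a minimal sketch of that last point (the database name here is purely illustrative), creating a suitable SQL Server database with a case- and accent-sensitive collation looks something like this:

CREATE DATABASE SonarQube COLLATE Latin1_General_CS_AS;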

After setup of the software, Richard explains that we’ll only get an analyzer and runner for Java source code out-of-the-box.  From here we can download and install the analyzer and runner we’ll need for analyzing C# source code.  He then shows how we need to add a special file called to the root of our project that will be analyzed.  This file contains four key values that are required in order for SonarQube to perform it’s analysis.  Ideally, we’d set up our installation of SonarQube on a build server, and there we’d also edit the SonarQube.Analyzers.xml file to reflect the correct database connection string to be used.
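
As a rough sketch of how an analysis run is then typically kicked off with the MSBuild runner (the project key, name and version values here are purely illustrative), the runner’s begin and end steps simply wrap a normal build:

MSBuild.SonarQube.Runner.exe begin /k:"my-project-key" /n:"My Project" /v:"1.0"

msbuild MySolution.sln

MSBuild.SonarQube.Runner.exe end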

Richard now moves on to showing us a demo.  He uses the OWASP demo project, WebGoat.NET, for his demonstration.  This is an intentionally “broken” ASP.NET application which allows SonarQube to highlight multiple technical debt issues with the code.  Richard shows SonarQube being integrated into Visual Studio Team Foundation Server 2015 as part of its build process.  He further explains that SonarQube analyzers are based upon processing complete folders or wildcards for file names.  He shows the default SonarQube dashboard and explains how many of the issues reported can often be found in the various “standard” libraries that we frequently include in our projects, such as jQuery etc.  As a result of this, it’s best to really think about how we structure our solutions, as it’s beneficial to keep third-party libraries in folders separate from our own code.  This way, we can instruct SonarQube to ignore those folders.

Richard shows us the rules that exist in SonarQube.  There are a number of built-in rules provided by SonarQube itself, but the C# analyzer plug-in will add many of its own.  The built-in SonarQube rules are called the “Sonar Way” rules and represent the expected Sonar way of writing code.  These are very Java-centric, so they may only be of limited use when analyzing C# code.  The various C# rule-sets are obviously more aligned with the C# language.  Some rules are prefixed with “CA” in the rule-set list – these are the FxCop rules – whilst other rules are prefixed with “S”.  These are the C# language rules and use Roslyn to perform the code analysis (hence the requirement for the .NET Framework 4.5.2 to be installed).

Richard continues by showing how we can set up “quality gates” to show whether one of our builds is passing or failing in quality.  This is an analysis of our code by SonarQube as part of the build process.  We can set conditions on the history of the analyses that have been run to ensure that, for example, each successive build has no more than 98% of the known bugs of the previous release.  This way, we can reason that our builds are getting progressively better in quality each time.

Finally, Richard sums up by introducing a new companion product to SonarQube called SonarLint.  SonarLint is based upon the same .NET Compiler Platform, Roslyn, that provides SonarQube’s analysis, however SonarLint is designed to run inside the Visual Studio IDE and provides near real-time feedback on the quality of our source code as we’re editing it.  SonarLint is open source and available on GitHub.
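
To give a flavour of that feedback, here’s a tiny, purely illustrative C# snippet (names invented for this sketch) containing the sort of issues a Roslyn-based analyzer such as SonarLint will typically underline as you type – an unused local variable and an empty catch block that silently swallows the exception:

public class OrderSaver
{
    public void SaveOrder(string orderJson)
    {
        var unusedLength = orderJson.Length;   // unused local variable - typically flagged

        try
        {
            System.IO.File.WriteAllText("order.json", orderJson);
        }
        catch (System.Exception)
        {
            // empty catch block swallowing the exception - typically flagged
        }
    }
}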

IMG_20151024_170210 After Richard’s talk was over, it was time for all of the conference attendees to gather in the main lecture hall for the final wrap-up presentation.  During the presentation, the various sponsors were thanked for all of their support.  The conference organisers also mentioned how there had been a number of “no-shows” at the conference – people who’d registered to attend but hadn’t shown up on the day and hadn’t cancelled their tickets, despite repeated communication asking those who could no longer attend to do so.  The organisers told us that every no-show not only costs the conference around £15 per person but also prevents those on the waiting list from being able to attend, and there was quite an extensive waiting list for the conference this year.  Here’s hoping that future DDD conferences have fewer no-shows.

It was mentioned that DDD North is now the biggest of all of the regional DDD events, with approximately 450 attendees this year – a growth on last year’s numbers – and still over 100 more people on the waiting list.  The organisers told us that, were it not for space and venue size considerations, they could have run the conference for around 600-700 people.  Those are quite some numbers, and they show just how popular the DDD conferences, and especially the DDD North conference, are.

One especially nice touch was that I received a quick mention from Andy Westgarth, the main organiser of DDD North, during the final wrap-up presentation for the use of one of my pictures in an article that had appeared in the local newspaper, the Sunderland Echo, that very day.  The picture used was one I’d taken in the same lecture hall at DDD North 2013, two years earlier.  The article is available to read online, too.

After the wrapping up came the prize draw.  As always, there were some nice prizes to be given away by the conference organisers themselves as well as by the individual sponsors, including a Nexus 9 tablet, a Surface Pro 3 and a whole host of other goodies.  As usual, I didn’t win anything, but I’d had a fantastic day at yet another superb DDD North.  Many thanks to the organisers and the various sponsors for enabling such a brilliant event to happen.

IMG_20151024_175352 But…  it wasn’t over just yet!  There is usually a “Geek Dinner” after the DDD conferences, however on this occasion there was to be a food and drink reception graciously hosted by Sunderland Software City.  So, as we shuffled out of the Sunderland University campus, I headed to my car to drive the short distance to Sunderland Software City.

IMG_20151024_175438 Upon arrival there was, unfortunately, no pop-up bar from Vaux Brewery as there had been two years prior.  This was a shame as I’m quite partial to a nice pint of real ale, however, the kind folks at Sunderland Software City had provided us with a number of buckets of ice-cold beers, wines and other beverages.  Of course, I was driving so I had to stick to the soft drinks anyway!

I was one of the first people to arrive at the Sunderland Software City venue, as I’d driven the short distance from the University whereas most other people attending the reception were walking.  I grabbed myself a can of Diet Coke and quickly got chatting to some fellow conference attendees, sharing experiences about our day, getting to know each other and finding out about the work we each do.

IMG_20151024_180512 Not long after we’d got chatting, a few of the staff from the centre were scurrying towards the door.  What we soon realised was that the “food” element of the “food & drink” reception was arriving.  We were being treated to what must be the single largest amount of pizza I’ve ever seen in one place: 76 delicious pizzas delivered from Pizza Hut!  Check out the photo of this magnificent sight!  (Oh, and those boxes of pizza are stacked two deep, too!)

So, once the pizzas had all been delivered and laid out for us on the extensive table top, we all got stuck in.  A few slices of pizza later and an additional can of Diet Coke to wash it down and it was back to mingling with the crowd for some more networking.

Before leaving, I managed to have a natter with Andy Westgarth, the main conference organiser, about the trials and tribulations of running a conference like DDD North.  Despite the fact that Andy should be living and working in the USA by the time the next DDD North conference rolls around, he did assure me that the conference is in very safe hands and should continue next year.

After some more conversation, it was finally time for me to leave and head off back to my in-laws in Newcastle.  And with that another superb DDD North conference was over.  Here’s looking forward to next year!

MVC Razor Views and automated Azure deployments

The other day, I decided that I’d publish a work-in-progress website to an Azure Website – one of the free websites included in the free tier available to Azure subscribers.  The website was a plain old vanilla ASP.NET MVC site.  Nothing fancy, just some models, some controllers, some infrastructure code and, of course, some views.

I was deploying this to Azure via a direct connection to a private BitBucket Git repository I had within my BitBucket account.  This way, a simple “git commit” and “git push” would have, in a matter of seconds, my latest changes available for me to see in a real “on-the-internet” hosted site, thus giving me a better idea of how my site would look and feel than simply running the site on localhost.

An issue I almost instantly came up against was that whenever I’d make a tiny change to just a view file – i.e. no code changes, just tweaks to the HTML markup in the .cshtml Razor view file – and commit and push that change, the automated deployment process would kick in and the site would be successfully deployed to Azure, however the deployed Razor view would still be the old, unchanged version.

What on earth was going on?  I could run the site locally and see the layout change just fine, however the version deployed to Azure was the older version, prior to the change.  My initial workaround was to manually FTP the changed .cshtml files from my local machine to the relevant place in the Azure website, which did work, but was an inelegant solution.  I needed to find out why the changed Razor views were not being reflected in the Azure deployments.

After some research, I found the answer.

Turns out that some of my view files (the .cshtml files themselves) had the “Build Action” property set to “None”.  In order for your view to be correctly deployed when using automated Azure deployments, the Build Action property must be set to “Content”.
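
For reference, the Build Action simply corresponds to the item type used for the file inside the (old-style) .csproj file, so the fix amounts to the difference between the following two lines (the view path here is just an example):

<None Include="Views\Home\Index.cshtml" />       <!-- view will NOT be deployed -->
<Content Include="Views\Home\Index.cshtml" />    <!-- view WILL be deployed -->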

Now, quite how the Build Action property had come to be set that way, I have no idea.  Some of the first views I’d added to the project had the Build Action set correctly, however some of the newer views I’d added were set incorrectly.  I had not explicitly changed any of these properties on any files, so how some were originally set to Content and others set to None was a mystery.  And then I had an idea…

I had been creating views in a couple of different ways.  It turns out that when I was creating the view files using the Visual Studio context menu options, the files were being created with the correct default Build Action of Content.  However, when I was creating the view files using ReSharper’s QuickFix commands – hitting Alt + Enter on the return View(); line, which shows the ReSharper menu allowing you to create a new Razor view with or without a layout – the view file would be created with a default Build Action of None.

Bizarrely, attempting to recreate this issue in a brand new ASP.NET MVC project did not reproduce the issue (i.e. the ReSharper QuickFix command correctly created a View with a default Build Action of Content!)

I can only assume that this is some strange intermittent issue with ReSharper, although it’s quite probably caused by a specific difference between the projects that I’ve tested this on; thus far, I have no idea what that difference may be…  I’ll keep looking and, if I find the culprit, I’ll post an update here.

Until then, I’ll remain vigilant and always remember to double-check the Build Action property and ensure it’s set to Content for all newly created Razor Views.

OWIN-Hosted Web API in an MVC Project – Mixing Token-based auth with FormsAuth

One tricky little issue that I recently came across in a new codebase was having to extend an API written using ASP.NET Web API 2.2 which was entirely contained within an ASP.NET MVC project.  The Web API was configured to use OWIN, the abstraction layer which helps to remove dependencies upon the underlying IIS host, whilst the MVC project was configured to use System.Web and communicate with IIS directly.

The intention was to use token-based HTTP Basic authentication with the Web API controllers and actions, whilst using ASP.NET Membership (Forms Authentication) with the MVC controllers and actions.  This is fairly easy to hook up initially, and all authentication within the Web API controllers was implemented via a customized AuthorizationFilterAttribute:

[AttributeUsage(AttributeTargets.Class | AttributeTargets.Method, AllowMultiple = false)]
public class TokenAuthorize : AuthorizationFilterAttribute
{
    private bool _active = true;

    public TokenAuthorize() { }

    /// <summary>
    /// Overridden constructor to allow explicit disabling of this filter's behavior.
    /// Pass false to disable (same as no filter, but declarative).
    /// </summary>
    /// <param name="active"></param>
    public TokenAuthorize(bool active)
    {
        _active = active;
    }

    /// <summary>
    /// Override of the Web API filter method to handle the Basic Auth check.
    /// </summary>
    /// <param name="actionContext"></param>
    public override void OnAuthorization(System.Web.Http.Controllers.HttpActionContext actionContext)
    {
        // Quit out here if the filter has been invoked with active set to false.
        if (!_active) return;

        var authHeader = actionContext.Request.Headers.Authorization;
        if (authHeader == null || !IsTokenValid(authHeader.Parameter))
        {
            // No valid authorization header has been supplied, therefore we are definitely not
            // authorized, so return a 401 Unauthorized result.
            actionContext.Response = actionContext.ControllerContext.Request.CreateErrorResponse(HttpStatusCode.Unauthorized, Constants.APIToken.MissingOrInvalidTokenErrorMessage);
        }
    }

    private bool IsTokenValid(string parameter)
    {
        // Perform basic token checking against a value
        // stored in a database table.
        return true;
    }
}

This is hooked up onto a Web API controller quite easily with an attribute, applied at either the class or action method level:

[TokenAuthorize]
public class ContentController : ApiController
{
    public IHttpActionResult GetContent_v1(int contentId)
    {
        var content = GetIContentFromContentId(contentId);
        return Ok(content);
    }
}

Now, the problem with this becomes apparent when a client hits an API endpoint without the relevant authentication header in their HTTP request.  Debugging through the code above shows the OnAuthorization method being correctly called and the response being correctly set to an HTTP status code of 401 (Unauthorized), however watching the request and response via a web debugging tool such as Fiddler shows that we’re actually getting back a 302 response, which is the HTTP status code for a redirect.  The client will then follow this redirect with another request/response cycle, this time getting back a 200 (OK) status with a payload of our MVC login page HTML.  What’s going on?

Well, despite correctly setting our response as a 401 Unauthorized, because we’re running the Web API Controllers from within an MVC project which has Forms Authentication enabled, our response is being captured higher up the pipeline by ASP.NET wherein Forms Authentication is applied.  What Forms Authentication does is to trap any 401 Unauthorized response and to change it into a 302 redirect to send the user/client back to the login page.  This works well for MVC Web Pages where attempts by an unauthenticated user to directly navigate to a URL that requires authentication will redirect the browser to a login page, allowing the user to login before being redirected back to the original requested resource.  Unfortunately, this doesn’t work so well for a Web API endpoint where we actually want the correct 401 Unauthorized response to be sent back to the client without any redirection.

Phil Haack wrote a blog post about this very issue, and the Update section at the top of that post shows that the ASP.NET team implemented a fix: the SuppressFormsAuthenticationRedirect property on the HttpResponse object!

So, all is good, yes?   We simply set this property to True before returning our 401 Unauthorized response and we’re good, yes?

[AttributeUsage(AttributeTargets.Class | AttributeTargets.Method, AllowMultiple = false)]
public class TokenAuthorize : AuthorizationFilterAttribute
{
    // snip...
    public override void OnAuthorization(System.Web.Http.Controllers.HttpActionContext actionContext)
    {
        var authHeader = actionContext.Request.Headers.Authorization;
        if (authHeader == null || !IsTokenValid(authHeader.Parameter))
        {
            // Naive attempt: suppress the redirect before returning the 401 (as explained below, this doesn't work here).
            HttpResponse.SuppressFormsAuthenticationRedirect = true;
            actionContext.Response = actionContext.ControllerContext.Request.CreateErrorResponse(HttpStatusCode.Unauthorized, Constants.APIToken.MissingOrInvalidTokenErrorMessage);
        }
    }
}

Well, no.

You see, the SuppressFormsAuthenticationRedirect property hangs off the HttpResponse object.  The HttpResponse object is part of the System.Web assembly and is intimately tied into the underlying ASP.NET / IIS pipeline.  Our Web API controllers are all “hosted” on top of OWIN.  This, very specifically, divorces all of our code from the underlying server that hosts the Web API.  That actionContext.Response above isn't an HttpResponse object, it's an HttpResponseMessage object.  The HttpResponseMessage object is used by OWIN precisely because it’s divorced from the underlying HttpContext (which is inherently tied to the underlying hosting platform – IIS) and, as such, it doesn’t contain, nor does it have access to, an HttpResponse object or the required SuppressFormsAuthenticationRedirect property that we so desperately need!

There are a number of attempted “workarounds” that you could try in order to get access to the HttpContext object from within your OWIN-compliant Web API controller code, such as this one from Hongmei at Microsoft:

HttpContext context;
Request.Properties.TryGetValue<HttpContext>("MS_HttpContext", out context);

Apart from this not working for me, it seems quite nasty and “hacky” at best, relying upon a hard-coded string that references a request “property” that just might contain the good old HttpContext.  There’s also other very interesting and useful information in a Stack Overflow post that gets closer to the problem, although the suggestion to configure the IAppBuilder to use Cookie Authentication and then perform your own logic in the OnApplyRedirect event will only work in specific situations, namely when you’re using the newer ASP.NET Identity, which itself, like OWIN, was designed to be disconnected from the underlying System.Web / IIS host.  Unfortunately, in my case, the MVC pages were still using the older ASP.NET Membership system rather than the newer ASP.NET Identity.

So, how do we get around this?

Well, the answer lies within the setup and configuration of OWIN itself.  OWIN allows you to plug specific “middleware” components into the OWIN pipeline, and these components can inspect and modify every request and response that passes through the pipeline.  It was this middleware that was being configured in the Stack Overflow suggestion of using app.UseCookieAuthentication.  In our case, however, we simply want to inject some arbitrary code into the OWIN pipeline to be executed on every request/response cycle.

Since all of our code to setup OWIN for the Web API is running within an MVC project, we do have access to the System.Web assembly’s objects.  Therefore, the fix becomes the simple case of ensuring that our OWIN configuration contains a call to a piece of middleware that wraps a Func<T> that merely sets the required SuppressFormsAuthenticationRedirect property to true for every OWIN request/response:

// Configure Web API / OWIN to suppress the Forms Authentication redirect when we send a 401 Unauthorized response
// back from a Web API.  As we're hosting our Web API inside an MVC project with Forms Auth enabled, without this,
// the 401 response would be captured by the Forms Auth processing and changed into a 302 redirect with a payload
// of the login page.  This code implements some OWIN middleware that explicitly prevents that from happening.
app.Use((context, next) =>
{
    HttpContext.Current.Response.SuppressFormsAuthenticationRedirect = true;
    return next.Invoke();
});

And that’s it!

Because this code is executed from the Startup class that is bootstrapped when the application starts, our middleware runs in the context of the wider application and therefore has access to the HttpContext object of the MVC project’s hosting environment.  This, in turn, allows us to set the all-important SuppressFormsAuthenticationRedirect property on every OWIN request/response.

Here’s the complete Startup.cs class for reference:

[assembly: OwinStartup("Startup", typeof(Startup))]
namespace SampleProject
    public class Startup
        public void Configuration(IAppBuilder app)
        private void ConfigureWebAPI(IAppBuilder app)
            var config = new HttpConfiguration();

            // Snip of other configuration.

            // Configure WebAPI / OWIN to suppress the Forms Authnentication redirect when we send a 401 Unauthorized response
            // back from a web API.  As we're hosting out Web API inside an MVC project with Forms Auth enabled, without this,
            // the 401 Response would be captured by the Forms Auth processing and changed into a 302 redirect with a payload
            // for the Login Page.  This code implements some OWIN middleware that explcitly prevents that from happening.
            app.Use((context, next) =>
                HttpContext.Current.Response.SuppressFormsAuthenticationRedirect = true;
                return next.Invoke();


SSH with PuTTY, Pageant and PLink from the Windows Command Line

I’ve recently started using Git for my revision control needs, switching from Mercurial, which I’d used for a number of years.  I had mostly used Mercurial from a GUI, namely TortoiseHg, only occasionally dropping to the command line for ad-hoc Mercurial commands.

In switching to Git, I initially switched to an alternative GUI tool, namely SourceTree, however I very quickly decided that this time around I wanted to use the command line as my main interface with the revision control tool.  This was a bold move, as Git’s syntax had always put me off and made me heavily favour Mercurial, due to Mercurial’s somewhat nicer command line syntax and the fact that it generally “plays better” with Windows.

So, I dived straight in and tried to get my GitHub account all set up on a new PC, accessing Git via the brilliant ConEmu terminal and using SSH for all authentication with GitHub itself.  As this is Windows, the SSH functionality was provided by PuTTY, and specifically by the PLink and Pageant utilities within the PuTTY solution.

I already had an SSH key generated and registered with GitHub, and the private key was loaded into Pageant, which was running in the background on Windows.  The first little stumbling block was to get the command line git tool to realise it had to use the PuTTY tools in order to retrieve the SSH key that was to be used for authentication.

This required adding an environment variable called GIT_SSH which points to the path of the PuTTY PLINK.exe program.  Adding this tells Git that it must use PLink, which acts as a kind of “gateway” between the program that needs the SSH authentication and the other program – in this case PuTTY’s Pageant – that is providing the SSH key.  This is a required step, and is not the default when using Git on Windows, as Git is really far more aligned to the Unix/Linux way of doing things, where SSH is most frequently provided by OpenSSH.
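
For example, assuming a default PuTTY installation path (adjust this to wherever plink.exe lives on your machine), the variable can be set from a command prompt like so:

setx GIT_SSH "C:\Program Files (x86)\PuTTY\plink.exe"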

After having set up this environment variable, I could see that Git was attempting to use the PLINK.EXE program to retrieve the SSH key loaded into Pageant in order to authenticate with GitHub, however, there was a problem.  Although I definitely had the correct SSH Key registered with GitHub, and I definitely had the correct SSH Key loaded in Pageant (and Pageant was definitely running in the background!), I was continually getting the following error:


The clue to what’s wrong is there in the error text – the server that we’re trying to connect to, in this case github.com, does not have its RSA key “installed” on our local PC.  I say “installed” as the PuTTY tools will cache remote server RSA keys in the Windows registry.  (If you’re using OpenSSH – either on Windows or, more likely, on Unix/Linux – they get cached in a completely different place.)

Although the error indicates the problem, unfortunately it gives no indication of how to correct it.

The answer lies with the PLINK.exe program.  We have to issue a special one-off PLINK command to have it connect to the remote server, retrieve that server’s RSA key, and then cache (or “install”) the key in the registry.  This allows subsequent usage of PLINK as a “gateway” (i.e. when called from the git command line tool) to authenticate the server machine first, before it even attempts to authenticate with our own SSH key.

The plink command is simply:

plink.exe -v -agent git@github.com

or:

plink.exe -v -agent git@bitbucket.org

(the git@github.com and git@bitbucket.org parts of the command are the specific addresses required when authenticating with the GitHub or Bitbucket servers, respectively).

The -v switch simply means verbose output and can be safely omitted.  The real magic is in the -agent switch, which instructs plink to use Pageant for the key:


Now we get the opportunity to actually “store” (i.e. cache or install) the key.  If we say yes, this adds the key to our Windows Registry:


Once we’ve completed this step, we can return to our command window and attempt our usage of git against our remote repository on either GitHub or BitBucket once more.  This time, we should have much more success:


And now everything works as it should!