AWS Builders Day Notes

Tuesday 27 Feb 2018 at 21:00
Conferences  |  conferences development aws cloud

This post represents my personal rough notes from the AWS Builders Day Conference held at the Victoria Warehouse in Manchester on 27th February 2018.

At the event, there was an area where attendees could "Ask An Architect". I spoke with one of the AWS architects there to ask about automating the creation of EC2 instance snapshots, then automating the creation of AMIs and auto-scaling launch configurations from those snapshots. He said that there's no way to automate it via the AWS Web Console; performing the operations using the web console will always be a manual process.

He said that it would be possible to automate, but that you'd need to use the AWS CLI and build your own script around the individual AWS CLI calls required. You would have to install/update the required software on the running instance manually first; the script would then terminate the instance and create the snapshot, AMI and launch configuration automatically.

It would be possible to write the script in a language supported by AWS Lambda, targeting the AWS SDK exposed to lambda functions, so that the entire operation could be performed by invoking a lambda function. Alternatively, the script could be written in Bash or PowerShell and run either manually or as part of an automated build/maintenance job.
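
A hypothetical sketch of the CLI sequence the architect described. The IDs, names and instance type are placeholders, and the commands are only assembled as strings here for illustration; note I use `stop-instances` rather than terminate, since the instance has to still exist to create an image from it.

```python
# Hypothetical sketch only: assembles the AWS CLI commands for the
# snapshot -> AMI -> launch-configuration flow as plain strings. In a real
# script you would execute these (e.g. via subprocess) and feed the ImageId
# returned by create-image into the final step.
def snapshot_pipeline_commands(instance_id, ami_name, launch_config_name):
    return [
        # Stop (rather than terminate) so the instance still exists
        # to be imaged.
        f"aws ec2 stop-instances --instance-ids {instance_id}",
        f"aws ec2 create-image --instance-id {instance_id} --name {ami_name}",
        # create-image returns an ImageId; substitute it for the
        # placeholder below once known.
        "aws autoscaling create-launch-configuration "
        f"--launch-configuration-name {launch_config_name} "
        "--image-id <image-id-from-create-image> "
        "--instance-type t2.micro",
    ]
```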

Serverless: State of the union - Ian Massingham @ianmmmm

  • What is serverless? No server management, Flexible scaling, High availability, No idle capacity.
  • AWS Serverless functions (i.e. Lambda) automatically use the underlying AWS primitives of Regions, Availability Zones, Data Centres etc. to scale and provide redundancy. This is abstracted away and performed automatically.
  • Pricing is always based on requests. You never pay when the function is not running.
  • Standard set-up of a "serverless app": API Gateway for the API, DynamoDB for data storage, static content served from AWS S3 & CloudFront, dynamic content provided by AWS Lambda functions.
  • AWS Lambda now has a "Serverless Application Repository" as part of the "Create Function" flow for lambda functions.
  • AWS has "SAM" (Serverless Application Model) which is an extension for CloudFormation templates to make it easier to provision the required resources for common serverless applications.
  • AWS Lambda has a new code editor in the AWS web console. It's based on AWS Cloud9, which is a cloud-based IDE.
  • AWS DynamoDB has a local version, "DynamoDB Local" - Can this run outside of AWS on local machines?
  • AWS Lambda has a limitation of 1000 concurrent executions of functions (this applies across all of your functions - not just one function) within a single region. Amazon allows you to exceed this, but only after manually requesting an increase.
  • AWS Lambda functions are deployed and run on a container that contains the runtime and lambda code. This is done by the AWS infrastructure automatically.
  • AWS now has AWS X-Ray which allows instrumentation and monitoring within AWS Lambda functions.
  • AWS Lambda now has optimized cold-start with up to 80% faster start-up times. This is especially useful for byte-code languages such as Java & C#.
  • AWS Lambda memory capacity has been increased from 1.5GB to 3GB.
  • AWS Lambda functions have a run-time cap of 5 minutes.
  • AWS Lambda now supports .NET Core 2.0 and Go.
  • AWS Cloud9 costs about $1.60 per developer per month. It only runs in the browser at the moment, so you need an internet connection. Built-in GitHub support, Lambda blueprints (templates?), built-in "SAM Local" (allows "local" testing/debugging - it's still in AWS but not fully deployed to AWS Lambda). Can push code changes to GitHub or any other git-enabled endpoint for online/offline editing (but needs to be online to really "use" it, so what's the point of offline?).
  • Can set up a CI/CD pipeline in AWS. Use AWS CodeCommit or GitHub to get code (only those two?), use AWS CodeBuild for building (uses a buildspec.yaml file to define the build process - all done on a clean VM each time), deploy with AWS CloudFormation for infrastructure creation then AWS CodeDeploy to deploy build artefacts.
  • AWS CodeStar is an overarching service that provides a dashboard and orchestration of all of the previous CI/CD steps and AWS tools used.
  • AWS CodeDeploy now integrates with AWS Lambda!
  • AWS Lambda includes "weighted aliases". This allows splitting of incoming traffic to two or more versions of a function (i.e. 90% of traffic to old version, 10% of traffic to new version).
  • AWS API Gateway includes Sub-stages allowing shifting of traffic from one version to another.
  • AWS Lambda has concurrency metrics and per-function concurrency throttles. This allows constraining the concurrent executions of a function. Most useful when an AWS function is used to interact with on-premise enterprise systems with limited concurrency, or as a means of cost-limiting to prevent too many copies of a function running at the same time.
  • AWS API Gateway allows creating endpoints that are private to a given AWS VPC (PrivateLink). i.e. internal API endpoints rather than public.
  • AWS CloudWatch now supports structured logging for APIs.
  • AWS CloudTrail (API audit logging) now supports AWS Lambda functions.
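
As an aside on the weighted-alias point above: the routing configuration boils down to a map of extra version weights. A hypothetical helper, whose return shape matches (to the best of my knowledge) the `RoutingConfig` parameter accepted by boto3's `update_alias`:

```python
# Hypothetical helper illustrating a Lambda weighted-alias routing
# configuration. The alias's main version receives the remaining traffic,
# so the weight routed to the new version must be below 1.
def weighted_routing_config(new_version: str, new_weight: float) -> dict:
    if not 0 <= new_weight < 1:
        raise ValueError("weight must be in [0, 1)")
    return {"AdditionalVersionWeights": {new_version: new_weight}}
```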

Serverless Development - Paul Maddox - @paulmaddox

  • Paul is the creator of AWS SAM Local which allows running and debugging AWS Lambda and API Gateway locally on your own machine!
  • AWS Lambda natively supports NodeJS, Python, Java & C#, but you can write a shim in any of these languages to invoke a binary written in another language, as AWS Lambda is just Amazon Linux under the hood. If it can run on Amazon Linux, it'll work.
  • Serverless use cases: web apps, backends for mobile apps / IoT, data processing (real-time, MapReduce, batch) and others.
  • 3GB Lambda functions get an extra vCPU! (See photo for costings).
  • Lambdas can be triggered by stream-based things like DynamoDB and Kinesis as well as synchronous (api / web) or async (triggered by S3 upload etc.).
  • The easiest and fastest way to get to the "next level" of AWS Lambda development, up from just editing functions in the web console, is to use AWS CodeStar. It mimics the way Amazon employees create new projects internally. It creates the source code repo, CI/CD pipeline, CloudFormation templates and CodeBuild/CodeDeploy projects. This could all be done manually, but CodeStar simply automates the creation of all these services.
  • Paul creates a demo app using Github for source control and clones it to his local machine and fires up Visual Studio Code to edit his nodejs file that is his lambda function.
  • AWS SAM Local uses Docker to run the AWS runtimes on your machine so you can "deploy" your lambda functions to the local AWS SAM Local hosted AWS Lambda.
  • For languages that require it, you can create a build script that performs the compilation/build locally prior to "deploying" to SAM Local. This allows using the build artifact in SAM Local.
  • SAM Local is completely open source! Pre-requisite is having Docker installed locally.
  • SAM (Serverless Application Model) is an open standard (Apache 2.0). It's an extension to CloudFormation to allow easier creation of the resources required for a serverless application.
  • SAM templates can have CloudFormation sections added to them, but SAM templates greatly simplify the resource requirements (See photo).
  • Can develop and test Lambda on local machine in an environment that resembles Lambda (OS, Libraries, Runtime, Configured Limits).
  • You can use DynamoDB locally too, with DynamoDB Local.
  • BUT: you can only really replicate the "compute" part of the serverless app locally, not the entirety of your cloud infrastructure. So you need some amount of internet connectivity - if your Lambda function calls something in AWS (another lambda or API etc.) you'll need connectivity to AWS for this!
  • AWS CodeBuild is the place to add in things like unit testing. Edit the buildspec.yaml file to achieve this.
  • There's a separate framework called "Serverless Framework" that can be used with AWS (and other cloud providers). It apparently allows exporting a SAM template file, but it's a completely different way of doing serverless than SAM. (It actually predates Amazon's own SAM/SAM Local stuff, I think.)
  • During Q&A, Paul says there are a lot of customer requests for support of BitBucket and other services, but no hard timescales for this to be implemented.
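
Not from the talk itself, but a minimal example of the kind of SAM template being described (resource names, runtime and path are illustrative):

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  # One function, exposed via an API Gateway GET endpoint. SAM expands
  # this into the full set of CloudFormation resources at deploy time.
  HelloFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs6.10
      Events:
        HelloApi:
          Type: Api
          Properties:
            Path: /hello
            Method: get
```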

Serverless Architectural Patterns - Adrian Hornsby - @adhorn

  • The big difference between async and stream-based lambda invocation is ordering: stream-based is guaranteed to be ordered; async is not.
  • lambda best practices: minimize package size, separate handler from function core, use env. vars to modify operational behaviour, leverage memory management to "right-size" functions, delete old functions
  • use timed invocations (ie. every 5 minutes) of lambda functions to both act as a health check for the function and to keep the function "warm" (i.e. prevent excessive cold start times).
  • lambda started life purely as compute on top of S3 only! ie. do some small compute on an S3 event (i.e. add a file)
  • lambda can be seen as "message passing with compute on top"
  • a very common use case is user input >>> S3 bucket >>> lambda function >>> another S3 bucket >>> lambda function >>> (S3 / RDS etc.).
  • This can lead to orchestration difficulties when there are many lambda functions in this "chain". This is where AWS Step Functions comes in. It allows orchestration of many lambda functions - i.e. it creates a state machine for the functions/process.
  • Adrian shows a demo of a web page that allows uploading a photo: it adds it to an S3 bucket, runs a lambda function to examine the image and extract tags, then resizes it and adds the resized image to a DynamoDB data store. It's a demo project available on GitHub (Adrian forgot to put the URL on the slides!).
  • Capital One bank has something called "Cloud Custodian". It runs lambda functions in response to CloudWatch and CloudTrail logs and events to generate SNS notifications for compliance purposes.
  • Another demo uses Amazon Polly (a service that synthesizes speech from text) with lambda to create an MP3 spoken version of some text in response to an RSS feed request.
  • Serverless Web application architecture uses cloudfront in front of S3 for static content (css, js libs etc.) and api gateway in front of lambda which in turn is in front of databases for dynamic content. AWS Cognito is used for authentication purposes.
  • API Gateway supports throttling, caching and usage plans. usage plans allow creating "tiers" of api usage so you can resell access to your api based on this.
  • API Gateway also supports custom authorisation providers. This can be openid, saml or even on-premise providers.
  • Data processing architectures can be accomplished with Kinesis, which allows processing real-time, streaming data.
  • There's now a tool in S3 to examine how S3 data is used, so you can easily tell if it should be moved to S3 IA (Infrequent Access) or Glacier.
  • AWS Kinesis is a service to "shard" stream data - i.e. split data into multiple "lanes" (think of adding extra lanes to a motorway to increase the number of cars that can pass a given point in a given amount of time).
  • Kinesis Streams allows sending data to EC2 instances and other outputs; Kinesis Firehose only allows output to S3, Elasticsearch or Redshift.
  • A good case study for Kinesis is Supercell. They're a gaming company who created Clash of Clans and they use Kinesis to analyse data in real time to see how users interact with their games. 45 million pieces of data per day!
  • MQTT can be used to send responses/messages from some compute function to thousands of connected devices (i.e. mobile phones).
  • Adrian shows a demo that has all attendees selecting one of four quadrants and he shows a webpage on stage that shows percentages of who's selecting which quadrant.
  • He then shows another demo where we visit a web page with a lightbulb graphic on it and he controls the brightness of the lightbulb and it responds in near real-time on our screens. Adrian reminds us that the traffic is routing via a data centre in Virginia, USA!
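
Filling in a bit of detail on the Kinesis sharding bullet: as I understand it, each record's partition key is MD5-hashed into a 128-bit key space that is divided evenly across shards. A simplified sketch of that routing (my own illustration, not from the talk):

```python
import hashlib

def shard_for_key(partition_key: str, num_shards: int) -> int:
    # Kinesis routes each record by MD5-hashing its partition key and
    # mapping the result into the 128-bit hash-key range, which is split
    # evenly across shards. Simplified illustration of that routing.
    h = int(hashlib.md5(partition_key.encode()).hexdigest(), 16)
    shard_size = 2 ** 128 // num_shards
    # min() guards against the tiny remainder when num_shards
    # doesn't divide 2**128 exactly.
    return min(h // shard_size, num_shards - 1)
```

The same partition key always lands on the same shard, which is what gives stream-based consumers their per-shard ordering guarantee.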

Building Global Serverless Backends powered by Amazon DynamoDB Global Tables - Adrian Hornsby - @adhorn

  • Werner Vogels (Amazon CTO) says "Failures are a given. Everything will fail eventually". We need to embrace failure, expect it and work with it.
  • System failures can be early failures, wear-out failures (configuration drift - the difference between the running version and the source-controlled version), random failures and observed failures.
  • Amazon did 50 million deployments in 2014 (around 1.6 per second!).
  • System availability = normal operation time / total time = MTBF / (MTBF + MTTR) (MTBF = mean time between failures, MTTR = mean time to repair).
  • Components in series reduce the overall availability of the entire system below that of each individual component. Components in parallel increase the overall availability of the entire system above that of any individual component.
  • generally speaking, a reliable system usually has high availability too, but a highly available system is not necessarily reliable.
  • Use exponential back-off when attempting to retry connections to a failed system.
  • message passing for async patterns - use message queues to decouple publishers and consumers of messages and increase resilience.
  • Netflix uses something like 160 separate microservices just for the UI!
  • Amazon's sales are affected by 1% for every 100ms of additional latency within the website!
  • Use active multi-region architectures to improve latency for end-users (content is closer to the user) and improve disaster recovery.
  • Netflix now runs active in 3 different AWS regions - 2 US and 1 EU.
  • Netflix's Simian Army (the chaos monkey) is now able to not only kill services or instances, but can now kill a whole AWS region to simulate how the overall Netflix architecture can survive.
  • Amazon now has its own backbone - a global 100GbE network connecting all of the AWS regions across the entire world. There's even private capacity reserved on the network between all regions except China. This allows AWS VPCs to span multiple regions as easily as they would span multiple availability zones within the same region!
  • AWS allows cross-region read replicas for Amazon RDS. Aurora, MySQL, MariaDB and PostgreSQL are all supported.
  • By the end of this year (2018) AWS will support multiple cross-region master nodes for Aurora DB.
  • Adrian talks about DynamoDB - Amazon's NoSQL database. Amazon's own website uses DynamoDB for its data. He shares some statistics about performance measured from Amazon Prime Day (See photo).
  • DynamoDB supports "global tables". These are tables that are created in one AWS region and configured to also exist in one or more other regions. Adding items to the table in any one of the regions will automatically replicate the item to all other regions! These updates happen in 500ms or less!
  • Interestingly, many services in AWS can be used across multiple regions without increasing costs, as serverless pricing is based on requests. For a given number of users making a given number of requests, the cost is the same whether they're served by one region or by multiple regions.
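
A quick worked sketch of the availability formulas from this talk (the component values are illustrative):

```python
def availability(mtbf_hours: float, mttr_hours: float) -> float:
    # A = MTBF / (MTBF + MTTR)
    return mtbf_hours / (mtbf_hours + mttr_hours)

def series(*components: float) -> float:
    # Every component must be up, so availabilities multiply:
    # the result is below the weakest component.
    result = 1.0
    for a in components:
        result *= a
    return result

def parallel(*components: float) -> float:
    # The system is down only if every redundant component is down,
    # so the result exceeds the best component.
    result = 1.0
    for a in components:
        result *= (1.0 - a)
    return 1.0 - result
```

For example, two 99%-available components give roughly 98% availability in series but 99.99% in parallel, which is the whole argument for redundancy.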

Building Advanced Serverless Applications with AWS Step Functions - Paul Maddox - @paulmaddox

  • Paul talks about baseball in the USA. On TV, they use military-grade radar to measure the ball at 2,000 measurements per second when a batter hits it, then use lambda to perform real-time compute on that data in order to display on-screen graphics with stats on the likelihood of the batter making first, second, third base etc.!
  • When moving to serverless functions from "standard" apps there's a number of concerns, management of state is one such concern.
  • Paul talks about state machines. He shares a quote from the internet that says that any sufficiently complex model probably contains a badly implemented state machine. He also shares another quote, however, that explains why developers don't use state machines explicitly: it's because software is built up over time, not born fully formed. We iterate on models and only at some later point realise that a state machine would have been the best way to model it.
  • AWS Step Functions uses state machines to define the states that an "application" composed of multiple lambda functions moves through.
  • AWS Lambda functions cannot run for more than 5 minutes. AWS Step Functions state machines can store state for up to 1 year!
  • AWS Step Functions is better than a simple "chain" of multiple lambda functions as it can correctly handle failures that may occur half way through that chain of functions.
  • AWS Step Functions can have decision branches - e.g. if processing images: if JPG do this, if PNG do that. There are even simple "wait" states to wait for an arbitrary amount of time.
  • The State machine is expressed as JSON using ASL (Amazon States Language).
  • You can build visual workflows with the AWS Step Functions state machine editor, and it can integrate with many other AWS services.
  • One use case is an email that asks a user to approve or deny something. The approval/denial can be an API Gateway endpoint that talks back to an AWS Step Functions state machine to advance a workflow based upon that user's decision.
  • A pre-built step function to help manage EBS snapshots and automatically clean them up / move them to a DR environment etc.
  • Thomson Reuters needs to convert 350 videos per day into 14 different formats. Encoding was taking the same length of time as the runtime of the video. Now they split the video at certain key-frames (i.e. into perhaps a few seconds of video each) and encode each segment in parallel, then re-combine the video segments at the end. This is done with a state machine. From this they also get a full history and audit trail of how each and every video was split and processed.
  • Within a state machine, you can easily create retries of steps, with a configurable maximum number of retries and exponential back-off for each subsequent retry.
  • If you're hitting the 5 minute limit or find yourself combining / chaining lambda functions together, this is a very good indicator that you could benefit from AWS Step Functions.
  • 7 state types: Task - a unit of work; Choice - adds branching logic; Parallel - fork and join the data across tasks; Wait - delay for a specified time; Fail - stop execution and mark it as failed; Succeed - stop execution and mark it as success; Pass - passes its input to its output.
  • Step Functions pricing is based on state transitions.
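
To tie the state types together, a small hypothetical ASL state machine with a Choice branch and a Retry using exponential back-off (the ARNs, field names and values are all placeholders):

```json
{
  "Comment": "Hypothetical sketch: branch on image type, retry with back-off",
  "StartAt": "CheckFormat",
  "States": {
    "CheckFormat": {
      "Type": "Choice",
      "Choices": [
        {"Variable": "$.format", "StringEquals": "jpg", "Next": "ProcessJpg"}
      ],
      "Default": "ProcessPng"
    },
    "ProcessJpg": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:REGION:ACCOUNT:function:ProcessJpg",
      "Retry": [
        {
          "ErrorEquals": ["States.TaskFailed"],
          "IntervalSeconds": 2,
          "MaxAttempts": 3,
          "BackoffRate": 2.0
        }
      ],
      "End": true
    },
    "ProcessPng": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:REGION:ACCOUNT:function:ProcessPng",
      "End": true
    }
  }
}
```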