Amazon Web Services (AWS) recently announced that Simple Queue Service (SQS) is finally a supported event source for Lambda. This is extremely exciting news, as I have been waiting for this for two long years! It got me thinking about what other features I am desperately waiting to see from AWS Lambda. After some quick brainstorming, here is my wish list for Lambda for 2018. These items would address many recurring challenges Lambda users face in production, including:
- better monitoring at scale
- cold start performance
- scalability in spiky load scenarios
So, I hope someone from the Lambda team is reading this. Here we go!
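As a taste of the newly announced integration, here is a minimal sketch of what an SQS-triggered handler can look like in Python. The event shape follows the documented SQS record batch format; the function name and payload handling are illustrative:

```python
import json


def handler(event, context):
    """Process a batch of messages delivered by the SQS event source."""
    records = event.get("Records", [])
    for record in records:
        # Each record carries the raw SQS message body as a string.
        payload = json.loads(record["body"])
        print(f"Processing message {record['messageId']}: {payload}")
    # Returning normally signals success, and Lambda deletes the batch
    # from the queue on your behalf.
    return {"processed": len(records)}
```

Note that a failure (an uncaught exception) leaves the whole batch on the queue to be retried, so handlers should be idempotent.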
Do you have a scheduled or long-running task on an AWS ECS cluster and want to be notified when it fails? You can subscribe to the ECS event stream with AWS CloudWatch Events rules and use Amazon SNS to send a notification to your email when a container's state changes.
The following example uses the Serverless Framework to set up a service that sends you an email when a container stops with a non-zero exit status. You can find the source code for this example on GitHub; it is the same service that we are going to deploy here with the Serverless Framework.
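The service in the post wires the CloudWatch Events rule to SNS declaratively. Purely as an illustration of the same filtering logic, a Lambda function could inspect an ECS Task State Change event and publish only on real failures; the `TOPIC_ARN` environment variable and the topic wiring below are assumptions, not part of the original service:

```python
import os


def has_failed_container(event):
    """Return True if any container in an ECS Task State Change event
    stopped with a non-zero exit code."""
    detail = event.get("detail", {})
    if detail.get("lastStatus") != "STOPPED":
        return False
    return any(c.get("exitCode", 0) != 0 for c in detail.get("containers", []))


def handler(event, context):
    if has_failed_container(event):
        import boto3  # deferred so the pure check above has no AWS dependency
        boto3.client("sns").publish(
            TopicArn=os.environ["TOPIC_ARN"],  # assumed to point at your topic
            Subject="ECS task stopped with a non-zero exit code",
            Message=str(event.get("detail")),
        )
```

Keeping the failure check in a separate pure function makes it easy to unit-test without any AWS credentials.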
You might have already heard about our new project, Serverless Components. Our goal was to encapsulate common functionality into so-called “components”, which could then be easily re-used, extended and shared with other developers and other serverless applications.
In this post, I’m going to show you how to compose a fully-fledged, REST API-powered application, all by using several pre-built components from the component registry.
Corral is a MapReduce framework designed to be deployed to serverless platforms, like AWS Lambda. It presents a lightweight alternative to Hadoop MapReduce. Much of the design philosophy was inspired by Yelp’s mrjob — corral retains mrjob’s ease-of-use while gaining the type safety and speed of Go.
Corral’s runtime model consists of stateless, transient executors controlled by a central driver. Currently, the best environment for deployment is AWS Lambda, but corral is modular enough that support for other serverless platforms can be added as support for Go in cloud functions improves.
Corral is best suited for data-intensive but computationally inexpensive tasks, such as ETL jobs.
More details about corral’s internals can be found in this blog post.
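Corral itself is written in Go, but the driver/executor split it describes is classic MapReduce. Purely as an illustration of that model (this is not corral's API), a word-count job could be driven like this in Python:

```python
from collections import defaultdict
from itertools import chain


# Each phase function below is stateless, like corral's transient
# executors; run_job plays the role of the central driver.

def map_words(split):
    """Map phase: emit (word, 1) pairs for one input split."""
    return [(word.lower(), 1) for word in split.split()]


def reduce_counts(word, counts):
    """Reduce phase: combine all the counts collected for one key."""
    return word, sum(counts)


def run_job(splits):
    """Driver: fan out map tasks, shuffle by key, fan out reduce tasks."""
    mapped = chain.from_iterable(map_words(s) for s in splits)
    shuffled = defaultdict(list)
    for word, count in mapped:
        shuffled[word].append(count)
    return dict(reduce_counts(w, c) for w, c in shuffled.items())
```

Because the map and reduce functions hold no state between calls, each invocation maps naturally onto a short-lived function execution, which is exactly why the model fits Lambda-style platforms.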
In this blog post, I’ll walk you through the steps for setting up continuous replication of an AWS CodeCommit repository from one AWS region to another AWS region using a serverless architecture. CodeCommit is a fully-managed, highly scalable source control service that stores anything from source code to binaries. It works seamlessly with your existing Git tools and eliminates the need to operate your own source control system. Replicating an AWS CodeCommit repository from one AWS region to another AWS region enables you to achieve lower latency pulls for global developers. This same approach can also be used to automatically back up repositories currently hosted on other services (for example, GitHub or BitBucket) to AWS CodeCommit.
This solution uses AWS Lambda and AWS Fargate for continuous replication. Benefits of this approach include:
- The replication process can be easily set up to trigger based on events, such as commits made to the repository.
- Setting up a serverless architecture means you don’t need to provision, maintain, or administer servers.
- You can incorporate this solution into your own DevOps pipeline. For more information, see the blog Invoke an AWS Lambda function in a pipeline in AWS CodePipeline.
Note: AWS Fargate has a 10 GB storage limit and is available in the US East (N. Virginia) region. A similar solution that uses Amazon EC2 instances to replicate the repositories on a schedule was published in a previous blog post and can be used if your repository does not meet these conditions.
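As a rough sketch of the Lambda half of such a pipeline, the function below reacts to a CodeCommit repository event and starts a Fargate replication task. The cluster, task definition, container name, and subnet values are all illustrative placeholders, not names from the original solution:

```python
import os


def build_run_task_args(repository_name):
    """Build the ecs.run_task arguments for one replication run.
    All resource names here are illustrative defaults."""
    return {
        "cluster": os.environ.get("CLUSTER", "replication-cluster"),
        "taskDefinition": os.environ.get("TASK_DEF", "codecommit-replicator"),
        "launchType": "FARGATE",
        "networkConfiguration": {
            "awsvpcConfiguration": {
                "subnets": os.environ.get("SUBNETS", "subnet-12345").split(","),
                "assignPublicIp": "ENABLED",
            }
        },
        "overrides": {
            "containerOverrides": [{
                "name": "replicator",
                "environment": [
                    {"name": "REPO_NAME", "value": repository_name},
                ],
            }]
        },
    }


def handler(event, context):
    """Triggered by a CloudWatch Events rule on CodeCommit repository changes."""
    import boto3  # deferred import so the builder above is testable offline
    repo = event["detail"]["repositoryName"]
    boto3.client("ecs").run_task(**build_run_task_args(repo))
```

Splitting the argument construction from the API call keeps the Fargate launch parameters easy to test and to override per environment.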
I’ve been helping out on a project recently where we’re doing a number of integrations with third-party services. The integration platform is built on AWS Lambda and the Serverless Framework.
Aside from the data hygiene questions that you might expect in an integration project like this, one of the first things we’ve run into is a fundamental constraint in productionizing Lambda-based systems. As of today, AWS Lambda has the following limits (among others):
- max of 1000 concurrent executions per region (a soft limit that can be increased), and
- max duration of 5 minutes for a single execution
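The duration cap in particular forces you to design for resumption. One defensive pattern is to check the context object's remaining-time budget (via the real `get_remaining_time_in_millis` method on the Lambda context) and hand back unfinished work before the hard stop; a minimal sketch, where the safety margin is an arbitrary choice:

```python
SAFETY_MARGIN_MS = 10_000  # stop ~10 seconds before the 5-minute hard cap


def process_batch(items, context, work_fn):
    """Process as many items as the remaining execution time allows,
    returning whatever is left so a follow-up invocation can resume."""
    remaining = list(items)
    while remaining:
        if context.get_remaining_time_in_millis() < SAFETY_MARGIN_MS:
            break  # leave the rest for the next invocation
        work_fn(remaining.pop(0))
    return remaining
```

The leftover items can be re-queued (for example, back onto SQS now that it is a supported event source) so the next invocation picks up where this one stopped.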