Error Handling in AWS Lambda With Wrappers

We have been using web frameworks to develop web applications since long before serverless came around, and middlewares are a staple of these web frameworks. Express.js, for instance, lets you create middlewares at several stages of the request-handling pipeline, and even ships with a few common middlewares out of the box.

As our code moves into Lambda functions and we move away from these web frameworks, are middlewares still relevant? If so, how might they look in this new world of serverless?

In this post, we’ll revisit the idea of middlewares, their role in application development with AWS Lambda, and how we can use middlewares to enforce consistent error handling across all of our Lambda functions.
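
To make the idea concrete before you click through, here is a minimal sketch (not Epsagon's implementation) of what such a wrapper might look like as a Python decorator; the decorator name and the error response shape are assumptions for illustration only:

```python
import functools
import json
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def with_error_handling(handler):
    """Hypothetical wrapper that enforces one consistent error response shape."""
    @functools.wraps(handler)
    def wrapper(event, context):
        try:
            return handler(event, context)
        except Exception:
            # Log the full stack trace once, in one place, for every function.
            logger.exception("Unhandled error in %s", context.function_name)
            return {
                "statusCode": 500,
                "body": json.dumps({"message": "Internal server error"}),
            }
    return wrapper

@with_error_handling
def handler(event, context):
    # Business logic only; error handling lives in the wrapper.
    return {"statusCode": 200, "body": json.dumps({"ok": True})}
```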

https://epsagon.com/blog/enforce-consistent-error-handling-in-aws-lambda-with-wrappers/

How to add file upload features to your website with AWS Lambda and S3

The mechanism for uploading files from a browser has been around since the early days of the Internet. In the server-full environment it’s very easy to use Django, Express, or any other popular framework. It’s not an exciting topic — until you experience the scaling problem.

Imagine this scenario: you have an application that uploads files. All is well until the site suddenly gains popularity. Instead of handling a gigabyte of uploads a month, usage grows to 100 GB an hour for the month leading up to tax day. Afterwards, usage drops back down for another year. This is exactly the problem we had to solve.

File uploading at scale gobbles up your resources — network bandwidth, CPU, storage. All this data is ingested through your web server(s), which you then have to scale — if you’re lucky this means auto-scaling in AWS, but if you’re not in the cloud you’ll also have to contend with the physical network bottleneck issues.

You can also face some difficult race conditions if your server fails in the middle of handling the uploaded file. Did the file make it to its end destination? What was the state of the processing? It can be very hard to replay the steps to failure or know the state of transactions when the server is overloaded.

Fortunately, this particular problem turns out to be a great use case for serverless — as you can eliminate the scaling issues entirely. For mobile and web apps with unpredictable demand, you can simply allow the application to upload the file directly to S3. This has the added benefit of enabling an https endpoint for the upload, which is critical for keeping the file’s contents secure in transit.

All this sounds great — but how does this work in practice when the server is no longer there to do the authentication and intermediary legwork?
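
One common approach is to have a small Lambda function hand the client a short-lived presigned URL so the browser can upload straight to S3 over HTTPS. The rough Python sketch below illustrates that idea; the bucket name, key scheme, and size limit are assumptions, not details from the linked post:

```python
import json
import uuid
import boto3

s3 = boto3.client("s3")
BUCKET = "my-upload-bucket"  # assumed bucket name, for illustration only

def handler(event, context):
    """Return a presigned POST so the browser can upload directly to S3."""
    key = f"uploads/{uuid.uuid4()}"
    presigned = s3.generate_presigned_post(
        Bucket=BUCKET,
        Key=key,
        Conditions=[["content-length-range", 0, 10 * 1024 * 1024]],  # cap at 10 MB
        ExpiresIn=300,  # URL is valid for 5 minutes
    )
    # The client POSTs the file, plus the returned form fields, straight to S3.
    return {"statusCode": 200, "body": json.dumps({"key": key, **presigned})}
```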

https://read.acloud.guru/how-to-add-file-upload-features-to-your-website-with-aws-lambda-and-s3-48bbe9b83eaa

Throttling Third-Party API calls with AWS Lambda

In the serverless world, we often get the impression that our applications can scale without limits. With the right design (and enough money), this is theoretically possible. But in reality, many components of our serverless applications DO have limits. Whether these are physical limits, like network throughput or CPU capacity, or soft limits, like AWS Account Limits or third-party API quotas, our serverless applications still need to be able to handle periods of high load. And more importantly, our end users should experience minimal, if any, negative effects when we reach these thresholds.

There are many ways to add resiliency to our serverless applications, but this post is going to focus on dealing specifically with quotas in third-party APIs. We’ll look at how we can use a combination of SQS, CloudWatch Events, and Lambda functions to implement a precisely controlled throttling system. We’ll also discuss how you can implement (almost) guaranteed ordering, state management (for multi-tiered quotas), and how to plan for failure. Let’s get started!
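
As a rough illustration of that combination, the sketch below assumes a scheduled CloudWatch Events rule invokes a Lambda once a minute, which drains at most a quota's worth of messages from an SQS queue and forwards them to the third-party API. The queue URL, the per-minute quota, and call_third_party_api are placeholders, not the post's actual implementation:

```python
import os
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = os.environ["QUEUE_URL"]   # assumed environment variable
CALLS_PER_MINUTE = 10                 # illustrative third-party quota

def call_third_party_api(payload):
    """Placeholder for the actual rate-limited API call."""
    ...

def handler(event, context):
    """Invoked once a minute by a scheduled CloudWatch Events rule."""
    processed = 0
    while processed < CALLS_PER_MINUTE:
        resp = sqs.receive_message(
            QueueUrl=QUEUE_URL,
            MaxNumberOfMessages=min(10, CALLS_PER_MINUTE - processed),
            WaitTimeSeconds=1,
        )
        messages = resp.get("Messages", [])
        if not messages:
            break  # queue is empty; wait for the next scheduled run
        for msg in messages:
            call_third_party_api(msg["Body"])
            sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
            processed += 1
```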

https://www.jeremydaly.com/throttling-third-party-api-calls-with-aws-lambda/

Multi-region serverless backend 

In 2018, I wrote a series of blog posts on building a multi-region, active-active, serverless architecture on AWS [1, 2, 3 and 4]. The solution was built using DynamoDB Global Tables, Lambda, the regional API Gateway feature, and Route 53 routing policies. It worked well as a resiliency pattern and as a disaster recovery (DR) strategy. But there was an issue.

https://medium.com/@adhorn/multi-region-serverless-backend-reloaded-1b887bc615c0

AWS Lambda And Python Boto3: To Bundle Or Not Bundle With Your Function

In the engineering world a lot of our practices, even at times our best practices, are often just common wisdom passed along from one person to another. With Stack Overflow, Slack, and even Twitter, it’s easier today than it ever was for ideas to propagate. However, a lot of what passes for common wisdom is really just widely held opinions. And nothing says common wisdom has to be right. Where I ran into this distinction recently was with Python’s Boto3 modules (boto3 and botocore) and whether or not I should bundle them with my AWS Lambda deployment artifact.

Recently I found out the common wisdom I’ve adhered to was wrong. (Yes, someone on the internet was wrong.) Like many people, I use the Boto3 modules provided by the AWS Lambda runtime. However, after talking with several folks at AWS, I discovered that you should not be using the AWS Lambda runtime’s boto3 and botocore modules. And you shouldn’t use botocore’s vendored version of the requests module, no matter what instance of botocore you are using. I’ll explain how I found this out and explore why I’m probably not the only one who has gotten this best practice wrong.
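
A quick way to see which copy of Boto3 your function is actually running is to log the module versions from inside the handler; if they match what you pinned in your deployment artifact, you are using your bundled copy rather than the runtime's. A minimal sketch:

```python
import boto3
import botocore

def handler(event, context):
    # If these versions match the ones pinned in your deployment artifact
    # (e.g. in requirements.txt), you are running your bundled copy of Boto3;
    # if they don't, you've fallen back to the copy the Lambda runtime provides.
    return {
        "boto3": boto3.__version__,
        "botocore": botocore.__version__,
    }
```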

https://www.serverlessops.io/blog/aws-lambda-and-python-boto3-bundling

Using API Gateway WebSockets with the Serverless Framework

As we approach the end of 2018, I’m incredibly excited to announce that we at Serverless have a small gift for you: You can work with Amazon API Gateway WebSockets in your Serverless Framework applications starting right now.

But before we dive into the how-to, there are some interesting caveats that I want you to be aware of.

First, this is not supported in AWS CloudFormation just yet, though AWS has publicly stated it will be early next year! As such, we decided to implement our initial support as a plugin and keep it out of core until the official AWS CloudFormation support is added.

Second, the configuration syntax should be pretty close, but we make no promises that anything implemented with this plugin will carry forward after core support is added. And once core support is added with AWS CloudFormation, you will need to recreate your API Gateway resources so they are managed by CloudFormation. This means that any clients using your WebSocket application would need to be repointed, or DNS would need to be in place, to facilitate the cutover.

I recommend you check out my original post for a basic understanding of how WebSockets works at a technical level via connections and callbacks to the Amazon API Gateway connections management API.
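
For a flavor of the callback side of that flow, here is a minimal Python sketch that echoes a message back over a connection using boto3's apigatewaymanagementapi client; the endpoint environment variable and the echo payload are assumptions, not part of the plugin:

```python
import json
import os
import boto3

# The connections management endpoint looks like
# https://{api-id}.execute-api.{region}.amazonaws.com/{stage};
# here it is assumed to be supplied via an environment variable.
apigw = boto3.client(
    "apigatewaymanagementapi",
    endpoint_url=os.environ["WEBSOCKET_API_ENDPOINT"],
)

def handler(event, context):
    """Echo the incoming message back to the WebSocket connection that sent it."""
    connection_id = event["requestContext"]["connectionId"]
    apigw.post_to_connection(
        ConnectionId=connection_id,
        Data=json.dumps({"echo": json.loads(event.get("body") or "{}")}).encode(),
    )
    return {"statusCode": 200}
```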

With all that out of the way, play with our new presents!

https://serverless.com/blog/api-gateway-websockets-example/