Protecting your API using Amazon API Gateway and AWS WAF — Part I

When you build web applications or expose any data externally, you probably look for a platform on which you can build highly scalable, secure, and robust REST APIs. Because APIs are publicly exposed, there are a number of best practices for giving consumers a secure way to use them.

Amazon API Gateway handles all the tasks involved in accepting and processing up to hundreds of thousands of concurrent API calls, including traffic management, authorization and access control, monitoring, and API version management.

In this post, I show you how to take advantage of the regional API endpoint feature in API Gateway, so that you can create your own Amazon CloudFront distribution and secure your API using AWS WAF.

AWS WAF is a web application firewall that helps protect your web applications from common web exploits that could affect application availability, compromise security, or consume excessive resources.
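To make the idea concrete, here is a rough boto3 sketch of the pattern the post describes: create the REST API with a regional endpoint, then put your own CloudFront distribution in front of it with an AWS WAF web ACL attached. The API name, stage name, and web ACL ID below are placeholders, not values from the article, and the cache/forwarding settings would need tuning for a real API.

```python
import time
import boto3

apigw = boto3.client("apigateway", region_name="us-east-1")
cloudfront = boto3.client("cloudfront")

# Create a REST API with a regional endpoint instead of the default
# edge-optimized one, so we can front it with our own CloudFront distribution.
api = apigw.create_rest_api(
    name="my-regional-api",                      # placeholder API name
    endpointConfiguration={"types": ["REGIONAL"]},
)

# Regional invoke hostname; the "prod" stage is assumed to exist once
# resources and a deployment have been created.
origin_domain = f"{api['id']}.execute-api.us-east-1.amazonaws.com"

distribution = cloudfront.create_distribution(
    DistributionConfig={
        "CallerReference": str(time.time()),
        "Comment": "CloudFront in front of a regional API Gateway endpoint",
        "Enabled": True,
        "Origins": {
            "Quantity": 1,
            "Items": [{
                "Id": "regional-api",
                "DomainName": origin_domain,
                "OriginPath": "/prod",           # assumed stage name
                "CustomOriginConfig": {
                    "HTTPPort": 80,
                    "HTTPSPort": 443,
                    "OriginProtocolPolicy": "https-only",
                },
            }],
        },
        "DefaultCacheBehavior": {
            "TargetOriginId": "regional-api",
            "ViewerProtocolPolicy": "https-only",
            "AllowedMethods": {
                "Quantity": 7,
                "Items": ["GET", "HEAD", "OPTIONS", "PUT", "POST", "PATCH", "DELETE"],
                "CachedMethods": {"Quantity": 2, "Items": ["GET", "HEAD"]},
            },
            "ForwardedValues": {"QueryString": True, "Cookies": {"Forward": "none"}},
            "TrustedSigners": {"Enabled": False, "Quantity": 0},
            "MinTTL": 0,
        },
        # Attach an existing AWS WAF web ACL (placeholder ID; for WAFv2 this
        # field takes the web ACL ARN instead) so requests are filtered
        # before they ever reach the API.
        "WebACLId": "example-waf-web-acl-id",
    }
)
print(distribution["Distribution"]["DomainName"])
```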

As you make your APIs publicly available, you are exposed to attackers trying to exploit your services in several ways. The AWS security team has published a whitepaper describing how to use AWS WAF to mitigate OWASP's Top 10 web application vulnerabilities.

https://aws.amazon.com/pt/blogs/compute/protecting-your-api-using-amazon-api-gateway-and-aws-waf-part-i/

How to create a REST API with pre-written Serverless Components

You might have already heard about our new project, Serverless Components. Our goal was to encapsulate common functionality into so-called “components”, which could then be easily re-used, extended and shared with other developers and other serverless applications.

In this post, I’m going to show you how to compose a fully-fledged, REST API-powered application, all by using several pre-built components from the component registry.

https://serverless.com/blog/how-create-rest-api-serverless-components/

Build a serverless multi-region, active-active backend solution in an hour

In the previous posts, we explored availability and reliability and the needs and means of building a multi-region, active-active architecture on AWS. In this blog post, I will walk you through the steps needed to build and deploy a serverless multi-region, active-active backend.

This still blows my mind: I can explain this in a single blog post, and you should be able to deploy a fully functional backend in about an hour. A few years ago, it would have required a lot more expertise, work, time, and money!
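The post walks through the full build; as a taste of just the traffic-routing piece, here is a hedged boto3 sketch of latency-based DNS routing, one common way to spread clients across two regional API endpoints in an active-active setup. The hosted zone ID, record name, and per-region targets are assumptions for illustration, not values from the article.

```python
import boto3

route53 = boto3.client("route53")

HOSTED_ZONE_ID = "Z1EXAMPLE"          # assumed hosted zone for example.com
RECORD_NAME = "api.example.com."      # assumed public API hostname

# One endpoint hostname per region; both regions serve the same backend,
# so Route 53 can send each client to whichever is closer.
REGIONAL_ENDPOINTS = {
    "us-east-1": "abc123.execute-api.us-east-1.amazonaws.com",
    "eu-west-1": "def456.execute-api.eu-west-1.amazonaws.com",
}

changes = []
for region, target in REGIONAL_ENDPOINTS.items():
    changes.append({
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": RECORD_NAME,
            "Type": "CNAME",
            "SetIdentifier": f"api-{region}",  # one record set per region
            "Region": region,                  # enables latency-based routing
            "TTL": 60,
            "ResourceRecords": [{"Value": target}],
            # A real setup would also attach a HealthCheckId per record so
            # Route 53 fails traffic away from an unhealthy region.
        },
    })

route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={"Comment": "Active-active latency routing", "Changes": changes},
)
```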

https://read.acloud.guru/building-a-serverless-multi-region-active-active-backend-36f28bed4ecf

I’m afraid you’re thinking about AWS Lambda cold starts all wrong

When I discuss AWS Lambda cold starts with folks in the context of API Gateway, I often get responses along the lines of:

Meh, it’s only the first request right? So what if one request is slow, the next million requests would be fast.

Unfortunately, that is an oversimplification of what happens.

A cold start happens once for each concurrent execution of your function.

API Gateway reuses concurrent executions of your function that are already running where possible and, based on my observations, might even queue up requests for a short time in the hope that one of the concurrent executions finishes and can be reused.
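You can observe this behaviour yourself with a few lines of instrumentation. A minimal Python Lambda handler sketch (the wording and field names are mine, not the article's): anything at module scope runs once per execution environment, so each new concurrent execution logs exactly one cold start, while reused containers do not.

```python
import time

# Module-scope code runs once per execution environment (container), so this
# timestamp marks when this particular concurrent execution was initialized.
COLD_START_TIME = time.time()
IS_COLD = True  # flipped to False after the first invocation in this container


def handler(event, context):
    global IS_COLD
    cold = IS_COLD
    IS_COLD = False

    # Each container logs cold=True exactly once; under concurrent load you
    # will see one such line per concurrent execution, not one per deployment.
    print({
        "cold": cold,
        "container_age_seconds": round(time.time() - COLD_START_TIME, 3),
        "request_id": context.aws_request_id,
    })
    return {"statusCode": 200, "body": "ok"}
```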

https://hackernoon.com/im-afraid-you-re-thinking-about-aws-lambda-cold-starts-all-wrong-7d907f278a4f

How to add a custom domain to API Gateway with Serverless

With Serverless, it’s easier than ever to deploy production-ready API endpoints. However, using AWS API Gateway results in odd hostnames for your endpoints. Further, these hostnames will change if you remove and redeploy your service, which can cause problems for existing clients.

In this guide, I’ll show you how to map a custom domain name to your endpoints.

This post is the first in a three-part series. The next post will help you set up a Serverless web backend with Flask, a popular Python microframework. The last post will help you configure multiple Serverless services on the same domain name for maximum microservice awesomeness.
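For reference, the underlying API Gateway calls can be sketched with boto3 roughly as below, assuming you already have an ACM certificate; the domain name, certificate ARN, API ID, and stage are placeholders, and the guide itself walks through the Serverless-friendly way to wire this up.

```python
import boto3

apigw = boto3.client("apigateway", region_name="us-east-1")

# Register the custom domain with API Gateway. For an edge-optimized domain
# the ACM certificate must live in us-east-1; the ARN below is a placeholder.
domain = apigw.create_domain_name(
    domainName="api.example.com",
    certificateArn="arn:aws:acm:us-east-1:123456789012:certificate/EXAMPLE",
)

# Map the custom domain to a deployed stage of an existing REST API
# (restApiId and stage are assumed values; omitting basePath maps the root).
apigw.create_base_path_mapping(
    domainName="api.example.com",
    restApiId="abc123def4",
    stage="prod",
)

# API Gateway hands back a CloudFront distribution domain; point your DNS
# (for example a Route 53 alias or CNAME record) for api.example.com at it.
print(domain["distributionDomainName"])
```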

https://serverless.com/blog/serverless-api-gateway-domain/

Scaling your API with rate limiters

Availability and reliability are paramount for all web applications and APIs. If you’re providing an API, chances are you’ve already experienced sudden increases in traffic that affect the quality of your service, potentially even leading to a service outage for all your users.

The first few times this happens, it’s reasonable to just add more capacity to your infrastructure to accommodate user growth. However, when you’re running a production API, not only do you have to make it robust with techniques like idempotency, you also need to build for scale and ensure that one bad actor can’t accidentally or deliberately affect its availability.

Rate limiting can help make your API more reliable in the following scenarios:

  • One of your users is responsible for a spike in traffic, and you need to stay up for everyone else.
  • One of your users has a misbehaving script which is accidentally sending you a lot of requests. Or, even worse, one of your users is intentionally trying to overwhelm your servers.
  • A user is sending you a lot of lower-priority requests, and you want to make sure that it doesn’t affect your high-priority traffic. For example, users sending a high volume of requests for analytics data could affect critical transactions for other users.
  • Something in your system has gone wrong internally, and as a result you can’t serve all of your regular traffic and need to drop low-priority requests.

At Stripe, we’ve found that carefully implementing a few rate limiting strategies helps keep the API available for everyone. In this post, we’ll explain in detail which rate limiting strategies we find the most useful, how we prioritize some API requests over others, and how we started using rate limiters safely without affecting our existing users’ workflows.
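As a flavour of the core idea, here is a minimal in-memory token bucket sketch in Python, one common request-rate-limiting strategy. It is illustrative only, not Stripe's implementation; a production limiter would typically keep its counters in a shared store such as Redis so every API server sees the same state.

```python
import time
from collections import defaultdict


class TokenBucket:
    """Allow up to `rate` requests per second, with bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.updated = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens in proportion to the time elapsed since the last check.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # over the limit: the caller should reject the request


# One bucket per API key: 5 requests/second steady state, bursts of 10.
buckets = defaultdict(lambda: TokenBucket(rate=5, capacity=10))


def handle_request(api_key: str) -> int:
    if not buckets[api_key].allow():
        return 429  # Too Many Requests
    return 200
```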

https://stripe.com/blog/rate-limiters

Amazon API Gateway Integrates with AWS Certificate Manager (ACM)

You can now configure custom domains for your APIs on Amazon API Gateway using SSL/TLS certificates provisioned and managed by AWS Certificate Manager (ACM). Now, you can request a certificate from ACM and associate it with your API in minutes using the API Gateway Console, APIs, and CLI/SDKs. Previously, you needed to procure and upload your own SSL certificates to API Gateway in order to configure a custom domain for your APIs.

AWS Certificate Manager is a service that lets you easily provision, manage, and deploy Secure Sockets Layer/Transport Layer Security (SSL/TLS) certificates for use with AWS services. AWS Certificate Manager removes the time-consuming manual process of purchasing, uploading, and renewing SSL/TLS certificates. SSL/TLS certificates provisioned through AWS Certificate Manager are free. You pay only for the AWS resources you create to run your application.
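In boto3 terms, the flow looks roughly like the sketch below: request a certificate from ACM, then reference its ARN when creating the custom domain in API Gateway. The domain name is a placeholder, and the certificate must be validated and issued before API Gateway will accept it.

```python
import boto3

# For edge-optimized custom domains, the ACM certificate must be requested
# in us-east-1 (the region CloudFront reads certificates from).
acm = boto3.client("acm", region_name="us-east-1")
apigw = boto3.client("apigateway", region_name="us-east-1")

# Request a free ACM certificate for the API's custom domain (placeholder name).
cert = acm.request_certificate(
    DomainName="api.example.com",
    ValidationMethod="DNS",   # prove ownership via a DNS record
)
certificate_arn = cert["CertificateArn"]

# ... wait for the certificate to be validated and issued ...

# Associate the issued certificate with a custom domain in API Gateway;
# previously this step required uploading your own certificate body and key.
apigw.create_domain_name(
    domainName="api.example.com",
    certificateArn=certificate_arn,
)
```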


https://aws.amazon.com/about-aws/whats-new/2017/03/amazon-api-gateway-integrates-with-aws-certificate-manager-acm