Automating AWS Lambda Function Error Handling with AWS Step Functions

AWS Step Functions makes it easy to coordinate the components of distributed applications and microservices using visual workflows. You can scale and modify your applications quickly by building applications from individual components, each of which performs a discrete function.

You can use Step Functions to create state machines, which orchestrate multiple AWS Lambda functions to build multi-step serverless applications. Sometimes a Lambda function returns an error. Whether the error is an exception raised by the developer's code (e.g., file not found) or something unexpected (e.g., out of memory), Step Functions lets you respond with conditional logic based on the type of error, a capability known as function error handling.
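As a rough illustration of the idea (this is a sketch, not code from the linked post), the state machine below has a Task state that catches a specific function error plus a catch-all, and is created with boto3. The function ARN, role ARN, state names, and error names are placeholders; a Python Lambda function's exception class name is typically what appears as the error name.

```python
# Sketch: a Task state with Catch clauses, created via boto3. All ARNs and
# error names below are placeholders for illustration only.
import json
import boto3

definition = {
    "StartAt": "ProcessFile",
    "States": {
        "ProcessFile": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:ProcessFile",
            "Catch": [
                {
                    # Matches a specific exception raised by the function code
                    "ErrorEquals": ["FileNotFoundError"],
                    "Next": "HandleMissingFile"
                },
                {
                    # Catch-all for anything else (out of memory, timeout, ...)
                    "ErrorEquals": ["States.ALL"],
                    "Next": "NotifyFailure"
                }
            ],
            "End": True
        },
        "HandleMissingFile": {"Type": "Pass", "Result": "missing-file", "End": True},
        "NotifyFailure": {"Type": "Fail", "Error": "UnhandledError", "Cause": "See execution history"}
    }
}

sfn = boto3.client("stepfunctions")
sfn.create_state_machine(
    name="file-processing-demo",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/StepFunctionsExecutionRole"
)
```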

https://aws.amazon.com/blogs/compute/automating-aws-lambda-function-error-handling-with-aws-step-functions

Best practices – AWS Lambda function

In this post we go through best practices for building an AWS Lambda function. To follow along, you should know what Lambda is and ideally have built at least a simple function, so you have your bearings; you can check out this blog post of mine, where I take you step by step through building your first AWS Lambda function.

Understanding how AWS Lambda scales

The AWS Lambda service is, in essence, infrastructure that is allocated to your function on demand. When demand increases, new infrastructure is automatically created internally to execute your function. You define the size of each unit of infrastructure when you create the function: AWS lets you select the memory for the function, and CPU allocation is directly proportional to the memory you choose. In other words, if 128 MB of memory gives you x CPU, then 256 MB gives you 2x.

Lambda scales on the basis of units of work, and what constitutes a unit of work varies with the Lambda event source. Each unit of work is executed on dedicated infrastructure; that infrastructure can be reused for subsequent calls, but not while the current call is executing. You pay only for the duration of execution of individual requests, in proportion to the memory allocated to the function.
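Because CPU scales with memory, the memory setting is the main dial for both per-invocation compute and cost. A minimal boto3 sketch of adjusting it (the function name and values are placeholders):

```python
# Sketch: raising a function's memory (and therefore its CPU share) with boto3.
# "my-function" is a placeholder name.
import boto3

lambda_client = boto3.client("lambda")
lambda_client.update_function_configuration(
    FunctionName="my-function",
    MemorySize=256,   # MB; CPU is allocated proportionally to this value
    Timeout=30        # seconds
)
```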


New – Manage DynamoDB Items Using Time to Live (TTL)

AWS customers are making great use of Amazon DynamoDB. They love the speed and flexibility and build Ad Tech (reference architecture), Gaming (reference architecture), IoT (reference architecture), and other applications that take advantage of the consistent, single-digit millisecond latency. They also love the fact that DynamoDB is a managed, serverless database that scales to handle millions of requests per second to tables that are many terabytes in size.

Many DynamoDB users store data that has a limited useful life or is accessed less frequently over time. Some of them track recent logins, trial subscriptions, or application metrics. Others store data that is subject to regulatory or contractual limitations on how long it can be stored. Until now, these customers implemented their own time-based data management. At scale, this sometimes meant that they ran a couple of Amazon Elastic Compute Cloud (EC2) instances that did nothing more than scan DynamoDB items, check date attributes, and issue delete requests for items that were no longer needed. This added cost and complexity to their application.

New Time to Live (TTL) Management
In order to streamline this popular and important use case, we are launching a new Time to Live (TTL) feature today. You can enable this feature on a table-by-table basis, specifying an item attribute that contains the expiration time for the item.

Once the attribute has been specified and TTL management has been enabled (a single API call takes care of both operations), DynamoDB will find and delete items that have expired. This processing takes place automatically and in the background and does not affect read or write traffic to the table.

You can use DynamoDB streams (see Sneak Preview – DynamoDB Streams for more info) to process or archive the actual deletions. Like other update records in a stream, the deletions are available on a rolling 24-hour basis. You can move the expired items to cold storage, log them, or update other tables using AWS Lambda and DynamoDB Triggers.
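A minimal sketch (not from the announcement) of such a stream-triggered Lambda handler that archives TTL-expired items to S3: it assumes the stream is configured to emit old images, and the bucket name and key layout are placeholders. TTL deletions appear as REMOVE records whose userIdentity principal is the DynamoDB service.

```python
# Sketch: archive items removed by TTL from a DynamoDB stream to S3.
# Assumes the stream view type includes OldImage; bucket name is a placeholder.
import json
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    for record in event.get("Records", []):
        identity = record.get("userIdentity", {})
        if (record.get("eventName") == "REMOVE"
                and identity.get("principalId") == "dynamodb.amazonaws.com"):
            expired_item = record["dynamodb"].get("OldImage", {})
            s3.put_object(
                Bucket="my-archive-bucket",                 # placeholder bucket
                Key=f"expired/{record['eventID']}.json",
                Body=json.dumps(expired_item).encode("utf-8")
            )
```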

You enable TTL for a table and specify the desired attribute in the DynamoDB console.

The attribute must use DynamoDB's Number data type and is interpreted as a Unix epoch timestamp in seconds.

When you enable TTL in the console, you can also enable DynamoDB Streams, and you can preview the items that will be deleted once TTL takes effect.

You can also call the UpdateTimeToLive function from your code, or you can use the update-time-to-live command from the AWS Command Line Interface (CLI).
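For example, here is a small boto3 sketch that enables TTL on a table and writes an item that expires after 24 hours; the table name, key schema, and attribute name are placeholders.

```python
# Sketch: enable TTL on a table and write an item that expires in 24 hours.
# Table name, key, and the "ttl" attribute name are placeholders.
import time
import boto3

dynamodb = boto3.client("dynamodb")

# A single call enables TTL and names the expiration attribute
dynamodb.update_time_to_live(
    TableName="sessions",
    TimeToLiveSpecification={"Enabled": True, "AttributeName": "ttl"}
)

# The attribute is a Number holding a Unix epoch time in seconds
expires_at = int(time.time()) + 24 * 60 * 60
dynamodb.put_item(
    TableName="sessions",
    Item={
        "session_id": {"S": "abc123"},
        "ttl": {"N": str(expires_at)}
    }
)
```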

https://aws.amazon.com/pt/blogs/aws/new-manage-dynamodb-items-using-time-to-live-ttl/

Amazon EBS Update – New Elastic Volumes Change Everything

New Elastic Volumes
Today we are launching a new EBS feature we call Elastic Volumes and making it available for all current-generation EBS volumes attached to current-generation EC2 instances. You can now increase volume size, adjust performance, or change the volume type while the volume is in use. You can continue to use your application while the change takes effect.

This new feature will greatly simplify (or even eliminate) many of your planning, tuning, and space management chores. Instead of a traditional provisioning cycle that can take weeks or months, you can make changes to your storage infrastructure instantaneously, with a simple API call.

You can address the following scenarios (and many more that you can come up with on your own) using Elastic Volumes:

Changing Workloads – You set up your infrastructure in a rush and used the General Purpose SSD volumes for your block storage. After gaining some experience you figure out that the Throughput Optimized volumes are a better fit, and simply change the type of the volume.

Spiking Demand – You are running a relational database on a Provisioned IOPS volume that is set to handle a moderate amount of traffic during the month, with a 10x spike in traffic during the final three days of each month due to month-end processing. You can use Elastic Volumes to dial up the provisioning in order to handle the spike, and then dial it down afterward.

Increasing Storage – You provisioned a volume for 100 GiB and an alarm goes off indicating that it is now at 90% of capacity. You increase the size of the volume and expand the file system to match, with no downtime, and in a fully automated fashion.
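The "simple API call" behind these scenarios is ModifyVolume. A rough boto3 sketch (the volume ID and values are placeholders) that grows a volume and moves it to Throughput Optimized HDD while it stays attached and in use:

```python
# Sketch: modify an EBS volume in place with Elastic Volumes.
# The volume ID is a placeholder.
import boto3

ec2 = boto3.client("ec2")

# Increase the size and change the type in a single in-place modification
ec2.modify_volume(
    VolumeId="vol-0123456789abcdef0",
    Size=200,            # GiB
    VolumeType="st1"     # e.g. move to Throughput Optimized HDD
)

# Optionally check the modification state before expanding the file system
resp = ec2.describe_volumes_modifications(VolumeIds=["vol-0123456789abcdef0"])
print(resp["VolumesModifications"][0]["ModificationState"])
```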

https://aws.amazon.com/blogs/aws/amazon-ebs-update-new-elastic-volumes-change-everything/

Configure dead letter queues in Lambda for more reliable event processing

By default, a failed Lambda function invoked asynchronously is retried twice, and then the event is discarded. Using Dead Letter Queues (DLQ), you can indicate to Lambda that unprocessed events should be sent to an Amazon SQS queue or Amazon SNS topic instead, where you can take further action.

You configure a DLQ by setting a Lambda function's DeadLetterConfig parameter to the Amazon Resource Name (ARN) of the Amazon SNS topic or Amazon SQS queue where you want the event payload delivered, as shown in the following code. For more information about creating an Amazon SNS topic, see Create an SNS Topic. For more information about creating an Amazon SQS queue, see Tutorial: Creating an Amazon SQS Queue.
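The documentation's own example is not reproduced here; as a stand-in, this boto3 sketch points a function's DeadLetterConfig at an SQS queue (an SNS topic ARN works the same way), with placeholder names:

```python
# Sketch: route unprocessed async invocation events to an SQS dead letter queue.
# Function name and queue ARN are placeholders.
import boto3

lambda_client = boto3.client("lambda")
lambda_client.update_function_configuration(
    FunctionName="my-function",
    DeadLetterConfig={
        "TargetArn": "arn:aws:sqs:us-east-1:123456789012:my-function-dlq"
    }
)
```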

http://docs.aws.amazon.com/lambda/latest/dg/dlq.html

Writing a cron job microservice with Serverless and AWS Lambda

We recently had a situation where we needed to create a new cron job to fetch all users from our database who are coming to the end of their trial and insert them into our customer.io database. Cron jobs are easy to write, but difficult to set up. You can edit /etc/crontab on the server; if you're using Heroku you can use their Scheduler; or you can use some implementation of cron in your programming language of choice (e.g., Node.js).

The cron job that we needed to write was unrelated to our application code, so while we could have put the functionality in there, it seemed like the wrong place. Alternatively, we could have put the code onto a new server, but that would mean provisioning a new box for something that only runs once a day for about 10 seconds, which seems wasteful and expensive.
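What such a function might look like is sketched below; this is not code from the post. The Serverless Framework (or an EventBridge/CloudWatch Events cron rule) would invoke the handler once a day, the database query is a stub, and the customer.io call assumes its Track API with site-ID/API-key credentials read from environment variables.

```python
# Hypothetical sketch of the scheduled handler; the query helper is a stub and
# the customer.io call assumes its Track API (PUT /api/v1/customers/{id}).
# The "requests" library must be packaged with the deployment.
import os
import requests

def fetch_users_with_expiring_trials():
    """Stub: replace with a real query against your own user database."""
    return [{"id": "42", "email": "user@example.com", "trial_ends_at": "2017-03-01"}]

def handler(event, context):
    users = fetch_users_with_expiring_trials()
    for user in users:
        # Upsert the user's profile in customer.io
        requests.put(
            f"https://track.customer.io/api/v1/customers/{user['id']}",
            auth=(os.environ["CIO_SITE_ID"], os.environ["CIO_API_KEY"]),
            json={"email": user["email"], "trial_ends_at": user["trial_ends_at"]},
            timeout=10,
        )
    return {"processed": len(users)}
```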

https://blog.readme.io/writing-a-cron-job-microservice-with-serverless-and-aws-lambda/

Serverless at re:Invent 2016 – Wrap-up

The re:Invent 2016 conference was an exciting week to be working on serverless at AWS. We announced new features like support for C# and dead letter queues, and launched new application constructs with Lambda such as Lambda@Edge, AWS Greengrass, Amazon Lex, and AWS Step Functions. We also added support for surfacing services built using API Gateway in the AWS Marketplace, expanded the capabilities for custom authorizers, and launched a reference developer portal for managing APIs. Catch up on all the great re:Invent launches here.

In addition to the serverless mini-con with deep dive talks and best practices, we also had deep customer talks by folks from Thomson Reuters, Vevo, Expedia, and FINRA. If you weren’t able to attend the mini-con or missed a specific session, here is a quick link to the entire Serverless Mini Conference Playlist. Other interesting sessions from other tracks are listed below.

Individual Sessions from the Mini Conference

Other Interesting Sessions

https://aws.amazon.com/blogs/compute/serverless-at-reinvent-2016-wrap-up

Five Reasons to Consider Amazon API Gateway for Your Next Microservices Project

APIs have become an integral part of application design, and architects and developers are spending significant time designing the API tier. Netflix, one of the early adopters of polyglot services and APIs, shared some of the advantages of implementing an API layer in its services architecture. Chris Richardson, the founder of the original Cloud Foundry and an expert in microservices, has articulated the importance of the API Gateway pattern. According to Chris, not only does the API gateway optimize communication between clients and the application, but it also encapsulates the details of the microservices.

The post includes two architecture diagrams: one before implementing an API gateway and one after.

http://thenewstack.io/five-reasons-to-consider-amazon-api-gateway-for-your-next-microservices-project/

Get Started with Amazon Elasticsearch Service: How Many Data Instances Do I Need?

Welcome to the first in a series of blog posts about Elasticsearch and Amazon Elasticsearch Service, where we will provide the information you need to get started with Elasticsearch on AWS.

How many instances will you need?
When you create an Amazon Elasticsearch Service domain, this is one of the first questions to answer.

https://aws.amazon.com/blogs/database/get-started-with-amazon-elasticsearch-service-how-many-data-instances-do-i-need