My AWS Wishlist for 2017

As a developer working on a 100% serverless application, I find myself wanting more so I can do with less…

Amazon Web Services (AWS) is well known for listening to customer feedback. This has been evident in the features they have delivered for their Serverless platform.

But as a developer working on a 100% serverless application, I find myself wanting more. Unfortunately I can’t fit all my requests into 140 characters. So I decided to write a blog post instead.

https://read.acloud.guru/my-aws-wishlist-for-2017-8c55a7b7b475

New AWS Encryption SDK for Python Simplifies Multiple Master Key Encryption

The AWS Cryptography team is happy to announce a Python implementation of the AWS Encryption SDK. This new SDK helps manage data keys for you, and it simplifies the process of encrypting data under multiple master keys. As a result, this new SDK allows you to focus on the code that drives your business forward. It also provides a framework you can easily extend to ensure that you have a cryptographic library that is configured to match and enforce your standards. The SDK also includes ready-to-use examples. If you are a Java developer, you can refer to this blog post to see specific Java examples for the SDK.

In this blog post, I show you how you can use the AWS Encryption SDK to simplify the process of encrypting data and how to protect your encryption keys in ways that help improve application availability by not tying you to a single region or key management solution.

How does the AWS Encryption SDK help me?

Developers using encryption often face three problems:

  1. How do I correctly generate and use a data key to encrypt data?
  2. How do I protect the data key after it has been used?
  3. How do I store the data key and ciphertext in a portable manner?

The library provided in the AWS Encryption SDK addresses the first problem by implementing the low-level envelope encryption details transparently using the cryptographic provider available in your development environment. The library helps address the second problem by providing intuitive interfaces to let you choose how you want to generate data keys and the master keys or key-encrypting keys that will protect data keys. Developers can then focus on the core of the application they are building instead of on the complexities of encryption. The ciphertext addresses the third problem, as described later in this post.

The AWS Encryption SDK defines a carefully designed and reviewed ciphertext data format that supports multiple secure algorithm combinations (with room for future expansion) and has no limits on the types or algorithms of the master keys. The ciphertext output of clients (created with the SDK) is a single binary blob that contains your encrypted message and one or more copies of the data key, as encrypted by each master key referenced in the encryption request. This single ciphertext data format for envelope-encrypted data makes it easier to ensure the data key has the same durability and availability properties as the encrypted message itself.

The AWS Encryption SDK provides production-ready reference implementations in Java and Python with direct support for key providers such as AWS Key Management Service (KMS). The Java implementation also supports the Java Cryptography Architecture (JCA/JCE) natively, which includes support for AWS CloudHSM and other PKCS #11 devices. The standard ciphertext data format the AWS Encryption SDK defines means that you can use combinations of the Java and Python clients for encryption and decryption as long as they each have access to the key provider that manages the correct master key used to encrypt the data key.

Let’s look at how you can use the AWS Encryption SDK to simplify the process of encrypting data and how to protect your data keys in ways that help improve application availability by not tying you to a single region or key management solution.
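For illustration, here is a minimal sketch of that workflow in Python, assuming the aws_encryption_sdk package and its top-level encrypt/decrypt helpers with a KMSMasterKeyProvider; the KMS key ARNs are placeholders, and the two keys could live in different regions so that either one is sufficient to decrypt the message:

    # A minimal sketch of envelope encryption under two KMS master keys,
    # using the aws_encryption_sdk package (pip install aws-encryption-sdk).
    # The key ARNs below are placeholders.
    import aws_encryption_sdk

    # Each master key listed here protects its own encrypted copy of the
    # data key, so the resulting ciphertext can be decrypted with either one.
    key_provider = aws_encryption_sdk.KMSMasterKeyProvider(key_ids=[
        'arn:aws:kms:us-east-1:111122223333:key/example-key-id-1',
        'arn:aws:kms:eu-west-1:111122223333:key/example-key-id-2',
    ])

    plaintext = b'my secret data'

    # Encrypt: the result is a single binary blob containing the encrypted
    # message plus one encrypted data key per master key.
    ciphertext, encrypt_header = aws_encryption_sdk.encrypt(
        source=plaintext,
        key_provider=key_provider,
    )

    # Decrypt: any client with access to at least one of the master keys
    # can recover the plaintext from the same ciphertext.
    decrypted, decrypt_header = aws_encryption_sdk.decrypt(
        source=ciphertext,
        key_provider=key_provider,
    )
    assert decrypted == plaintext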

https://aws.amazon.com/blogs/security/new-aws-encryption-sdk-for-python-simplifies-multiple-master-key-encryption/

CloudWatch Events Now Supports AWS Step Functions as a Target

The Amazon CloudWatch Events service now supports AWS Step Functions state machines as event targets. Amazon CloudWatch Events enables you to respond quickly to application availability issues or configuration changes that might impact performance or security by notifying you of AWS resource changes in near-real-time. You simply write rules to indicate which events are of interest to your application and what automated action to take when a rule matches an event. You can, for example, invoke AWS Lambda functions or notify an Amazon Simple Notification Service (SNS) topic. Now, you can also send the matching events to an AWS Step Functions state machine to start a workflow responding to the event of interest, such as managing copies of Amazon Elastic Block Store (EBS) snapshots upon snapshot completion.

You can also schedule execution of AWS Step Functions state machines at intervals as short as one minute to automate processes such as synchronizing S3 buckets nightly.
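As a rough sketch using boto3 (the rule name, state machine ARN, and IAM role ARN below are placeholders), wiring a scheduled rule to a state machine target looks something like this:

    # Sketch: a CloudWatch Events rule that fires nightly and starts a
    # Step Functions state machine. All names and ARNs are placeholders.
    import boto3

    events = boto3.client('events')

    # Scheduled rule; schedules can also be expressed as rates, e.g. 'rate(1 minute)'.
    events.put_rule(
        Name='nightly-s3-sync',
        ScheduleExpression='cron(0 2 * * ? *)',  # every day at 02:00 UTC
        State='ENABLED',
    )

    # Point the rule at the state machine. RoleArn must reference an IAM role
    # that allows CloudWatch Events to call states:StartExecution.
    events.put_targets(
        Rule='nightly-s3-sync',
        Targets=[{
            'Id': 'sync-state-machine',
            'Arn': 'arn:aws:states:us-east-1:111122223333:stateMachine:S3SyncWorkflow',
            'RoleArn': 'arn:aws:iam::111122223333:role/CloudWatchEventsStepFunctionsRole',
        }],
    )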

AWS Step Functions is available in the US East (N. Virginia), US East (Ohio), US West (Oregon), EU (Ireland), EU (Frankfurt), and Asia Pacific (Tokyo) regions.

Please visit our website for more information on Amazon CloudWatch Events and AWS Step Functions:

https://aws.amazon.com/about-aws/whats-new/2017/03/cloudwatch-events-now-supports-aws-step-functions-as-a-target

Automating AWS Lambda Function Error Handling with AWS Step Functions

AWS Step Functions makes it easy to coordinate the components of distributed applications and microservices using visual workflows. You can scale and modify your applications quickly by building applications from individual components, each of which performs a discrete function.

You can use Step Functions to create state machines, which orchestrate multiple AWS Lambda functions to build multi-step serverless applications. In certain cases, a Lambda function returns an error. Regardless of whether the error is a function exception created by the developer (e.g., file not found), or unpredicted (e.g., out of memory), Step Functions allows you to respond with conditional logic based on the type of error message in the form of function error handling.
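As an illustration of what such error handling can look like, here is a hedged sketch of a state machine definition expressed as a Python dict and created with boto3; the Lambda ARNs, the custom FileNotFound error name, and the role ARN are placeholders of my own, not taken from the post:

    # Sketch: retry a Lambda task on a developer-defined error, then fall
    # back to a recovery state if the error persists or something
    # unpredicted happens. All ARNs and error names are placeholders.
    import json
    import boto3

    definition = {
        "StartAt": "ProcessFile",
        "States": {
            "ProcessFile": {
                "Type": "Task",
                "Resource": "arn:aws:lambda:us-east-1:111122223333:function:ProcessFile",
                "Retry": [{
                    "ErrorEquals": ["FileNotFound"],   # function exception created by the developer
                    "IntervalSeconds": 2,
                    "MaxAttempts": 3,
                    "BackoffRate": 2.0,
                }],
                "Catch": [{
                    "ErrorEquals": ["States.ALL"],     # anything else, e.g. out of memory
                    "Next": "HandleFailure",
                }],
                "End": True,
            },
            "HandleFailure": {
                "Type": "Task",
                "Resource": "arn:aws:lambda:us-east-1:111122223333:function:HandleFailure",
                "End": True,
            },
        },
    }

    sfn = boto3.client('stepfunctions')
    sfn.create_state_machine(
        name='FileProcessingWorkflow',
        definition=json.dumps(definition),
        roleArn='arn:aws:iam::111122223333:role/StepFunctionsExecutionRole',
    )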

https://aws.amazon.com/blogs/compute/automating-aws-lambda-function-error-handling-with-aws-step-functions

Best practices – AWS Lambda function

In this post we are going to go through the best practices for building an AWS Lambda function. To follow along, you should know what AWS Lambda is and ideally have built at least a simple function so you have your bearings right; if you haven't, you can check out this blog post of mine where I take you step by step through building your first AWS Lambda function.

Understanding how AWS Lambda scales

The AWS Lambda service is, simply put, infrastructure that gets allocated to your function on demand, as needed. When demand increases, new infrastructure is automatically created internally to execute your function. You define the size of this unit of infrastructure when you create the function: AWS lets you select the memory for the function, and CPU allocation is directly proportional to the memory you choose. In other words, if 128 MB of memory gets you x CPU, choosing 256 MB gets you 2x.
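As a small aside (not from the post), the memory setting, and with it the proportional CPU share, can be adjusted with a single boto3 call; the function name below is a placeholder:

    # Sketch: bump a function's memory (and, proportionally, its CPU share)
    # from 128 MB to 256 MB. The function name is a placeholder.
    import boto3

    aws_lambda = boto3.client('lambda')
    aws_lambda.update_function_configuration(
        FunctionName='my-function',
        MemorySize=256,  # MB; CPU allocation scales with this value
    )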

Lambda scales on the basis of a unit of work, and what constitutes a unit of work varies with the Lambda event source type. Each unit of work is executed on dedicated infrastructure; that infrastructure is open for re-use by subsequent calls, but not while the current call is executing. You pay only for the duration of execution of individual requests, proportional to the memory allocation of the function.
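To make the re-use behavior concrete, here is a minimal handler sketch of my own: module-level state survives between sequential invocations served by the same warm container, but a concurrently executing request gets its own container and its own copy of that state.

    # Sketch: observing container re-use. The module-level counter persists
    # across sequential invocations served by the same warm container, while
    # each concurrently executing request runs on separate infrastructure
    # with its own counter.
    invocation_count = 0

    def handler(event, context):
        global invocation_count
        invocation_count += 1
        return {'invocations_on_this_container': invocation_count}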


New – Manage DynamoDB Items Using Time to Live (TTL)

AWS customers are making great use of Amazon DynamoDB. They love the speed and flexibility and build Ad Tech (reference architecture), Gaming (reference architecture), IoT (reference architecture), and other applications that take advantage of the consistent, single-digit millisecond latency. They also love the fact that DynamoDB is a managed, serverless database that scales to handle millions of requests per second to tables that are many terabytes in size.

Many DynamoDB users store data that has a limited useful life or is accessed less frequently over time. Some of them track recent logins, trial subscriptions, or application metrics. Others store data that is subject to regulatory or contractual limitations on how long it can be stored. Until now, these customers implemented their own time-based data management. At scale, this sometimes meant that they ran a couple of Amazon Elastic Compute Cloud (EC2) instances that did nothing more than scan DynamoDB items, check date attributes, and issue delete requests for items that were no longer needed. This added cost and complexity to their application.

New Time to Live (TTL) Management
In order to streamline this popular and important use case, we are launching a new Time to Live (TTL) feature today. You can enable this feature on a table-by-table basis, specifying an item attribute that contains the expiration time for the item.

Once the attribute has been specified and TTL management has been enabled (a single API call takes care of both operations), DynamoDB will find and delete items that have expired. This processing takes place automatically and in the background and does not affect read or write traffic to the table.

You can use DynamoDB streams (see Sneak Preview – DynamoDB Streams for more info) to process or archive the actual deletions. Like other update records in a stream, the deletions are available on a rolling 24-hour basis. You can move the expired items to cold storage, log them, or update other tables using AWS Lambda and DynamoDB Triggers.

In the DynamoDB console, you enable TTL for a table by specifying the desired attribute.

The attribute must use DynamoDB's Number data type and is interpreted as a timestamp in seconds since the Unix epoch.

In the console you can also enable DynamoDB Streams, and you can preview the items that will be deleted when you enable TTL.

You can also call the UpdateTimeToLive function from your code, or you can use the update-time-to-live command from the AWS Command Line Interface (CLI).
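As a sketch of both operations with boto3 (the table and attribute names are placeholders), enabling TTL and writing an item with an epoch-seconds expiration attribute might look like this:

    # Sketch: enable TTL on a table and write an item whose 'expires_at'
    # attribute holds an epoch-seconds timestamp. Names are placeholders.
    import time
    import boto3

    dynamodb = boto3.client('dynamodb')

    # One call enables TTL management and names the expiration attribute.
    dynamodb.update_time_to_live(
        TableName='sessions',
        TimeToLiveSpecification={
            'Enabled': True,
            'AttributeName': 'expires_at',
        },
    )

    # Items carry a Number attribute with the expiration time in seconds
    # since the Unix epoch; DynamoDB deletes expired items in the background.
    dynamodb.put_item(
        TableName='sessions',
        Item={
            'session_id': {'S': 'abc123'},
            'expires_at': {'N': str(int(time.time()) + 7 * 24 * 3600)},  # one week out
        },
    )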

https://aws.amazon.com/pt/blogs/aws/new-manage-dynamodb-items-using-time-to-live-ttl/

Amazon EBS Update – New Elastic Volumes Change Everything

New Elastic Volumes
Today we are launching a new EBS feature we call Elastic Volumes and making it available for all current-generation EBS volumes attached to current-generation EC2 instances. You can now increase volume size, adjust performance, or change the volume type while the volume is in use. You can continue to use your application while the change takes effect.

This new feature will greatly simplify (or even eliminate) many of your planning, tuning, and space management chores. Instead of a traditional provisioning cycle that can take weeks or months, you can make changes to your storage infrastructure instantaneously, with a simple API call.

You can address the following scenarios (and many more that you can come up with on your own) using Elastic Volumes:

Changing Workloads – You set up your infrastructure in a rush and used the General Purpose SSD volumes for your block storage. After gaining some experience you figure out that the Throughput Optimized volumes are a better fit, and simply change the type of the volume.

Spiking Demand – You are running a relational database on a Provisioned IOPS volume that is set to handle a moderate amount of traffic during the month, with a 10x spike in traffic during the final three days of each month due to month-end processing. You can use Elastic Volumes to dial up the provisioning in order to handle the spike, and then dial it down afterward.

Increasing Storage – You provisioned a volume for 100 GiB and an alarm goes off indicating that it is now at 90% of capacity. You increase the size of the volume and expand the file system to match, with no downtime, and in a fully automated fashion.
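As a rough boto3 sketch covering the spiking-demand and increasing-storage scenarios above (the volume ID and numbers are placeholders), a live modification and its progress check could look like this:

    # Sketch: modify a volume's size, type, and provisioned IOPS while it is
    # in use, then poll the modification state. The volume ID is a placeholder.
    import boto3

    ec2 = boto3.client('ec2')

    # Grow a 100 GiB volume to 200 GiB, switch it to Provisioned IOPS, and
    # dial performance up for month-end processing.
    ec2.modify_volume(
        VolumeId='vol-0123456789abcdef0',
        Size=200,
        VolumeType='io1',
        Iops=5000,
    )

    # Track progress; once the resize completes you still need to expand the
    # file system on the instance to use the extra space.
    response = ec2.describe_volumes_modifications(
        VolumeIds=['vol-0123456789abcdef0'],
    )
    print(response['VolumesModifications'][0]['ModificationState'])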

https://aws.amazon.com/blogs/aws/amazon-ebs-update-new-elastic-volumes-change-everything/