Amazon DynamoDB Accelerator (DAX) – In-Memory Caching for Read-Intensive Workloads

I’m fairly sure that you already know about Amazon DynamoDB. As you probably know, it is a managed NoSQL database that scales to accommodate as much table space, read capacity, and write capacity as you need. With response times measured in single-digit milliseconds, our customers are using DynamoDB for many types of applications including adtech, IoT, gaming, media, online learning, travel, e-commerce, and finance. Some of these customers store more than 100 terabytes in a single DynamoDB table and make millions of read or write requests per second. The Amazon retail site relies on DynamoDB and uses it to withstand the traffic surges associated with brief, high-intensity events such as Black Friday, Cyber Monday, and Prime Day.

While DynamoDB’s ability to deliver fast, consistent performance benefits just about any application and workload, there’s always room to do even better. The business value of some workloads (gaming and adtech come to mind, but there are many others) is driven by low-latency, high-performance database reads. The ability to pull data from DynamoDB as quickly as possible leads to faster, more responsive games and to ads that drive higher click-through rates.
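
DAX addresses exactly those reads by putting a write-through, in-memory cache in front of a table, with a client that is API-compatible with the regular low-level DynamoDB client. As a minimal sketch, assuming the amazondax Python client library and a purely hypothetical cluster endpoint and table name, reading through DAX looks just like an ordinary GetItem call:

    # Sketch: read through a DAX cluster instead of hitting DynamoDB directly.
    # Assumes the amazondax package is installed; the endpoint and table name
    # below are hypothetical placeholders.
    import botocore.session
    from amazondax import AmazonDaxClient

    session = botocore.session.get_session()

    # The DAX client exposes the same low-level API as the DynamoDB client.
    dax = AmazonDaxClient(
        session,
        region_name='us-east-1',
        endpoints=['my-cluster.abc123.dax-clusters.us-east-1.amazonaws.com:8111'],
    )

    # Repeated reads of the same item are served from the in-memory cache.
    response = dax.get_item(
        TableName='GameScores',
        Key={'PlayerId': {'S': 'player-42'}},
    )
    print(response.get('Item'))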

https://aws.amazon.com/blogs/aws/amazon-dynamodb-accelerator-dax-in-memory-caching-for-read-intensive-workloads

My AWS Wishlist for 2017

As a developer working on a 100% serverless application, I find myself wanting more so I can do with less…

Amazon Web Services (AWS) is well known for listening to customer feedback. This has been evident in the features they have delivered for their Serverless platform.

But as a developer working on a 100% serverless application, I find myself wanting more. Unfortunately I can’t fit all my requests into 140 characters. So I decided to write a blog post instead.

https://read.acloud.guru/my-aws-wishlist-for-2017-8c55a7b7b475

New – Manage DynamoDB Items Using Time to Live (TTL)

AWS customers are making great use of Amazon DynamoDB. They love the speed and flexibility and build Ad Tech (reference architecture), Gaming (reference architecture), IoT (reference architecture), and other applications that take advantage of the consistent, single-digit millisecond latency. They also love the fact that DynamoDB is a managed, serverless database that scales to handle millions of requests per second to tables that are many terabytes in size.

Many DynamoDB users store data that has a limited useful life or is accessed less frequently over time. Some of them track recent logins, trial subscriptions, or application metrics. Others store data that is subject to regulatory or contractual limitations on how long it can be stored. Until now, these customers implemented their own time-based data management. At scale, this sometimes meant that they ran a couple of Amazon Elastic Compute Cloud (EC2) instances that did nothing more than scan DynamoDB items, check date attributes, and issue delete requests for items that were no longer needed. This added cost and complexity to their application.

New Time to Live (TTL) Management

In order to streamline this popular and important use case, we are launching a new Time to Live (TTL) feature today. You can enable this feature on a table-by-table basis, specifying an item attribute that contains the expiration time for the item.

Once the attribute has been specified and TTL management has been enabled (a single API call takes care of both operations), DynamoDB will find and delete items that have expired. This processing takes place automatically and in the background and does not affect read or write traffic to the table.

You can use DynamoDB Streams (see Sneak Preview – DynamoDB Streams for more info) to process or archive the actual deletions. Like other update records in a stream, the deletions are available on a rolling 24-hour basis. You can move the expired items to cold storage, log them, or update other tables using AWS Lambda and DynamoDB Triggers.
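
As a sketch of what that processing could look like, here is a minimal Lambda handler that inspects a stream batch and picks out the TTL-driven deletions; TTL-expired items are identifiable because DynamoDB marks their stream records with a service userIdentity. The archival step itself is left as a hypothetical placeholder:

    import json

    def handler(event, context):
        """Sketch of a DynamoDB stream processor that archives TTL deletions.

        TTL-expired items arrive as REMOVE records whose userIdentity
        identifies the DynamoDB service itself.
        """
        for record in event['Records']:
            user_identity = record.get('userIdentity', {})
            is_ttl_delete = (
                record['eventName'] == 'REMOVE'
                and user_identity.get('type') == 'Service'
                and user_identity.get('principalId') == 'dynamodb.amazonaws.com'
            )
            if is_ttl_delete:
                # OldImage holds the item as it looked before it expired
                # (the stream's view type must include old images).
                expired_item = record['dynamodb'].get('OldImage', {})
                # Hypothetical archival step: log it, or write it to S3 /
                # another table for cold storage.
                print(json.dumps(expired_item))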

You can enable TTL for a table and specify the desired attribute directly from the DynamoDB console.

The attribute must use DynamoDB’s Number data type and is interpreted as seconds since the Unix epoch.
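
For example, here is a sketch of writing an item that expires 24 hours from now; the table and attribute names are hypothetical placeholders:

    import time
    import boto3

    table = boto3.resource('dynamodb').Table('Sessions')  # hypothetical table

    # Expire this item 24 hours from now; the TTL attribute is a plain
    # Number containing seconds since the Unix epoch.
    table.put_item(Item={
        'SessionId': 'abc-123',
        'ExpirationTime': int(time.time()) + 24 * 60 * 60,
    })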

When you enable TTL in the console, you can also enable DynamoDB Streams at the same time, and you can preview the items that will be deleted.

You can also call the UpdateTimeToLive function from your code, or you can use the update-time-to-live command from the AWS Command Line Interface (CLI).
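
As a sketch, the equivalent call in boto3 looks like this (the table and attribute names are placeholders); a single call both enables TTL and names the expiration attribute:

    import boto3

    client = boto3.client('dynamodb')

    # One call both enables TTL on the table and names the attribute
    # that holds each item's expiration time.
    client.update_time_to_live(
        TableName='Sessions',
        TimeToLiveSpecification={
            'Enabled': True,
            'AttributeName': 'ExpirationTime',
        },
    )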

https://aws.amazon.com/pt/blogs/aws/new-manage-dynamodb-items-using-time-to-live-ttl/

Amazon Web Services in Plain English

Hey, have you heard of the new AWS services: ContainerCache, ElastiCast and QR72? Of course not, I just made those up.

But with 50-plus opaquely named services, we decided that enough was enough and that some plain-English descriptions were needed.

https://www.expeditedssl.com/aws-in-plain-english

Introducing the ‘Startup Kit Serverless Workload’

“What’s the easiest way to get started on AWS?” is a common question. Although there are many well-established paths to getting started, including using AWS Elastic Beanstalk, serverless computing is a rapidly growing alternative.

Serverless computing allows you to build and run applications and services without thinking about servers. On AWS, the AWS Lambda service is the central building block for serverless computing. AWS also provides several other services to support serverless architectures. These include Amazon API Gateway, which you can use with Lambda to create a RESTful API, and Amazon DynamoDB, a NoSQL cloud database service that frees you from the burden of setting up a database cluster.
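
To make that concrete, here is a minimal sketch of a Lambda handler sitting behind an API Gateway proxy integration and reading from DynamoDB; the table name, key schema, and path parameter are all hypothetical:

    import json
    import boto3

    table = boto3.resource('dynamodb').Table('Users')  # hypothetical table

    def handler(event, context):
        """Minimal API Gateway (proxy integration) + DynamoDB read."""
        user_id = event['pathParameters']['id']
        result = table.get_item(Key={'UserId': user_id})
        if 'Item' not in result:
            return {'statusCode': 404,
                    'body': json.dumps({'error': 'not found'})}
        return {'statusCode': 200,
                'body': json.dumps(result['Item'], default=str)}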

A completely serverless architecture is shown in the following diagram.

[Diagram: serverless-arch]

https://aws.amazon.com/blogs/startups/introducing-the-startup-kit-serverless-workload/

Going Serverless: AWS and Compelling Science Fiction

This is a companion blog post to a talk I gave to the Boulder Python Meetup group about the infrastructure that runs Compelling Science Fiction. Slides from that talk can be found here. Hopefully you can use some of these tools to create something new as well!

Compelling Science Fiction runs entirely on extremely inexpensive Amazon Web Services (AWS). I currently have three primary use cases:

  1. Serving the web pages that make up the site. This is easily achieved using the Amazon S3 feature that lets you serve static web pages directly from an S3 bucket (a quick sketch follows this list).
  2. Accepting and managing submissions from authors.
  3. Reading through the queue (“slush”) of stories that authors submit.
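
As a rough sketch of that first use case, a single boto3 call is enough to turn a bucket into a static website host; the bucket name and document keys below are hypothetical placeholders:

    import boto3

    s3 = boto3.client('s3')

    # Turn an existing bucket into a static website host; after this,
    # S3 serves the pages with no servers involved.
    s3.put_bucket_website(
        Bucket='compelling-example-bucket',
        WebsiteConfiguration={
            'IndexDocument': {'Suffix': 'index.html'},
            'ErrorDocument': {'Key': 'error.html'},
        },
    )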

It’s the last two items on that list that I’ll be talking about today, because they both use the same basic infrastructure. That infrastructure is diagrammed below.

[Diagram: submission-handling infrastructure]

As you can see, I use four different Amazon Web Services: the Simple Email Service (SES), Simple Storage Service (S3), Lambda, and DynamoDB. I’ll touch on all of the ways we use these services, but AWS Lambda is the most important, because it allows us to glue together all the services with Python without provisioning any servers.
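
To give a flavor of that glue, here is a hedged sketch of a Lambda function that an SES receipt rule could invoke to record each incoming submission in DynamoDB; the event shape follows SES’s Lambda receipt action, while the table and attribute names are made up:

    import boto3

    table = boto3.resource('dynamodb').Table('Submissions')  # hypothetical

    def handler(event, context):
        """Sketch: record an incoming SES email as a submission row."""
        mail = event['Records'][0]['ses']['mail']
        table.put_item(Item={
            'MessageId': mail['messageId'],
            'From': mail['source'],
            'Subject': mail['commonHeaders'].get('subject', '(no subject)'),
            'ReceivedAt': mail['timestamp'],
        })
        # 'CONTINUE' tells SES to keep evaluating the receipt rule, e.g.
        # a follow-up action that stores the raw message in S3.
        return {'disposition': 'CONTINUE'}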

http://compellingsciencefiction.com/blog/2016-11-10.html