Each day, millions of people move around the world with Uber. The associated financial transactions are stored in Uber’s Ledger Store with provable completeness backed by Amazon DynamoDB. In this session, we discuss why provable completeness is key for compliant storage of financial and other ledger-like use cases, and we explain how this can be implemented at global scale by using DynamoDB.
Verizon Media needed to build a better, stronger, and faster push notification system to serve the requirements of iconic Verizon brands: delivering push notifications to 27 million devices in under three minutes and consistently showing the push “toast” on users’ lock screens. Verizon decided to use Amazon DynamoDB and other AWS services such as Amazon ElastiCache and Amazon SQS, in conjunction with its own Vespa search engine, to power all of its brands’ use cases. It also uses Kubernetes to orchestrate microservices across many Amazon EC2 instances.
AWS Step Functions now supports workflow execution events, which make it faster and easier to build and monitor event-driven, serverless workflows. When a workflow starts or completes, an execution event notification can be delivered automatically through CloudWatch Events to targets such as AWS Lambda, Amazon SNS, Amazon Kinesis, or another AWS Step Functions state machine, enabling an automated response to the event.
You can enable CloudWatch Events notifications for Step Functions in minutes: simply identify the state machine, the execution status changes you care about, and the targets for the notification.
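As a concrete illustration, here is a minimal sketch using the AWS SDK for JavaScript; the rule name, state machine ARN, and SNS topic ARN are placeholders you would replace with your own:

```javascript
// Sketch: wire a CloudWatch Events rule to a Step Functions state machine
// so that failed executions trigger an SNS notification.
const AWS = require('aws-sdk');
const events = new AWS.CloudWatchEvents();

async function notifyOnFailure() {
  // Match execution status changes for one state machine; here we only
  // care about unsuccessful terminal states.
  await events.putRule({
    Name: 'sfn-execution-failed',
    EventPattern: JSON.stringify({
      source: ['aws.states'],
      'detail-type': ['Step Functions Execution Status Change'],
      detail: {
        status: ['FAILED', 'TIMED_OUT', 'ABORTED'],
        stateMachineArn: ['arn:aws:states:us-east-1:123456789012:stateMachine:OrderFlow'],
      },
    }),
    State: 'ENABLED',
  }).promise();

  // Route matching events to an SNS topic (a Lambda function or Kinesis
  // stream would work the same way).
  await events.putTargets({
    Rule: 'sfn-execution-failed',
    Targets: [{ Id: 'alert-topic', Arn: 'arn:aws:sns:us-east-1:123456789012:sfn-alerts' }],
  }).promise();
}

notifyOnFailure().catch(console.error);
```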
AWS Step Functions allows you to add resilient workflow automation to your applications. The steps of your workflow can exist anywhere, including in AWS Lambda functions, on Amazon EC2, or on-premises. AWS Step Functions is also integrated with Amazon ECS, AWS Fargate, Amazon DynamoDB, Amazon SNS, Amazon SQS, AWS Batch, AWS Glue, and Amazon SageMaker. You can connect and coordinate all of these services in minutes, without writing code.
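To make that concrete, here is a sketch of a two-state machine that writes an item to DynamoDB and then publishes to SNS through Step Functions’ direct service integrations; the coordination itself is declared in the state machine definition, and the only code is the deployment call. Table, topic, and role names are placeholders:

```javascript
// Sketch: deploy a state machine that calls DynamoDB and SNS directly --
// no Lambda glue code between the services.
const AWS = require('aws-sdk');
const sfn = new AWS.StepFunctions();

const definition = {
  StartAt: 'RecordOrder',
  States: {
    RecordOrder: {
      Type: 'Task',
      // Direct DynamoDB integration: put an item using a value from the input.
      Resource: 'arn:aws:states:::dynamodb:putItem',
      Parameters: {
        TableName: 'Orders',
        Item: { orderId: { 'S.$': '$.orderId' } },
      },
      Next: 'NotifyCustomer',
    },
    NotifyCustomer: {
      Type: 'Task',
      // Direct SNS integration: publish the order ID to a topic.
      Resource: 'arn:aws:states:::sns:publish',
      Parameters: {
        TopicArn: 'arn:aws:sns:us-east-1:123456789012:order-events',
        'Message.$': '$.orderId',
      },
      End: true,
    },
  },
};

sfn.createStateMachine({
  name: 'OrderFlow',
  definition: JSON.stringify(definition),
  roleArn: 'arn:aws:iam::123456789012:role/OrderFlowRole',
}).promise().then(console.log).catch(console.error);
```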
AWS just announced the release of S3 Batch Operations, a hotly anticipated feature that was originally announced at re:Invent 2018. With S3 Batch, you can run tasks on existing S3 objects, which makes it much easier to perform previously difficult jobs like retagging S3 objects, copying objects to another bucket, or processing large numbers of objects in bulk.

In this post, we’ll do a deep dive into S3 Batch. You will learn when, why, and how to use it. First, we’ll give an overview of the key elements involved in an S3 Batch job. Then, we’ll walk through an example that performs sentiment analysis on a group of existing objects with AWS Lambda and Amazon Comprehend.
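To give a flavor of the Lambda side of that walkthrough, here is a minimal sketch of a handler an S3 Batch job could invoke once per object in the manifest. It assumes small plain-text objects, and the response shape is the one S3 Batch expects back from a Lambda target:

```javascript
// Sketch: S3 Batch invokes this handler per object; we fetch the object
// and run its text through Amazon Comprehend's sentiment detection.
const AWS = require('aws-sdk');
const s3 = new AWS.S3();
const comprehend = new AWS.Comprehend();

exports.handler = async (event) => {
  // S3 Batch delivers one task per invocation in this schema version.
  const task = event.tasks[0];
  const bucket = task.s3BucketArn.split(':').pop();

  let resultCode = 'Succeeded';
  let resultString = '';
  try {
    const obj = await s3.getObject({ Bucket: bucket, Key: task.s3Key }).promise();
    const { Sentiment } = await comprehend.detectSentiment({
      Text: obj.Body.toString('utf-8').slice(0, 4500), // Comprehend caps input size
      LanguageCode: 'en',
    }).promise();
    resultString = Sentiment; // e.g. POSITIVE, NEGATIVE, NEUTRAL, MIXED
  } catch (err) {
    resultCode = 'TemporaryFailure'; // let S3 Batch retry transient errors
    resultString = err.message;
  }

  // The per-task result is surfaced in the S3 Batch job's completion report.
  return {
    invocationSchemaVersion: event.invocationSchemaVersion,
    treatMissingKeysAs: 'PermanentFailure',
    invocationId: event.invocationId,
    results: [{ taskId: task.taskId, resultCode, resultString }],
  };
};
```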
I have tried a few different ways of reporting Lambda errors to Slack, but I hadn’t found a reusable solution that gave me all of the information I wanted. I decided to solve that problem by creating my own Lambda layer. The solution isn’t tied to any particular error-logging setup; it’s dynamic enough that you can simply pass an error message into the layer. (A sketch of the idea follows the prerequisites below.)
For this to be useful to you, make sure you are familiar with the following:
1. AWS Lambda
2. Node.js
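As a rough sketch of what such a layer module might look like, assume a hypothetical module named `slack-reporter` packaged under `nodejs/node_modules/` in the layer zip, with the Slack incoming-webhook URL supplied via an environment variable:

```javascript
// Sketch: nodejs/node_modules/slack-reporter/index.js inside the layer zip.
// Any function that attaches the layer can require('slack-reporter').
const https = require('https');

// Post an error message to a Slack incoming webhook; resolves with the
// HTTP status code once Slack responds.
exports.reportError = (message, webhookUrl = process.env.SLACK_WEBHOOK_URL) =>
  new Promise((resolve, reject) => {
    const body = JSON.stringify({ text: `:rotating_light: Lambda error: ${message}` });
    const req = https.request(
      webhookUrl,
      { method: 'POST', headers: { 'Content-Type': 'application/json' } },
      (res) => {
        res.on('data', () => {}); // drain the response
        res.on('end', () => resolve(res.statusCode));
      }
    );
    req.on('error', reject);
    req.end(body);
  });
```

A function with the layer attached could then call `reportError(err.stack)` from its `catch` block, with no Slack plumbing of its own.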
AWS introduced Lambda Layers at re:Invent 2018 as a way to share code and data between functions, both within and across accounts. It’s a useful tool and something many AWS customers have been asking for. However, since we already have numerous ways of sharing code, including package managers such as npm, when should we use Layers instead?

In this post, we will look at how Lambda Layers work, the problems they solve, and the new challenges they introduce. We will finish with some recommendations on when to use them.
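For orientation, here is a sketch of the consumer side in Node.js. A layer’s zip contents are extracted to `/opt` in the function’s execution environment, and the Node.js runtime includes `/opt/nodejs/node_modules` in its module search path, so shared code resolves like any installed package (`date-utils` is a hypothetical module name):

```javascript
// The layer zip is assumed to be laid out as:
//   nodejs/node_modules/date-utils/index.js
// Once the layer is attached to the function, this require() resolves from
// /opt/nodejs/node_modules -- no bundling of the shared code is needed.
const { startOfDay } = require('date-utils'); // hypothetical shared module

exports.handler = async () => ({
  statusCode: 200,
  body: JSON.stringify({ today: startOfDay(new Date()) }),
});
```

Attaching the layer is purely a configuration change: you add the layer version’s ARN to the function’s list of layers instead of redeploying the shared code with every function.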