• Learn how Harry’s uses AWS to simplify infrastructure operations
• Learn how AWS Fargate makes it simple to run containers
• Learn how to use AWS services such as AWS CloudFormation, Auto Scaling, and Elastic Load Balancing with AWS Fargate
Many organizations today are using containers to package source code and dependencies into lightweight, immutable artifacts that can be deployed reliably to any environment.
Kubernetes (K8s) is an open-source framework for automated scheduling and management of containerized workloads. In addition to master nodes, a K8s cluster is made up of worker nodes where containers are scheduled and run.
Amazon Elastic Container Service for Kubernetes (Amazon EKS) is a managed service that removes the need to manage the installation, scaling, or administration of master nodes and the etcd distributed key-value store. It provides a highly available and secure K8s control plane.
This post demonstrates how to use Spot Instances as K8s worker nodes, covering provisioning, automatic scaling, and handling of interruptions (termination) of worker nodes across your cluster.
This post focuses primarily on EC2 instance scaling. It also assumes the default interruption behavior of terminate for EC2 instances, although other interruption behaviors, stop and hibernate, are available. For stateless K8s workloads, I recommend the terminate behavior.
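One common way to handle Spot interruptions on worker nodes is to watch the instance metadata service for the two-minute interruption notice and drain the node so its pods reschedule onto other workers. Below is a minimal sketch of that idea, not the post's exact tooling; the NODE_NAME variable and the kubectl flags are assumptions, and instances that enforce IMDSv2 would additionally need a session token.

```python
# Sketch: poll the EC2 instance metadata service for a Spot interruption
# notice, then drain this worker node so the scheduler moves its pods.
import os
import subprocess
import time
import urllib.error
import urllib.request

# The metadata path returns 404 until EC2 marks the instance for interruption.
METADATA_URL = "http://169.254.169.254/latest/meta-data/spot/instance-action"
NODE_NAME = os.environ.get("NODE_NAME", "")  # assumed to be injected at runtime

def interruption_pending() -> bool:
    """Return True once the two-minute interruption notice has been posted."""
    try:
        urllib.request.urlopen(METADATA_URL, timeout=1)
        return True                      # 200 response: notice is present
    except urllib.error.HTTPError:
        return False                     # 404: no interruption scheduled yet
    except urllib.error.URLError:
        return False                     # metadata service unreachable

def drain_node(node: str) -> None:
    """Cordon and drain the node so pods reschedule onto other workers."""
    subprocess.run(
        ["kubectl", "drain", node, "--ignore-daemonsets", "--grace-period=90"],
        check=True,
    )

if __name__ == "__main__":
    while not interruption_pending():
        time.sleep(5)
    drain_node(NODE_NAME)
```

In practice something like this would run on every Spot worker (for example as a DaemonSet), so each node watches its own metadata endpoint and drains itself before termination.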
Private registry authentication support for Amazon Elastic Container Service (Amazon ECS) is now available with the AWS Fargate launch type! Now, in addition to Amazon Elastic Container Registry (Amazon ECR), you can use any private registry or repository of your choice for both EC2 and Fargate launch types.
For ECS to pull from a private repository, you need three things: a secret in AWS Secrets Manager that holds your registry credentials, an ECS task execution role in AWS Identity and Access Management (IAM) with a policy granting access to that secret, and a task definition that references the secret ARN and the task execution role ARN.
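As a rough illustration of those three pieces, the boto3 sketch below stores the credentials as a secret and registers a Fargate task definition whose container references it through repositoryCredentials. The secret contents, role ARN, image name, and family name are placeholders, and the execution role itself is assumed to already exist with a policy allowing secretsmanager:GetSecretValue on the secret.

```python
# Sketch: wire a Secrets Manager secret and a task execution role into an ECS
# task definition so Fargate can pull from a private registry.
import json
import boto3

secrets = boto3.client("secretsmanager")
ecs = boto3.client("ecs")

# 1. Store the private registry credentials as a secret.
secret = secrets.create_secret(
    Name="dockerhub-credentials",
    SecretString=json.dumps({"username": "my-user", "password": "my-password"}),
)

# 2. Task execution role, created separately in IAM, with access to the secret.
execution_role_arn = "arn:aws:iam::123456789012:role/ecsTaskExecutionRole"

# 3. Reference the secret ARN and execution role ARN in the task definition.
ecs.register_task_definition(
    family="private-registry-app",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="256",
    memory="512",
    executionRoleArn=execution_role_arn,
    containerDefinitions=[{
        "name": "app",
        "image": "my-registry.example.com/my-app:latest",
        "repositoryCredentials": {"credentialsParameter": secret["ARN"]},
        "essential": True,
    }],
)
```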
Whether you are new to the cloud and AWS or an experienced cloud developer, this guide is designed to help you get started with Docker containers on AWS ECS and AWS Fargate quickly and easily.
If you are brand new to the cloud or containers you should first read the introduction to cloud and container concepts.
If you already feel familiar with Docker containers and just want to deploy your containerized application quickly and reliably, head to the architecture patterns section to find a collection of infrastructure as code examples for popular application architectures. You can either deploy the templates onto your own AWS account in a few clicks, or download them to customize or use as a reference for developing your own application template.
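If you download a template rather than launching it through the console, deploying it from code takes only a few calls. The sketch below is a hypothetical example using boto3; the template file name and stack name are placeholders, and templates that create IAM resources need the extra capability flag shown.

```python
# Sketch: launch a downloaded CloudFormation template and wait for completion.
import boto3

cfn = boto3.client("cloudformation")

with open("fargate-service.yml") as f:   # placeholder template file
    template_body = f.read()

cfn.create_stack(
    StackName="my-fargate-service",
    TemplateBody=template_body,
    Capabilities=["CAPABILITY_IAM"],     # required if the template creates IAM roles
)

# Block until the stack is ready before reading its outputs.
cfn.get_waiter("stack_create_complete").wait(StackName="my-fargate-service")
```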
In this blog post, I’ll walk you through the steps for setting up continuous replication of an AWS CodeCommit repository from one AWS region to another using a serverless architecture. CodeCommit is a fully managed, highly scalable source control service that stores anything from source code to binaries. It works seamlessly with your existing Git tools and eliminates the need to operate your own source control system. Replicating a CodeCommit repository to another region gives globally distributed developers lower-latency pulls. The same approach can also be used to automatically back up repositories currently hosted on other services (for example, GitHub or Bitbucket) to CodeCommit.
This solution uses AWS Lambda and AWS Fargate for continuous replication. Benefits of this approach include:
Note: AWS Fargate has a 10-GB storage limit and is currently available only in the US East (N. Virginia) region. A similar solution that uses Amazon EC2 instances to replicate the repositories on a schedule was published in a previous blog post and can be used if your repository does not fit within these constraints.
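The post's exact implementation is not reproduced here, but the Lambda half of a pipeline like this typically does little more than start a Fargate task that performs the actual git clone and push. A rough sketch follows, with placeholder cluster, task definition, container, subnet, and repository names.

```python
# Sketch: Lambda handler that reacts to a repository event by launching a
# Fargate task, which does the cross-region git replication work.
import boto3

ecs = boto3.client("ecs")

def handler(event, context):
    ecs.run_task(
        cluster="codecommit-replication",
        launchType="FARGATE",
        taskDefinition="replicate-repo",
        networkConfiguration={
            "awsvpcConfiguration": {
                "subnets": ["subnet-0123456789abcdef0"],
                "assignPublicIp": "ENABLED",
            }
        },
        overrides={
            "containerOverrides": [{
                "name": "replicator",
                "environment": [
                    {"name": "SOURCE_REPO", "value": "my-repo"},
                    {"name": "SOURCE_REGION", "value": "us-east-1"},
                    {"name": "DEST_REGION", "value": "us-west-2"},
                ],
            }]
        },
    )
    return {"status": "replication task started"}
```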
This article walks through the process of building a chat application, containerizing it, and deploying it using AWS Fargate. Following along with this guide will leave you with a working URL hosting a public, real-time chat web app, all without needing a single EC2 instance on your AWS account!
If you want to follow along with the article and build and deploy this application yourself, make sure that you have the following things:
Once you have these resources ready you can get started.
AWS dropped so many serverless announcements at re:Invent that the community is still scrambling to make sense of them all. This post is all about AWS Fargate.
In this article, I will show you how to create an end-to-end serverless application that extracts thumbnails from video files. But, oh no, processing video files is a long-running process! Whatever will we do?
This is where Fargate comes in.
TL;DR A Docker container does the processing -> The container extracts the thumbnail and uploads the image to an S3 bucket -> The container is managed by AWS Fargate. All functionality is triggered from AWS Lambda functions and contained within a serverless application written with the Serverless Framework.
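To make the flow concrete, here is a rough sketch of what the processing container might run: ffmpeg pulls a single frame from the video and boto3 uploads it to S3. The environment variable names, the fixed output key, and the presigned-URL input are assumptions for illustration, not the article's actual code.

```python
# Sketch: extract one frame from a video with ffmpeg and upload it to S3.
import os
import subprocess
import boto3

s3 = boto3.client("s3")

def extract_thumbnail() -> None:
    bucket = os.environ["OUTPUT_BUCKET"]
    video_url = os.environ["VIDEO_URL"]              # e.g. a presigned S3 URL
    timestamp = os.environ.get("FRAME_TIME", "00:00:01")
    out_path = "/tmp/thumbnail.png"

    # Seek to the requested timestamp and write exactly one frame.
    subprocess.run(
        ["ffmpeg", "-ss", timestamp, "-i", video_url, "-vframes", "1", out_path],
        check=True,
    )
    s3.upload_file(out_path, bucket, "thumbnails/thumbnail.png")

if __name__ == "__main__":
    extract_thumbnail()
```

The Lambda functions in the serverless application then only have to pass the video location and desired timestamp to the Fargate task as environment overrides when they start it.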
https://serverless.com/blog/serverless-application-for-long-running-process-fargate-lambda/