How Harry’s Shaved Off Their Operational Overhead by Moving to AWS Fargate

Broadcast Date: July 25, 2018
Level 200 | Customer Showcase
Harry’s migrated their messaging workload to AWS Fargate to eliminate the burden of maintaining their compute infrastructure. In this tech talk, learn how Harry’s used Fargate’s seamless integration with other AWS services such as CloudFormation, Auto Scaling, and the Application Load Balancer (ALB) to manage traffic spikes and reduce their message processing time by more than 75%.
Learning Objectives:
• Learn how Harry’s uses AWS to simplify infrastructure operations
• Learn how AWS Fargate makes it simple to run containers
• Learn how to use AWS services like CloudFormation, Auto Scaling, and Load Balancing with AWS Fargate
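
The talk leans on CloudFormation for this setup; as a rough illustration of the same moving parts (and not Harry’s actual stack), here is a minimal boto3 sketch that creates a Fargate service behind an ALB target group and attaches a target tracking scaling policy. All names, ARNs, subnets, and security groups below are placeholders.

```python
import boto3

ecs = boto3.client("ecs")
autoscaling = boto3.client("application-autoscaling")

# Placeholder names -- substitute your own cluster, task definition,
# subnets, security group, and ALB target group.
CLUSTER = "messaging-cluster"
SERVICE = "message-worker"

# A Fargate service: no instances to manage, just the task definition,
# networking, and the ALB target group it should register with.
ecs.create_service(
    cluster=CLUSTER,
    serviceName=SERVICE,
    taskDefinition="message-worker:1",
    desiredCount=2,
    launchType="FARGATE",
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-aaaa1111", "subnet-bbbb2222"],
            "securityGroups": ["sg-0123456789abcdef0"],
            "assignPublicIp": "ENABLED",
        }
    },
    loadBalancers=[{
        "targetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
                          "targetgroup/message-worker/0123456789abcdef",
        "containerName": "worker",   # must match the task definition
        "containerPort": 8080,
    }],
)

# Target tracking on average CPU lets the service absorb traffic spikes
# by adding tasks instead of adding servers.
resource_id = f"service/{CLUSTER}/{SERVICE}"
autoscaling.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=2,
    MaxCapacity=20,
)
autoscaling.put_scaling_policy(
    PolicyName="cpu-target-tracking",
    ServiceNamespace="ecs",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 60.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
        },
    },
)
```

The same resources expressed in a CloudFormation template behave identically; the API calls are just an easy way to see which pieces are involved.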

Run your Kubernetes Workloads on Amazon EC2 Spot Instances with Amazon EKS

Many organizations today are using containers to package source code and dependencies into lightweight, immutable artifacts that can be deployed reliably to any environment.

Kubernetes (K8s) is an open-source framework for automated scheduling and management of containerized workloads. In addition to master nodes, a K8s cluster is made up of worker nodes where containers are scheduled and run.

Amazon Elastic Container Service for Kubernetes (Amazon EKS) is a managed service that removes the need to manage the installation, scaling, or administration of master nodes and the etcd distributed key-value store. It provides a highly available and secure K8s control plane.

This post demonstrates how to use Spot Instances as K8s worker nodes, covering provisioning, automatic scaling, and handling interruptions (termination) of K8s worker nodes across your cluster.

What this post does not cover

This post focuses primarily on EC2 instance scaling. It also assumes the default interruption behavior of terminate for EC2 instances, though the other interruption behaviors, stop and hibernate, are available. For stateless K8s sessions, I recommend choosing the interruption behavior of terminate.
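
EC2 surfaces a roughly two-minute Spot interruption notice through the instance metadata service, and the idea is to cordon and drain the affected worker node before it is reclaimed. Below is a minimal Python sketch of that idea, not the post’s actual handler (which runs node-level, e.g. as a DaemonSet on every Spot worker); it assumes kubectl is available on the node and that the hostname matches the K8s node name.

```python
import json
import socket
import subprocess
import time
import urllib.error
import urllib.request

# The metadata URL below is the real EC2 endpoint for Spot interruption
# notices; the rest of this handler is illustrative.
METADATA_URL = "http://169.254.169.254/latest/meta-data/spot/instance-action"
NODE_NAME = socket.gethostname()

def interruption_notice():
    """Return the parsed notice, or None if no interruption is scheduled (404)."""
    try:
        with urllib.request.urlopen(METADATA_URL, timeout=2) as resp:
            return json.load(resp)
    except urllib.error.URLError:
        return None

while True:
    notice = interruption_notice()
    if notice and notice.get("action") == "terminate":
        # Roughly two minutes of warning: stop scheduling new pods here and
        # evict running pods so they land on healthy capacity.
        subprocess.run(["kubectl", "cordon", NODE_NAME], check=True)
        subprocess.run(
            ["kubectl", "drain", NODE_NAME,
             "--ignore-daemonsets", "--delete-local-data", "--force"],
            check=True,
        )
        break
    time.sleep(5)
```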

https://aws.amazon.com/blogs/compute/run-your-kubernetes-workloads-on-amazon-ec2-spot-instances-with-amazon-eks/

Introducing private registry authentication support for AWS Fargate

Private registry authentication support for Amazon Elastic Container Service (Amazon ECS) is now available with the AWS Fargate launch type! Now, in addition to Amazon Elastic Container Registry (Amazon ECR), you can use any private registry or repository of your choice for both EC2 and Fargate launch types.

For ECS to pull from a private repository, you need a secret in AWS Secrets Manager containing your registry credentials, an ECS task execution role in AWS Identity and Access Management (IAM) with a policy granting access to that secret, and a task definition that references both the secret ARN and the task execution role ARN.
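
Concretely, a minimal boto3 sketch of those three pieces might look like the following (the secret name, credentials, image, and role ARN are all placeholders): create the secret, then register a task definition whose container references it through repositoryCredentials and whose execution role is allowed to read it.

```python
import json
import boto3

secrets = boto3.client("secretsmanager")
ecs = boto3.client("ecs")

# Placeholder credentials, image, and role ARN -- substitute your own.
secret = secrets.create_secret(
    Name="dockerhub-readonly",
    SecretString=json.dumps({"username": "my-user", "password": "my-password"}),
)

ecs.register_task_definition(
    family="private-image-app",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="256",
    memory="512",
    # The execution role needs secretsmanager:GetSecretValue on the secret above.
    executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
    containerDefinitions=[{
        "name": "app",
        "image": "my-user/private-app:latest",
        "essential": True,
        # Points ECS at the secret holding the registry credentials.
        "repositoryCredentials": {"credentialsParameter": secret["ARN"]},
    }],
)
```

At task launch, Fargate uses the execution role to fetch the secret and authenticate to the private registry before pulling the image.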

https://aws.amazon.com/blogs/compute/introducing-private-registry-authentication-support-for-aws-fargate/

Running Containers on AWS using AWS ECS and AWS Fargate

Whether you are new to the cloud and AWS or an experienced cloud developer, this guide is designed to help you get started with Docker containers on AWS ECS and AWS Fargate quickly and easily.

If you are brand new to the cloud or containers you should first read the introduction to cloud and container concepts.

If you already feel familiar with Docker containers and just want to deploy your containerized application quickly and reliably, head to the architecture patterns section to find a collection of infrastructure as code examples for popular application architectures. You can either deploy the templates to your own AWS account in a few clicks, or download them to customize or use as a reference for developing your own application template.

https://containersonaws.com/

Replicate AWS CodeCommit Repositories between Regions using AWS Fargate

In this blog post, I’ll walk you through the steps for setting up continuous replication of an AWS CodeCommit repository from one AWS region to another using a serverless architecture. CodeCommit is a fully managed, highly scalable source control service that stores anything from source code to binaries. It works seamlessly with your existing Git tools and eliminates the need to operate your own source control system. Replicating a CodeCommit repository across regions gives global developers lower-latency pulls. The same approach can also be used to automatically back up repositories currently hosted on other services (for example, GitHub or Bitbucket) to AWS CodeCommit.

This solution uses AWS Lambda and AWS Fargate for continuous replication. Benefits of this approach include:

  • The replication process can easily be set up to trigger on events, such as commits made to the repository.
  • Setting up a serverless architecture means you don’t need to provision, maintain, or administer servers.
  • You can incorporate this solution into your own DevOps pipeline. For more information, see the blog Invoke an AWS Lambda function in a pipeline in AWS CodePipeline.

Note: AWS Fargate has a 10 GB storage limit and is available in the US East (N. Virginia) region. A similar solution that uses Amazon EC2 instances to replicate the repositories on a schedule was published in a previous blog post and can be used if these constraints do not fit your use case.
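
As a rough sketch of the Lambda half of that architecture (the cluster, task definition, container name, and network settings below are placeholders, not the post’s exact code), a function triggered by a CodeCommit repository state change event could start the Fargate replication task like this:

```python
import boto3

ecs = boto3.client("ecs")

# Placeholder cluster, task definition, container name, and network settings.
CLUSTER = "replication-cluster"
TASK_DEFINITION = "codecommit-replicator"
SUBNETS = ["subnet-aaaa1111"]
SECURITY_GROUPS = ["sg-0123456789abcdef0"]

def handler(event, context):
    """Start the Fargate replication task when CodeCommit reports a repository change."""
    repository = event["detail"]["repositoryName"]
    ecs.run_task(
        cluster=CLUSTER,
        taskDefinition=TASK_DEFINITION,
        launchType="FARGATE",
        count=1,
        networkConfiguration={
            "awsvpcConfiguration": {
                "subnets": SUBNETS,
                "securityGroups": SECURITY_GROUPS,
                "assignPublicIp": "ENABLED",
            }
        },
        # Tell the replication container which repository changed.
        overrides={
            "containerOverrides": [{
                "name": "replicator",
                "environment": [{"name": "REPO_NAME", "value": repository}],
            }]
        },
    )
```

Wiring a CloudWatch Events rule for CodeCommit repository state changes to this function is what keeps the replication continuous and event driven.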

https://aws.amazon.com/pt/blogs/devops/replicate-aws-codecommit-repository-between-regions-using-aws-fargate/

Building a Socket.io chat app and deploying it using AWS Fargate

This article walks through the process of building a chat application, containerizing it, and deploying it using AWS Fargate. Following along with this guide will leave you with a working URL hosting a public, real-time chat web app, and all of it runs without a single EC2 instance in your AWS account!

If you want to follow along with the article and build and deploy this application yourself you need to make sure that you have the following things:

  • Node.js (The runtime language of the chat app we are building)
  • Docker (The tool we will use for packaging the app up for deployment)
  • An AWS account, and the AWS CLI (We will deploy the application on AWS)

Once you have these resources ready you can get started.
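
If you want a feel for the AWS-side plumbing before following along, here is a rough boto3 equivalent of the first couple of setup steps; the names are illustrative, and the article walks you through its own deployment flow.

```python
import boto3

ecr = boto3.client("ecr")
ecs = boto3.client("ecs")

# Placeholder names -- substitute your own.
repo = ecr.create_repository(repositoryName="chat-app")
print("Push the chat app image here:", repo["repository"]["repositoryUri"])

# A Fargate cluster is purely a logical namespace for tasks and services;
# creating it launches no EC2 instances.
ecs.create_cluster(clusterName="chat-cluster")
```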

https://medium.com/containers-on-aws/building-a-socket-io-chat-app-and-deploying-it-using-aws-fargate-86fd7cbce13f

How to use AWS Fargate and Lambda for long-running processes in a Serverless app

AWS dropped so many serverless announcements at re:Invent that the community is still scrambling to make sense of them all. This post is all about AWS Fargate.

In this article, I will show you how to create an end-to-end serverless application that extracts thumbnails from video files. But, oh no, processing video files is a long-running process! Whatever will we do?

This is where Fargate comes in.

TL;DR A Docker container does the processing -> The container extracts the thumbnail and uploads the image to an S3 bucket -> The container is managed by AWS Fargate. All functionality is triggered from AWS Lambda functions and contained within a serverless application written with the Serverless Framework.
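
As a rough illustration of the container side (the environment variable names, file paths, and ffmpeg invocation here are assumptions, not the post’s exact code), the task downloads the video, grabs a single frame, and uploads the thumbnail:

```python
import os
import subprocess
import boto3

s3 = boto3.client("s3")

# Illustrative environment variable names -- the Lambda trigger would pass
# these as container overrides when it starts the Fargate task.
bucket = os.environ["INPUT_BUCKET"]
key = os.environ["INPUT_VIDEO_KEY"]
position = os.environ.get("THUMBNAIL_POSITION", "00:00:05")
output_bucket = os.environ["OUTPUT_BUCKET"]

# Download the video, extract one frame with ffmpeg, upload the thumbnail.
s3.download_file(bucket, key, "/tmp/input.mp4")
subprocess.run(
    ["ffmpeg", "-ss", position, "-i", "/tmp/input.mp4",
     "-vframes", "1", "-f", "image2", "/tmp/thumbnail.png"],
    check=True,
)
thumbnail_key = key.rsplit(".", 1)[0] + ".png"
s3.upload_file("/tmp/thumbnail.png", output_bucket, thumbnail_key)
```

Because the work runs in a Fargate task rather than in the Lambda function itself, the processing is not bound by Lambda’s execution time limit.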

https://serverless.com/blog/serverless-application-for-long-running-process-fargate-lambda/