• Learn how Harry’s uses AWS to simplify infrastructure operations
• Learn how AWS Fargate makes it simple to run containers
• Learn how to use AWS services like AWS CloudFormation, Auto Scaling, and Elastic Load Balancing with AWS Fargate
Many organizations today are using containers to package source code and dependencies into lightweight, immutable artifacts that can be deployed reliably to any environment.
Kubernetes (K8s) is an open-source framework for automated scheduling and management of containerized workloads. In addition to master nodes, a K8s cluster is made up of worker nodes where containers are scheduled and run.
Amazon Elastic Container Service for Kubernetes (Amazon EKS) is a managed service that removes the need to manage the installation, scaling, or administration of master nodes and the etcd distributed key-value store. It provides a highly available and secure K8s control plane.
This post demonstrates how to use Spot Instances as K8s worker nodes, covering the provisioning, automatic scaling, and interruption (termination) handling of worker nodes across your cluster.
This post focuses primarily on EC2 instance scaling. It also assumes the default interruption behavior of terminate for EC2 instances, though there are other interruption behaviors, stop and hibernate. For stateless K8s workloads, I recommend the terminate behavior.
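Handling interruptions typically means watching the EC2 instance metadata service for the two-minute interruption notice and then draining the node. Here is a minimal sketch in Python, assuming the standard `spot/instance-action` metadata path; the actual drain step (e.g. `kubectl drain`) is left out, and the function names are mine, not from this post:

```python
import json
import urllib.request
from datetime import datetime, timezone

# Standard EC2 instance metadata path: returns a JSON notice roughly
# two minutes before a Spot interruption, and 404 otherwise.
METADATA_URL = "http://169.254.169.254/latest/meta-data/spot/instance-action"

def parse_instance_action(body):
    """Parse the instance-action JSON, e.g.
    {"action": "terminate", "time": "2017-09-18T08:22:00Z"}."""
    notice = json.loads(body)
    deadline = datetime.strptime(
        notice["time"], "%Y-%m-%dT%H:%M:%SZ").replace(tzinfo=timezone.utc)
    return notice["action"], deadline

def check_for_interruption(timeout=1.0):
    """Return (action, deadline) if an interruption is scheduled, else None."""
    try:
        with urllib.request.urlopen(METADATA_URL, timeout=timeout) as resp:
            return parse_instance_action(resp.read())
    except OSError:  # covers HTTP 404 (no notice) and not-on-EC2 failures
        return None
```

A daemon on each worker node would poll `check_for_interruption` every few seconds and, on a notice, cordon and drain the node before the deadline.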
Whether you are new to the cloud and AWS or an experienced cloud developer, this guide is designed to help you get started with Docker containers on Amazon ECS and AWS Fargate quickly and easily.
If you are brand new to the cloud or containers, you should first read the introduction to cloud and container concepts.
If you already feel familiar with Docker containers and just want to deploy your containerized application quickly and reliably, head to the architecture patterns section to find a collection of infrastructure-as-code examples for popular application architectures. You can either deploy the templates onto your own AWS account in a few clicks, or download them to customize or use as a reference for developing your own application template.
This is a self-paced workshop designed for Development and Operations teams who would like to leverage Kubernetes on Amazon Web Services (AWS).
This workshop provides instructions to create, manage, and scale a Kubernetes cluster on AWS, as well as how to deploy applications, scale them, run stateless and stateful containers, perform service discovery between different microservices, and other similar concepts.
It also shows deep integration with several AWS technologies.
We recommend allowing at least two hours to complete the workshop.
Recently, Amazon announced that Elastic Container Service for Kubernetes (EKS) is generally available. Previously open only to a limited set of users by request, EKS is now available to everyone, allowing all users to run managed Kubernetes clusters on AWS. In light of this development, we’re excited to announce official GitLab support for EKS.
GitLab is designed for Kubernetes. While you can use GitLab to deploy anywhere, from bare metal to VMs, when you deploy to Kubernetes, you get access to the most powerful features. In this post, I’ll walk through our Kubernetes integration, highlight a few key features, and discuss how you can use the integration with EKS.
AWS announced Kubernetes-as-a-Service at re:Invent in November 2017: Elastic Container Service for Kubernetes (EKS). As of yesterday, EKS is generally available. I discussed ECS vs. Kubernetes before EKS was a thing. Therefore, I’d like to take a second attempt and compare EKS with ECS.
Before comparing the differences, let us start with what EKS and ECS have in common: both solutions manage containers distributed among a fleet of virtual machines.
What are the differences between EKS and ECS?
So I have been spending some time jamming my hands into AWS Lambda’s greasy internals, and I’d like to share all the wonderful details I’ve discovered.
I’ve used AWS Lambda quite extensively at work, and I wanted to get a better understanding of its inner workings. What prompted this, you might ask?
There was an offhand comment by the author about the “Lambda API being a bit more complex.”
Well, I aim to find out just how complex it is, with the end goal of writing a custom runtime similar to the one above.
Probably in Python, just because it’s quick to prototype with.
Let’s get started, shall we?
For the impatient among you, if you just want to see the results, feel free to look at the code here.
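As a preview of the shape of the thing: the Lambda Runtime API boils down to a simple HTTP loop — long-poll for the next invocation, run the handler, and POST the result back. Below is a minimal sketch in Python (not the runtime from this post; the echo handler is a placeholder, and a real runtime would also report errors via the `/error` endpoint):

```python
import json
import os
import urllib.request

# Lambda Runtime API version; invocation URLs are built from the host
# found in the AWS_LAMBDA_RUNTIME_API environment variable.
API_VERSION = "2018-06-01"

def invocation_next_url(api_host):
    return f"http://{api_host}/{API_VERSION}/runtime/invocation/next"

def invocation_response_url(api_host, request_id):
    return f"http://{api_host}/{API_VERSION}/runtime/invocation/{request_id}/response"

def handler(event, context):
    # Placeholder handler for illustration; a real runtime would load
    # the function named in the _HANDLER environment variable.
    return {"echo": event}

def main():
    api_host = os.environ["AWS_LAMBDA_RUNTIME_API"]
    while True:
        # Long-poll for the next invocation event.
        with urllib.request.urlopen(invocation_next_url(api_host)) as resp:
            request_id = resp.headers["Lambda-Runtime-Aws-Request-Id"]
            event = json.loads(resp.read())
        result = handler(event, None)
        # POST the handler's result back to the Runtime API.
        req = urllib.request.Request(
            invocation_response_url(api_host, request_id),
            data=json.dumps(result).encode(),
            method="POST",
        )
        urllib.request.urlopen(req).close()

# Only enter the loop when actually running inside a Lambda environment.
if __name__ == "__main__" and "AWS_LAMBDA_RUNTIME_API" in os.environ:
    main()
```

The rest of the series digs into what sits underneath this loop.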