Riding the Tiger: Lessons Learned Implementing Istio

Recently I (along with a few others much smarter than me) had occasion to implement a ‘real’ production system with Istio, running on a managed cloud-provided Kubernetes service.

“My next ulcer will be called ‘istio’” by Ian Miell

Istio has a reputation for being difficult to build with and administer, but I haven’t read many war stories about trying to make it work, so I thought it might be useful to write about what it’s actually like in the trenches for a ‘typical’ team trying to implement this stuff. The intention is very much not to bury Istio, but to praise it (it does so much that is useful/needed for ‘real’ Kubernetes clusters – skip to the end if impatient) while warning those about to step into the breach about what’s coming if you’re not prepared…

Run your Kubernetes Workloads on Amazon EC2 Spot Instances with Amazon EKS

Many organizations today are using containers to package source code and dependencies into lightweight, immutable artifacts that can be deployed reliably to any environment.

Kubernetes (K8s) is an open-source framework for automated scheduling and management of containerized workloads. In addition to master nodes, a K8s cluster is made up of worker nodes where containers are scheduled and run.

Amazon Elastic Container Service for Kubernetes (Amazon EKS) is a managed service that removes the need to manage the installation, scaling, or administration of master nodes and the etcd distributed key-value store. It provides a highly available and secure K8s control plane.

This post demonstrates how to use Spot Instances as K8s worker nodes, covering the provisioning, automatic scaling, and interruption (termination) handling of K8s worker nodes across your cluster.

What this post does not cover

This post focuses primarily on EC2 instance scaling. It also assumes a default interruption behavior of terminate for EC2 instances, though the stop and hibernate interruption behaviors are also available. For stateless K8s workloads, I recommend choosing terminate.
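To make the interruption-handling part concrete before following the link: below is a minimal sketch of a Spot interruption watcher, assuming it runs as a DaemonSet pod with the node name injected through the downward API. The metadata path and the roughly two-minute notice are documented EC2 behavior; the poll interval, environment variable, and cordon-only handling are illustrative assumptions, not the approach from the post.

```python
# Minimal Spot-interruption watcher sketch. Assumptions: runs in a DaemonSet
# pod, NODE_NAME is injected via the downward API, and IMDSv1 is reachable
# (IMDSv2 would additionally need a session token).
import os
import time

import requests
from kubernetes import client, config

# Documented EC2 endpoint: returns 404 until a Spot interruption notice
# (roughly two minutes before termination) has been issued.
METADATA_URL = "http://169.254.169.254/latest/meta-data/spot/instance-action"
NODE_NAME = os.environ["NODE_NAME"]  # illustrative: set via the downward API


def interruption_pending() -> bool:
    try:
        return requests.get(METADATA_URL, timeout=2).status_code == 200
    except requests.RequestException:
        return False


def cordon_node(api: client.CoreV1Api) -> None:
    # Marking the node unschedulable is the first half of a drain; evicting
    # the remaining pods is omitted here for brevity.
    api.patch_node(NODE_NAME, {"spec": {"unschedulable": True}})


def main() -> None:
    config.load_incluster_config()
    api = client.CoreV1Api()
    while not interruption_pending():
        time.sleep(5)  # illustrative poll interval
    cordon_node(api)


if __name__ == "__main__":
    main()
```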

https://aws.amazon.com/blogs/compute/run-your-kubernetes-workloads-on-amazon-ec2-spot-instances-with-amazon-eks/

Simple deployment to Amazon EKS

Recently, Amazon announced that Elastic Container Service for Kubernetes (EKS) is generally available. Previously open only to a limited set of users by request, EKS is now available to everyone, allowing all users to get managed Kubernetes clusters on AWS. In light of this development, we’re excited to announce official support for EKS with GitLab.

GitLab is designed for Kubernetes. While you can use GitLab to deploy anywhere, from bare metal to VMs, when you deploy to Kubernetes, you get access to the most powerful features. In this post, I’ll walk through our Kubernetes integration, highlight a few key features, and discuss how you can use the integration with EKS.

https://about.gitlab.com/2018/06/06/eks-gitlab-integration/

EKS vs. ECS: orchestrating containers on AWS

AWS announced Kubernetes-as-a-Service at re:Invent in November 2017: Elastic Container Service for Kubernetes (EKS). As of yesterday, EKS is generally available. I discussed ECS vs. Kubernetes before EKS was a thing. Therefore, I’d like to make a second attempt and compare EKS with ECS.

Before comparing the differences, let us start with what EKS and ECS have in common. Both solutions manage containers distributed across a fleet of virtual machines. Managing containers includes the following (a brief sketch of the Kubernetes side follows the list):

  • Monitoring and replacing failed containers.
  • Deploying new versions of your containers.
  • Scaling the number of containers based on load.
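On the Kubernetes/EKS side, all three of those responsibilities fall out of a single Deployment object. The sketch below, using the official Python client, is illustrative rather than from the article: the replica count gives you scaling and automatic replacement of failed containers, and patching the pod template triggers a rolling update to a new version. The names, namespace, and images are assumptions.

```python
# Illustrative sketch: one Deployment covers replacement, rollout, scaling.
from kubernetes import client, config

config.load_kube_config()  # use load_incluster_config() inside a cluster
apps = client.AppsV1Api()

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # scaling + self-healing: the controller keeps 3 pods
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web"}),
            spec=client.V1PodSpec(
                containers=[client.V1Container(name="web", image="nginx:1.25")]
            ),
        ),
    ),
)
apps.create_namespaced_deployment(namespace="default", body=deployment)

# Deploying a new version is a patch to the pod template; Kubernetes then
# replaces pods gradually (a rolling update) rather than all at once.
apps.patch_namespaced_deployment(
    name="web",
    namespace="default",
    body={"spec": {"template": {"spec": {"containers": [
        {"name": "web", "image": "nginx:1.26"}]}}}},
)
```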

What are the differences between EKS and ECS?

https://cloudonaut.io/eks-vs-ecs-orchestrating-containers-on-aws/

AWS Workshop for Kubernetes

This is a self-paced workshop designed for Development and Operations teams who would like to leverage Kubernetes on Amazon Web Services (AWS).

This workshop provides instructions for creating, managing, and scaling a Kubernetes cluster on AWS, as well as for deploying applications, scaling them, running stateless and stateful containers, performing service discovery between different microservices, and other similar tasks.

It also shows deep integration with several AWS technologies.

We recommend at least 2 hours to complete the workshop.

https://github.com/aws-samples/aws-workshop-for-kubernetes

How we improved Kubernetes Dashboard UI in 1.4 for your production needs

With the release of Kubernetes 1.4 last week, Dashboard – the official web UI for Kubernetes – has a number of exciting updates and improvements of its own. The past three months have been busy ones for the Dashboard team, and we’re excited to share the resulting features of that effort here. If you’re not familiar with Dashboard, the GitHub repo is a great place to get started.

A quick recap before unwrapping our shiny new features: Dashboard was initially released in March 2016. One of the focuses for Dashboard throughout its lifetime has been the onboarding experience; it’s a less intimidating way for Kubernetes newcomers to get started, and by showing multiple resources at once, it provides contextualization lacking in kubectl (the CLI). After that initial release though, the product team realized that fine-tuning for a beginner audience was getting ahead of ourselves: there were still fundamental product requirements that Dashboard needed to satisfy in order to have a productive UX to onboard new users to. That became our mission for this release: closing the gap between Dashboard and kubectl by showing more resources, leveraging a web UI’s strengths in monitoring and troubleshooting, and architecting this all in a user-friendly way.

http://blog.kubernetes.io/2016/10/Production-Kubernetes-Dashboard-UI-1.4-improvements_3.html

Kubernetes 1.4: Making it easy to run on Kubernetes anywhere

What’s new:

Cluster creation with two commands – To get started with Kubernetes, a user must provision nodes, install Kubernetes, and bootstrap the cluster. A common request from users is to have an easy, portable way to do this on any cloud (public, private, or bare metal).

  • Kubernetes 1.4 introduces ‘kubeadm’, which reduces bootstrapping to two commands, with no complex scripts involved. Once Kubernetes is installed, kubeadm init starts the master while kubeadm join joins the nodes to the cluster.
  • Installation is also streamlined by packaging Kubernetes with its dependencies for most major Linux distributions, including Red Hat and Ubuntu Xenial. This means users can now install Kubernetes using familiar tools such as apt-get and yum.
  • Add-on deployments, such as for an overlay network, can be reduced to one command by using a DaemonSet.
  • Enabling this simplicity is a new certificates API and its use for kubelet TLS bootstrap, as well as a new discovery API.

Expanded stateful application support – While cloud-native applications are built to run in containers, many existing applications need additional features to make it easy to adopt containers. Most commonly, these include stateful applications such as batch processing, databases and key-value stores. In Kubernetes 1.4, we have introduced a number of features simplifying the deployment of such applications, including:

  • ScheduledJob is introduced as Alpha so users can run batch jobs at regular intervals.
  • Init-containers are Beta, addressing the need to run one or more containers before starting the main application, for example to sequence dependencies when starting a database or multi-tier app.
  • Dynamic PVC Provisioning moved to Beta. This feature now enables cluster administrators to expose multiple storage provisioners and allows users to select them using a new Storage Class API object (see the sketch after this list).
  • Curated and pre-tested Helm charts for common stateful applications such as MariaDB, MySQL and Jenkins will be available for one-command launches using version 2 of the Helm Package Manager.
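To make the Storage Class item above concrete: a claim simply names a class that the administrator has exposed, and the matching provisioner creates the volume on demand. The sketch below is an illustration using the official Python client; the class name "fast", the namespace, and the size are invented, and the spec.storageClassName field shown is the form the API later settled on (in 1.4 itself the class was selected via a beta annotation).

```python
# Illustrative sketch of dynamic provisioning: the PVC names a StorageClass
# ("fast" is an invented example) and the matching provisioner creates the
# backing volume. In 1.4 this selection used a beta annotation; the
# storageClassName field below is the later, stable form.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "data"},
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "storageClassName": "fast",  # selects the admin-exposed provisioner
        "resources": {"requests": {"storage": "10Gi"}},
    },
}
core.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)
```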

Cluster federation API additions – One of the most requested capabilities from our global customers has been the ability to build applications with clusters that span regions and clouds.

  • Federated Replica Sets Beta – replicas can now span some or all clusters, enabling cross-region or cross-cloud replication. The total federated replica count and relative cluster weights / replica counts are continually reconciled by a federated replica-set controller to ensure you have the pods you need in each region / cloud.
  • Federated Services are now Beta, and secrets, events and namespaces have also been added to the federation API.
  • Federated Ingress Alpha – starting with Google Cloud Platform (GCP), users can create a single L7 globally load balanced VIP that spans services deployed across a federation of clusters within GCP. With Federated Ingress in GCP, external clients point to a single IP address and are sent to the closest cluster with usable capacity in any region or zone of the federation in GCP.

Container security support – Administrators of multi-tenant clusters require the ability to provide varying sets of permissions among tenants, infrastructure components, and end users of the system.

  • Pod Security Policy is a new object that enables cluster administrators to control the creation and validation of security contexts for pods/containers. Admins can associate service accounts, groups, and users with a set of constraints to define a security context.
  • AppArmor support is added, enabling admins to run a more secure deployment and to provide better auditing and monitoring of their systems. Users can configure a container to run in an AppArmor profile by setting a single field, as sketched below.
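As a sketch of that single field: in 1.4 the AppArmor profile was selected with a beta pod annotation keyed by container name. The manifest below is illustrative; the profile name k8s-default is an assumption and must already be loaded on each node.

```python
# Illustrative pod manifest: the beta annotation pins container "app" to an
# AppArmor profile already loaded on the host (profile name is invented).
from kubernetes import client, config

config.load_kube_config()

pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {
        "name": "apparmor-demo",
        "annotations": {
            # localhost/<profile> references a profile loaded on the node
            "container.apparmor.security.beta.kubernetes.io/app":
                "localhost/k8s-default",
        },
    },
    "spec": {"containers": [{"name": "app", "image": "nginx:1.25"}]},
}
client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```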

Infrastructure enhancements – We continue adding to the scheduler, storage and client capabilities in Kubernetes based on user and ecosystem needs.

  • Scheduler – introduces inter-pod affinity and anti-affinity in Alpha for users who want to customize how Kubernetes co-locates or spreads their pods (see the sketch after this list). It also adds priority scheduling for cluster add-ons such as DNS, Heapster, and the Kube Dashboard.
  • Disruption SLOs – Pod Disruption Budget is introduced to limit the impact of pods deleted by cluster management operations (such as node upgrades) at any one time.
  • Storage – New volume plugins for Quobyte and Azure Data Disk have been added.
  • Clients – Swagger 2.0 support is added, enabling non-Go clients.
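For the scheduler item above, here is a minimal anti-affinity sketch in the field form the API eventually adopted (the 1.4 alpha expressed the same thing through a scheduler annotation). The labels, names, and image are illustrative; the rule simply forbids two "web" pods from sharing a node.

```python
# Illustrative anti-affinity: no two pods labeled app=web may land on the
# same node (topologyKey kubernetes.io/hostname means "one per host").
from kubernetes import client, config

config.load_kube_config()

pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "web-1", "labels": {"app": "web"}},
    "spec": {
        "affinity": {
            "podAntiAffinity": {
                "requiredDuringSchedulingIgnoredDuringExecution": [{
                    "labelSelector": {"matchLabels": {"app": "web"}},
                    "topologyKey": "kubernetes.io/hostname",
                }],
            },
        },
        "containers": [{"name": "web", "image": "nginx:1.25"}],
    },
}
client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```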

Kubernetes Dashboard UI – lastly, a great-looking Kubernetes Dashboard UI with 90% CLI parity for at-a-glance management.

For a complete list of updates, see the release notes on GitHub. Apart from features, the most impressive aspect of Kubernetes development is the community of contributors. This is particularly true of the 1.4 release, the full breadth of which will unfold in upcoming weeks.

http://blog.kubernetes.io/2016/09/kubernetes-1.4-making-it-easy-to-run-on-kuberentes-anywhere.html