A Perl toolchain for building micro-services at scale

But why?!

This is the question we get asked immediately when we tell someone that we use Perl. Our extensive use of Perl to build many of our internal services often comes as a surprise, and we can understand why. Perl is a dinosaur among mainstream programming languages. It lacks the glamour of other, relatively younger languages. There is also a common misconception in the programming world that modern software engineering practices cannot be followed with a language like Perl. In this post, we hope to debunk that myth. We want to give you a glimpse of the developer experience (DX) here at Semantics3, where we write a lot of Perl code but still manage to employ the latest engineering best practices. We would like to highlight that we are able to do so with the help of a toolchain written entirely in Perl.

https://engineering.semantics3.com/2016/06/15/a-perl-toolchain-for-building-micro-services-at-scale/


Serverless Architectures

Serverless is a hot topic in the software architecture world. We’re already seeing books, open source frameworks, plenty of vendor products, and even a conference dedicated to the subject. But what is Serverless, and why is (or isn’t) it worth considering? Through this evolving publication I hope to enlighten you a little on these questions.

To start, we’ll look at the ‘what’ of Serverless, where I try to remain as neutral as I can about the benefits and drawbacks of the approach – we’ll look at those topics later.


What is Serverless?

Like many trends in software there’s no one clear view of what ‘Serverless’ is, and that isn’t helped by it really coming to mean two different but overlapping areas:

  1. Serverless was first used to describe applications that significantly or fully depend on 3rd-party applications / services (‘in the cloud’) to manage server-side logic and state. These are typically ‘rich client’ applications (think single-page web apps, or mobile apps) that use the vast ecosystem of cloud-accessible databases (like Parse, Firebase), authentication services (Auth0, AWS Cognito), etc. These types of services have previously been described as ‘(Mobile) Backend as a Service’, and I’ll be using ‘BaaS’ as a shorthand in the rest of this article.
  2. Serverless can also mean applications where some amount of server-side logic is still written by the application developer but, unlike traditional architectures, is run in stateless compute containers that are event-triggered, ephemeral (may only last for one invocation), and fully managed by a 3rd party. (Thanks to ThoughtWorks for their definition in their most recent Tech Radar.) One way to think of this is ‘Functions as a Service’ (FaaS). AWS Lambda is one of the most popular implementations of FaaS at present, but there are others. I’ll be using ‘FaaS’ as a shorthand for this meaning of Serverless throughout the rest of this article; a sketch of what this looks like follows the list.
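
To make the FaaS model concrete, here is a minimal sketch of deploying and invoking an event-triggered function with the AWS CLI. The function name, handler, runtime version, and role ARN are all hypothetical examples, not anything taken from the articles above:

    # Package a handler (an index.js exporting 'handler') and create the function.
    # All names and ARNs here are hypothetical examples.
    zip function.zip index.js
    aws lambda create-function \
      --function-name hello-faas \
      --runtime nodejs4.3 \
      --handler index.handler \
      --zip-file fileb://function.zip \
      --role arn:aws:iam::123456789012:role/lambda-basic-execution

    # Invoke it on demand; in practice it would usually be wired to an event source
    # such as an HTTP gateway, a queue, or an object-store notification:
    aws lambda invoke --function-name hello-faas --payload '{"name":"world"}' out.json

Note what is absent: there is no server to provision and no process to keep alive between invocations; scaling and availability are the provider’s problem.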

Mostly I’m going to talk about the second of these areas because it is the one that is newer, has significant differences to how we typically think about technical architecture, and has been driving a lot of the hype around Serverless.

However these concepts are related and, in fact, converging. A good example is Auth0 – they started initially with BaaS ‘Authentication as a Service’, but with Auth0 Webtask they are entering the FaaS space.

Furthermore, in many cases when developing a ‘BaaS-shaped’ application, especially when developing a ‘rich’ web-based app as opposed to a mobile app, you’ll likely still need some amount of custom server-side functionality. FaaS functions may be a good solution for this, especially if they are integrated to some extent with the BaaS services you’re using. Examples of such functionality include data validation (protecting against imposter clients) and compute-intensive processing (e.g., image or video manipulation).

http://martinfowler.com/articles/serverless.html

Using Docker Swarm to Create an Overlay Network

In a previous article, we discussed Docker Machine, a tool to create Docker hosts in the cloud. Docker Machine can be extremely handy for local testing if you are on Windows or OS X, but it also adds another dimension when you use it to start Docker hosts in your favorite cloud provider and/or create a cluster.

We then used Machine to go straight into an advanced subject and create a Docker Swarm, which is a cluster of Docker hosts. A cluster of Docker hosts is needed to run a truly distributed application in production. In this article, we will look at setting up networking for a Swarm so that containers can communicate with each other across hosts. A Swarm cluster still allows us to use Docker’s native single-host networking, but it also lets us create a network overlay backed by VXLAN. Containers started in this overlay can communicate with each other out of the box. This article will show you how to create, use, and test an overlay network using Docker Swarm.
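
To give a flavour of what the article covers, here is a rough sketch of the commands involved. The host, network, and image names are made up, and this assumes the standalone Swarm setup the article describes, where a multi-host overlay needs an external key-value store such as Consul:

    # Run a key-value store that Swarm and the overlay driver use for discovery:
    docker run -d -p 8500:8500 --name consul progrium/consul -server -bootstrap

    # Each node's Docker daemon must then be started with flags along the lines of:
    #   --cluster-store=consul://<kvstore-ip>:8500 --cluster-advertise=eth0:2376

    # Create the overlay network; it becomes visible on every node in the Swarm:
    docker network create --driver overlay --subnet 10.0.9.0/24 my-overlay

    # Containers on different hosts attached to this network reach each other by name:
    docker run -d --net=my-overlay --name web nginx
    docker run --rm --net=my-overlay busybox ping -c 3 web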

https://www.linux.com/learn/using-docker-swarm-create-overlay-network

Learn Docker by building a Microservice

If you are looking to get your hands dirty and learn all about Docker, then look no further!

In this article I’m going to show you how Docker works, what all the fuss is about, and how Docker can help with a basic development task – building a microservice.

We’ll use a simple Node.js service with a MySQL backend as an example, going from code running locally to containers running a microservice and database.
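
The container-level flow looks roughly like this; the image names, port, and credentials below are illustrative placeholders rather than the article’s exact values:

    # Start a MySQL container to act as the backend:
    docker run -d --name db \
      -e MYSQL_ROOT_PASSWORD=secret -e MYSQL_DATABASE=users mysql:5.7

    # Build an image for the Node.js service from a Dockerfile in the current directory:
    docker build -t users-service .

    # Run the service, linked to the database container:
    docker run -d --name users-service -p 8123:8123 --link db:db users-service

    # Check that the microservice responds:
    curl http://localhost:8123/users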

http://www.dwmkerr.com/learn-docker-by-building-a-microservice/

Kel – Kubernetes-based PaaS built in Python and Go

Kel™ is an open-source Platform as a Service (PaaS) from Eldarion® that makes it easy to manage web application deployment and hosting through the entire software lifecycle.

Kel helps DevOps professionals manage their application infrastructure through a layer of tools and components that make Kubernetes accessible and easier to use. Kel builds on Eldarion’s 7+ years of experience running Gondor, one of the leading Python / Django PaaS solutions.

http://www.kelproject.com/

A few common questions about LXD

At its simplest, LXD is a daemon which provides a REST API to drive LXC containers.

Its main goal is to provide a user experience that’s similar to that of virtual machines but using Linux containers rather than hardware virtualization.


How does LXD relate to Docker/Rkt?

This is by far the question we get the most, so let’s address it immediately!

LXD focuses on system containers, also called infrastructure containers. That is, an LXD container runs a full Linux system, exactly as it would when run on bare metal or in a VM.

Those containers will typically be long running and based on a clean distribution image. Traditional configuration management tools and deployment tools can be used with LXD containers exactly as you would use them for a VM, cloud instance or physical machine.
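
In practice that means you drive an LXD container much as you would a fresh VM. A minimal sketch, with example image and container names:

    lxc launch ubuntu:16.04 web01        # boot a full Ubuntu system container
    lxc exec web01 -- apt-get update     # manage it like any other machine
    lxc exec web01 -- bash               # or just open a shell inside it
    lxc list                             # see what's running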

In contrast, Docker focuses on ephemeral, stateless, minimal containers that won’t typically get upgraded or re-configured but instead just be replaced entirely. That makes Docker and similar projects much closer to a software distribution mechanism than a machine management tool.

The two models aren’t mutually exclusive either. You can absolutely use LXD to provide full Linux systems to your users who can then install Docker inside their LXD container to run the software they want.
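
As a sketch of that nesting (the container name is arbitrary; the security.nesting flag is what allows Docker to run inside):

    lxc launch ubuntu:16.04 docker-host -c security.nesting=true
    lxc exec docker-host -- apt-get install -y docker.io
    lxc exec docker-host -- docker run -d nginx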

https://www.stgraber.org/2016/03/11/lxd-2-0-introduction-to-lxd-112/

Ansible vs Chef

I wrote an earlier post about evaluating Ansible as an alternative to Chef. After spending many years with Chef, I’ve found that Ansible is a lot easier to manage with startups. It’s easier to train developers, it’s easier to manage inventory and orchestration, and it works reasonably well on the scale of thousands of hosts. And let’s face it, if you have more than that, you’ll have to start partitioning. If you’re using Puppet, you’ll have multiple Puppet masters; I assume the same holds for Salt. For Chef, there’s the enterprise offering, which gets pretty expensive, and then you’ll have to deal with slow search, so invariably there’s some usage of chef-solo going on and some self-configuration. But for most use cases, Ansible is just fine: it is quick to set up and ridiculously easy to maintain. This is a fairly advanced post comparing Chef to Ansible and the patterns that are available…
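
To illustrate the “quick to set up” point: Ansible is agentless, so against a plain SSH inventory the basics look like this (the hosts file and the webservers group are hypothetical examples):

    ansible webservers -i hosts -m ping                      # verify SSH connectivity
    ansible webservers -i hosts -m apt \
      -a "name=nginx state=present" --become                 # ad-hoc change on a group
    ansible-playbook -i hosts site.yml                       # run a full playbook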

http://tjheeta.github.io/2015/04/15/ansible-vs-chef/