This tutorial will demonstrate how you can use Corosync and Pacemaker with a Floating IP to create a high availability (HA) server infrastructure on DigitalOcean.
Corosync is an open source program that provides cluster membership and messaging capabilities, often referred to as the messaging layer, to client servers. Pacemaker is an open source cluster resource manager (CRM), a system that coordinates resources and services that are managed and made highly available by a cluster. In essence, Corosync enables servers to communicate as a cluster, while Pacemaker provides the ability to control how the cluster behaves.
When completed, the HA setup will consist of two Ubuntu 14.04 servers in an active/passive configuration. This will be accomplished by pointing a Floating IP, which is how your users will access your web service, at the primary (active) server unless a failure is detected. If Pacemaker detects that the primary server is unavailable, the secondary (passive) server will automatically run a script that reassigns the Floating IP to itself via the DigitalOcean API. Subsequent network traffic to the Floating IP will then be directed to the secondary server, which will act as the active server and process the incoming traffic.
This diagram demonstrates the concept of the described setup:
Note: This tutorial only covers setting up active/passive high availability at the gateway level. That is, it includes the Floating IP, and the load balancer servers—Primary and Secondary. Furthermore, for demonstration purposes, instead of configuring reverse-proxy load balancers on each server, we will simply configure them to respond with their respective hostname and public IP address.
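For instance, a minimal Nginx server block along the following lines would be enough for that demonstration. This is only a sketch, not necessarily the exact configuration used later in the tutorial; `$hostname` and `$server_addr` are built-in Nginx variables:

```nginx
server {
    listen 80 default_server;
    default_type text/plain;

    location / {
        # Reply with this Droplet's hostname and the address that
        # accepted the connection, so the two servers can be told apart.
        return 200 "Droplet: $hostname ($server_addr)";
    }
}
```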
To achieve this goal, we will follow these steps:
- Create two Droplets that will receive traffic
- Create a Floating IP and assign it to one of the Droplets
- Install and configure Corosync
- Install and configure Pacemaker
- Configure Floating IP Reassignment Cluster Resource
- Test failover
- Configure Nginx Cluster Resource
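To make the reassignment step concrete, here is a minimal sketch of what a Floating IP reassignment script can look like, using the DigitalOcean API v2 Floating IP actions endpoint. The token, IP, and Droplet ID below are placeholders, and the script the cluster resource actually invokes may differ:

```python
import os
import requests  # third-party HTTP client: pip install requests

# Placeholders -- substitute your own values.
api_token = os.environ["DO_API_TOKEN"]   # a DigitalOcean personal access token
floating_ip = "203.0.113.1"              # the Floating IP to move
droplet_id = 12345678                    # this server's Droplet ID

# DigitalOcean API v2: assign the Floating IP to the given Droplet.
resp = requests.post(
    "https://api.digitalocean.com/v2/floating_ips/{}/actions".format(floating_ip),
    headers={"Authorization": "Bearer {}".format(api_token)},
    json={"type": "assign", "droplet_id": droplet_id},
)
resp.raise_for_status()
print(resp.json()["action"]["status"])  # e.g. "in-progress"
```

In the full setup, Pacemaker runs a script like this on the secondary server as part of the Floating IP cluster resource, so the reassignment happens automatically on failover.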
At GitHub we use MySQL as our main datastore. While repository data lives in git, metadata is stored in MySQL: Issues, Pull Requests, Comments, and so on. We also authenticate against MySQL via a custom git proxy (babeld). To serve the high load GitHub operates at, we use MySQL replication to scale out read load.
We have different clusters that provide different types of services, but the single-writer-multiple-readers design applies to them all. Depending on traffic growth, application demand, operational tasks, or other constraints, we take replicas in and out of our pools. Depending on workload, some replicas may lag more than others.
Displaying up-to-date data is important. We have tooling that helps us keep replication lag at a minimum, and typically it doesn't exceed 1 second. However, lags do happen, and when they do, we want to put aside the lagging replicas, let them catch their breath, and avoid sending traffic their way until they have caught up.
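As an illustration of what such a check can look like (this is a generic sketch, not GitHub's actual tooling, and heartbeat-based measurements are generally more accurate), a replica's lag can be read from `SHOW SLAVE STATUS`; the hostname and credentials below are placeholders:

```python
import pymysql  # third-party MySQL driver: pip install pymysql

# Placeholder connection details for one replica.
conn = pymysql.connect(
    host="replica-1.example.com",
    user="monitor",
    password="secret",
    cursorclass=pymysql.cursors.DictCursor,
)

with conn.cursor() as cur:
    cur.execute("SHOW SLAVE STATUS")
    status = cur.fetchone()  # None if this server is not a replica

# Seconds_Behind_Master is an integer, or None when replication is broken.
lag = status["Seconds_Behind_Master"]
max_lag = 1  # the rough threshold mentioned above

if lag is None or lag > max_lag:
    print("replica is lagging; take it out of the pool")
else:
    print("replica is healthy; lag = {}s".format(lag))
```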
We set out to create a self-managing topology that would exclude lagging replicas automatically, handle disasters gracefully, and yet allow for complete human control and visibility.
HAProxy is a powerful load balancer. It embeds many options and many configuration styles in order to solve a wide range of load balancing problems. However, HAProxy is not universal, and some special or specific problems have no solution in the native software.
This text is not a full explanation of the Lua syntax.
This text is not a replacement for the HAProxy Lua API documentation, which can be found at the project root, in the documentation directory. The goal of this text is to show how Lua is implemented in HAProxy and how to use it efficiently.
However, it can be read by Lua beginners, and some examples are explained in detail.
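To give a first taste, here is a minimal, hypothetical example of extending HAProxy with Lua: a custom action registered through core.register_action() and invoked from an http-request rule. File paths and names are illustrative:

```lua
-- hello.lua: a minimal HAProxy Lua action.
-- Registers an action named "set-hello" for use in http-request rules.
core.register_action("set-hello", { "http-req" }, function(txn)
    -- Add a header to the request before it is forwarded.
    txn.http:req_add_header("x-hello", "world")
end)
```

The script is loaded from the global section of the HAProxy configuration, and the action is referenced with the lua. prefix:

```
global
    lua-load /etc/haproxy/hello.lua

defaults
    mode http
    timeout connect 5s
    timeout client 30s
    timeout server 30s

frontend fe_main
    bind :8080
    http-request lua.set-hello
    default_backend be_app

backend be_app
    server s1 127.0.0.1:8000
```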