A series of useful Nginx configuration tips.
This post is adapted from a presentation at nginx.conf 2016 by Yichun Zhang, Founder and CEO of OpenResty, Inc. This is the first of two parts of the adaptation. In this part, Yichun describes OpenResty’s capabilities and goes over web application use cases built atop OpenResty. In Part 2, Yichun looks at what a domain-specific language is in more detail.
You can view the complete presentation on YouTube.
Gixy is a tool to analyze Nginx configuration. The main goal of Gixy is to prevent security misconfiguration and automate flaw detection.
Currently supported Python versions are 2.7 and 3.5+.
A configuration that provides HTTP/2 in every browser, load balancing with automatic failover, IPv6, a branded ‘sorry’ page, a separate blog server, HTML5 Server-Sent Events, and an A+ HTTPS rating.
HTTP/2 support in all browsers
For speed! One of the pages on our blog loads in 1.9s over HTTP/1.1. The same page loads in 600ms over HTTP/2.
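Enabling HTTP/2 is a small change to the TLS listener. A minimal sketch, with placeholder hostname and certificate paths:

```nginx
server {
    # The http2 parameter on a TLS listener enables HTTP/2 for
    # supporting browsers (newer nginx releases also offer a
    # separate "http2 on;" directive).
    listen 443 ssl http2;
    server_name example.com;

    ssl_certificate     /etc/ssl/example.com.crt;  # placeholder paths
    ssl_certificate_key /etc/ssl/example.com.key;
}
```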
IPv6 support
Useful if you’re working on IoT devices, which often require IPv6.
Load balancing between multiple app servers with automatic failover.
So you can upgrade your app without taking it offline.
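Passive failover falls out of a standard `upstream` block: if a server stops answering, nginx stops sending it traffic for a while and retries the request elsewhere. A sketch with hypothetical backend addresses:

```nginx
upstream app {
    # After 3 failures within 30s, a server is skipped for 30s.
    server 10.0.0.10:8080 max_fails=3 fail_timeout=30s;
    server 10.0.0.11:8080 max_fails=3 fail_timeout=30s;
}

server {
    listen 80;
    location / {
        proxy_pass http://app;
        # If one backend errors out, retry the request on the other.
        proxy_next_upstream error timeout http_502 http_503;
    }
}
```

To upgrade without downtime, take one backend out of the pool, upgrade it, put it back, and repeat with the other.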
A branded ‘sorry’ page
Just in case you break both the app servers at the same time.
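When every backend is down, nginx generates a 502 itself, so a branded static page can be wired in with `error_page`. A sketch, assuming the page lives at a local path:

```nginx
server {
    listen 80;
    location / {
        proxy_pass http://app;
        # Also replace error pages the backends themselves return.
        proxy_intercept_errors on;
        error_page 502 503 504 /sorry.html;
    }
    location = /sorry.html {
        root /var/www/static;  # branded page served directly by nginx
    }
}
```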
A separate server that handles blogs and marketing content
So you can keep your blog independent of the main app and update it on its own schedule.
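Splitting the blog out is just a matter of routing a path prefix to a different upstream. A sketch with a hypothetical internal blog host:

```nginx
server {
    listen 80;
    server_name example.com;

    # Blog and marketing content live on their own server.
    location /blog/ {
        proxy_pass http://blog.internal.example;
        proxy_set_header Host $host;
    }

    # Everything else goes to the main app.
    location / {
        proxy_pass http://app;
    }
}
```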
Correct proxy headers for working GeoIP and logging.
So your app servers can see the proper origin of browser requests, despite the proxy. Because asking customers for their country when you already know is a waste of their time.
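The conventional headers for this are set with `proxy_set_header`, so the app (and its GeoIP lookups and logs) sees the browser's address and scheme rather than the proxy's:

```nginx
location / {
    proxy_pass http://app;
    proxy_set_header Host              $host;
    proxy_set_header X-Real-IP         $remote_addr;
    # Appends the client address to any existing X-Forwarded-For chain.
    proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}
```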
Support for HTML5 Server-Sent Events
For realtime streaming.
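SSE streams are long-lived responses, so the main thing is to stop nginx from buffering them and from timing them out. A sketch for a hypothetical `/events` endpoint:

```nginx
location /events {
    proxy_pass http://app;
    proxy_http_version 1.1;
    proxy_set_header Connection "";  # keepalive to the upstream
    proxy_buffering off;             # deliver each event immediately
    proxy_read_timeout 1h;           # keep idle streams open
}
```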
An A+ on the SSL Labs test
So the users can connect privately to your site.
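The exact grading criteria shift over time, but an A+ on SSL Labs generally means modern protocols only plus HSTS. A sketch of the relevant directives:

```nginx
ssl_protocols TLSv1.2 TLSv1.3;
ssl_prefer_server_ciphers on;
ssl_session_cache shared:SSL:10m;
# HSTS is typically required for the "+" in A+.
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
```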
The various www vs non-www, HTTP vs HTTPS combinations redirected to a single HTTPS site.
This ensures there’s only one, secure copy of every resource, for both clarity and SEO purposes.
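All three non-canonical combinations can be collapsed with permanent redirects. A sketch, assuming `https://example.com` is the canonical site:

```nginx
# http://example.com and http://www.example.com → canonical HTTPS site.
server {
    listen 80;
    server_name example.com www.example.com;
    return 301 https://example.com$request_uri;
}

# https://www.example.com → canonical HTTPS site.
server {
    listen 443 ssl;
    server_name www.example.com;
    ssl_certificate     /etc/ssl/example.com.crt;  # placeholder paths
    ssl_certificate_key /etc/ssl/example.com.key;
    return 301 https://example.com$request_uri;
}
```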
We encourage you to check out the official nginx docs. However…
This deployment guide explains how to configure global load balancing (GLB) of traffic for web domains hosted in Amazon Web Services (AWS) Elastic Compute Cloud (EC2). For high availability and improved performance, you set up multiple backend servers (web servers, application servers, or both) for a domain in two or more AWS regions. Within each region, NGINX Plus load balances traffic across the backend servers.
The AWS Domain Name System (DNS) service, Amazon Route 53, performs global load balancing by responding to a DNS query from a client with the DNS record for the region hosting the domain that is closest to the client. For best performance and predictable failover between regions, “closeness” is measured in terms of network latency rather than the actual geographic location of the client.
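So Route 53 latency records pick the region, and within each region an ordinary NGINX Plus configuration balances the local backends. A per-region sketch with hypothetical private addresses:

```nginx
upstream backend_us_east {
    zone backend_us_east 64k;  # shared memory for runtime state
    server 172.31.0.10;
    server 172.31.0.11;
}

server {
    listen 80;
    location / {
        proxy_pass http://backend_us_east;
    }
}
```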
Kubernetes includes a feature called Services, which serves as a kind of load balancer for Pods. When a Pod misbehaves or otherwise stops working, sometimes you’ll want to remove it from the Service without killing it.
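This works because a Service selects Pods purely by label, so removing the matching label detaches a Pod from the Service while leaving it running. A sketch with hypothetical names:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web        # only Pods carrying this label receive traffic
  ports:
  - port: 80
    targetPort: 8080
```

Removing the label (e.g. `kubectl label pod web-abc123 app-`) takes that Pod out of rotation, leaving it alive for debugging.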
This tutorial will demonstrate how you can use Corosync and Pacemaker with a Floating IP to create a high availability (HA) server infrastructure on DigitalOcean.
Corosync is an open source program that provides cluster membership and messaging capabilities, often referred to as the messaging layer, to client servers. Pacemaker is an open source cluster resource manager (CRM), a system that coordinates resources and services that are managed and made highly available by a cluster. In essence, Corosync enables servers to communicate as a cluster, while Pacemaker provides the ability to control how the cluster behaves.
When completed, the HA setup will consist of two Ubuntu 14.04 servers in an active/passive configuration. This will be accomplished by pointing a Floating IP, which is how your users will access your web service, at the primary (active) server unless a failure is detected. If Pacemaker detects that the primary server is unavailable, the secondary (passive) server will automatically run a script that reassigns the Floating IP to itself via the DigitalOcean API. Subsequent network traffic to the Floating IP will then be directed to the secondary server, which will act as the active server and process the incoming traffic.
This diagram demonstrates the concept of the described setup:
Note: This tutorial only covers setting up active/passive high availability at the gateway level. That is, it includes the Floating IP, and the load balancer servers—Primary and Secondary. Furthermore, for demonstration purposes, instead of configuring reverse-proxy load balancers on each server, we will simply configure them to respond with their respective hostname and public IP address.
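For that demonstration setup, each server can answer with a trivial nginx config instead of a real reverse proxy. A sketch using nginx's built-in variables:

```nginx
# Minimal responder for the demo: identifies which server answered.
server {
    listen 80 default_server;
    location / {
        return 200 "Droplet: $hostname ($server_addr)\n";
    }
}
```

Hitting the Floating IP before and after a failover then shows which Droplet is currently active.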
To achieve this goal, we will follow these steps:
- Create 2 Droplets that will receive traffic
- Create Floating IP and assign it to one of the Droplets
- Install and configure Corosync
- Install and configure Pacemaker
- Configure Floating IP Reassignment Cluster Resource
- Test failover
- Configure Nginx Cluster Resource
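The two cluster-resource steps above are typically expressed through the crm shell. A hedged sketch — the `ocf:digitalocean:floatip` agent stands in for whatever reassignment script you create, and the token and IP are placeholders:

```
# crm shell sketch; the floatip agent and its parameters are placeholders
crm configure primitive FloatIP ocf:digitalocean:floatip \
    params do_token=YOUR_API_TOKEN floating_ip=203.0.113.10
crm configure primitive Nginx ocf:heartbeat:nginx \
    op monitor interval=10s
# Keep the Floating IP on whichever node is running nginx.
crm configure colocation FloatIP-with-Nginx inf: FloatIP Nginx
```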
Editor – This is the fourth in a series of blog posts that explore the new features in NGINX Plus R10 in depth. This list will be expanded as later articles are published.
- Authenticating API Clients with JWT and NGINX Plus
- NGINX Plus R10 Harnesses IBM POWER
- Authenticating Users to Existing Applications with OpenID Connect and NGINX Plus