Lessons learned scaling PostgreSQL database to 1.2bn records/month

“This isn’t my first rodeo with large datasets. The authentication and product management database that I designed for the largest UK public Wi-Fi provider had impressive volumes too. We were tracking authentication for millions of devices daily. However, that project had funding that allowed us to pick any hardware, any supporting services and hire any DBAs to assist with replication/data warehousing/troubleshooting. Furthermore, all analytics queries/reporting were done off logical replicas, and there were multiple sysadmins looking after the supporting infrastructure. This, by contrast, was a venture of my own, with limited funding and 20x the volume.

Others’ mistakes

This is not to say that if we did have loadsamoney we would have spent it on purchasing top-of-the-line hardware, flashy monitoring systems or DBAs (okay, maybe having a dedicated DBA would have been nice). Over many years of consulting I have developed the view that the root of all evil lies in the unnecessarily complex data processing pipeline. You don’t need a message queue for ETL and you don’t need an application-layer cache for database queries. More often than not, these are workarounds for underlying database issues (e.g. latency, poor indexing strategy) that create more issues down the line. In an ideal scenario, you want to have all data contained within a single database and all data loading operations abstracted into atomic transactions. My goal was not to repeat these mistakes.
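
A minimal sketch of that “atomic load” principle in Go’s database/sql: either the whole batch commits or none of it does. The events table, its columns and the lib/pq driver here are illustrative assumptions, not the article’s actual schema:

    package loader

    import (
        "context"
        "database/sql"

        _ "github.com/lib/pq" // PostgreSQL driver (assumed; any driver works)
    )

    // loadBatch abstracts one data loading operation into a single atomic
    // transaction: every row lands, or none do. Table and columns are hypothetical.
    func loadBatch(ctx context.Context, db *sql.DB, rows [][2]string) error {
        tx, err := db.BeginTx(ctx, nil)
        if err != nil {
            return err
        }
        defer tx.Rollback() // no-op after a successful Commit

        for _, r := range rows {
            if _, err := tx.ExecContext(ctx,
                `INSERT INTO events (source, payload) VALUES ($1, $2)`,
                r[0], r[1]); err != nil {
                return err
            }
        }
        return tx.Commit()
    }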

Our goals

As you have already guessed, our PostgreSQL database became the central piece of the business (aptly called ‘mother’, although my co-founder insists that my calling various infrastructure components ‘mother’, ‘mothership’, ‘motherland’, etc. is worrying). We don’t have a standalone message queue service, cache service or replicas for data warehousing. Instead of maintaining the supporting infrastructure, I have dedicated my efforts to eliminating any bottlenecks by minimizing latency, provisioning the most suitable hardware, and carefully planning the database schema. What we have is an easy-to-scale infrastructure with a single database and many data processing agents. I love the simplicity of it — if something breaks, we can pinpoint and fix the issue within minutes. However, a lot of mistakes were made along the way — this article summarizes some of them…”

https://medium.com/@gajus/lessons-learned-scaling-postgresql-database-to-1-2bn-records-month-edc5449b3067

Containers the hard way: Gocker: A mini Docker written in Go

They are popular and they are misunderstood. Containers have become the default way applications are packaged and run on servers, initially popularized by Docker. Now, Docker itself is misunderstood. It is the name of a company and a command (a suite of commands, rather) that allows you to manage containers (create, run, delete, network) easily. Containers themselves, however, are created from a set of operating system primitives. In this article, we shall concern ourselves with containers on the Linux operating system and simply act as though containers on Windows do not exist at all.

There is no single system call under Linux that creates containers. They are a loose construct made by utilizing Linux namespaces and control groups or cgroups…
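
The core of that construct fits in a few lines of Go. This is a minimal sketch, not Gocker’s actual code: it asks the kernel for fresh UTS, PID and mount namespaces when spawning a child process (Linux only, and typically requires root):

    package main

    import (
        "os"
        "os/exec"
        "syscall"
    )

    func main() {
        // Run a shell inside new UTS, PID and mount namespaces. A real
        // container runtime layers cgroups, a pivoted root filesystem and
        // network namespaces on top of this primitive.
        cmd := exec.Command("/bin/sh")
        cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
        cmd.SysProcAttr = &syscall.SysProcAttr{
            Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID | syscall.CLONE_NEWNS,
        }
        if err := cmd.Run(); err != nil {
            os.Exit(1)
        }
    }

Inside the new PID namespace the shell sees itself as PID 1, which is the isolation the article builds on.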

https://unixism.net/2020/06/containers-the-hard-way-gocker-a-mini-docker-written-in-go/

Leveraging ULIDs to create order in unordered datastores

The rise of distributed data stores and the general decomposition of systems into smaller pieces means that coordination between each server, service, or function is less available. In my first applications, unique ID generation meant setting auto_increment=True on a column in the SQL database. Easy, done, no problem. Today, each microservice has its own data source(s) and NoSQL stores are common. Every NoSQL DB is “NoSQL” in its own way, but they usually eschew coordinated and single-writer solutions in the name of reliability/performance/both. You can’t have an auto-increment column without implementing the coordination client-side.

Using numbers as identifiers also creates problems. Auto-incrementing can lead to enumeration-based attacks. Fields can have fixed sizes. These issues can go unnoticed until you overflow the uint32 field, and now your logs are a pile of ID conflict errors. Instead of integers, we can use a different kind of fixed-length field and make it non-sequential so that different hosts can generate IDs without a central coordinating point.

UUIDs are an improvement and avoid collisions in distributed settings, but because they are strictly random there is no easy way to sort them or determine even a rough order. Segment blogged a while ago about one replacement for UUIDs, the KSUID (K-Sortable Unique IDentifier), but it has limitations and uses a strange 14e8 offset to avoid running out of epoch time in the next 100 years.

Enter the Universally Unique Lexicographically Sortable Identifier (ULID). These are sortable, high-entropy identifiers that we can generate anywhere in our pipeline without coordination and have confidence that there won’t be collisions. A ULID looks like 01E5TZRCM5WZYPB2BH7KMYR5HT: the first 10 characters encode a 48-bit millisecond timestamp and the remaining 16 encode 80 bits of randomness.
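
Generating one is a couple of lines in Go. A sketch using one common library, github.com/oklog/ulid/v2 (an assumption; the article itself is library-agnostic):

    package main

    import (
        "crypto/rand"
        "fmt"
        "time"

        "github.com/oklog/ulid/v2" // assumed ULID library
    )

    func main() {
        // Monotonic entropy keeps IDs generated within the same millisecond
        // in strictly increasing order on this host.
        entropy := ulid.Monotonic(rand.Reader, 0)
        id := ulid.MustNew(ulid.Timestamp(time.Now()), entropy)
        fmt.Println(id) // e.g. 01E5TZRCM5WZYPB2BH7KMYR5HT
    }

Because the timestamp leads, plain lexicographic sorting of the encoded strings is also creation-time ordering, which is what makes ULIDs usable as range-scannable keys in unordered stores.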

https://www.trek10.com/blog/leveraging-ulids-to-create-order-in-unordered-datastores

A new Go API for Protocol Buffers

Introduction

We are pleased to announce the release of a major revision of the Go API for protocol buffers, Google’s language-neutral data interchange format.

Motivations for a new API

The first protocol buffer bindings for Go were announced by Rob Pike in March of 2010. Go 1 would not be released for another two years.

In the decade since that first release, the package has grown and developed along with Go. Its users’ requirements have grown too.

Many people want to write programs that use reflection to examine protocol buffer messages. The reflect package provides a view of Go types and values, but omits information from the protocol buffer type system. For example, we might want to write a function that traverses a log entry and clears any field annotated as containing sensitive data. The annotations are not part of the Go type system.
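
The new protoreflect package makes that function writable. A sketch, in which policypb and its E_Sensitive bool field extension are hypothetical stand-ins for whatever annotation a real schema would define:

    import (
        "google.golang.org/protobuf/proto"
        "google.golang.org/protobuf/reflect/protoreflect"
        "google.golang.org/protobuf/types/descriptorpb"

        policypb "example.com/policypb" // hypothetical package defining the annotation
    )

    // Redact clears every populated field annotated with the hypothetical
    // (policypb.sensitive) field option.
    func Redact(pb proto.Message) {
        m := pb.ProtoReflect()
        m.Range(func(fd protoreflect.FieldDescriptor, _ protoreflect.Value) bool {
            opts := fd.Options().(*descriptorpb.FieldOptions)
            if proto.GetExtension(opts, policypb.E_Sensitive).(bool) {
                m.Clear(fd)
            }
            return true // continue over the remaining populated fields
        })
    }

A full version would also recurse into message-valued fields; this sketch only handles the top level.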

Another common desire is to use data structures other than the ones generated by the protocol buffer compiler, such as a dynamic message type capable of representing messages whose type is not known at compile time.
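
The new module ships such a type as dynamicpb, which builds messages directly from descriptors. A minimal sketch; for brevity it borrows the descriptor of a compiled-in well-known type, whereas a truly dynamic program would obtain descriptors at run time (e.g. from a FileDescriptorSet):

    package main

    import (
        "fmt"

        "google.golang.org/protobuf/encoding/prototext"
        "google.golang.org/protobuf/reflect/protoreflect"
        "google.golang.org/protobuf/types/dynamicpb"
        "google.golang.org/protobuf/types/known/durationpb"
    )

    func main() {
        // Obtain a message descriptor; here from a generated type, but it
        // could equally come from descriptors loaded at run time.
        md := (&durationpb.Duration{}).ProtoReflect().Descriptor()

        // Build and populate the message without its generated Go struct.
        msg := dynamicpb.NewMessage(md)
        msg.Set(md.Fields().ByName("seconds"), protoreflect.ValueOfInt64(90))

        fmt.Println(prototext.Format(msg)) // seconds: 90
    }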

We also observed that a frequent source of problems was that the proto.Message interface, which identifies values of generated message types, does very little to describe the behavior of those types. When users create types that implement that interface (often inadvertently by embedding a message in another struct) and pass values of those types to functions expecting a generated message value, programs crash or behave unpredictably.

All three of these problems have a common cause, and a common solution: The Message interface should fully specify the behavior of a message, and functions operating on Message values should freely accept any type that correctly implements the interface.

Since it is not possible to change the existing definition of the Message type while keeping the package API compatible, we decided that it was time to begin work on a new, incompatible major version of the protobuf module.

Today, we’re pleased to release that new module. We hope you like it.

https://blog.golang.org/a-new-go-api-for-protocol-buffers