Learn Docker by building a Microservice

If you are looking to get your hands dirty and learn all about Docker, then look no further!

In this article I’m going to show you how Docker works, what all the fuss is about, and how Docker can help with a basic development task – building a microservice.

We’ll use a simple Node.js service with a MySQL backend as an example, going from code running locally to containers running a microservice and database.


Records: SQL for Humans™

“Records is a very simple, but powerful, library for making raw SQL queries to most relational databases.

Just write SQL. No bells, no whistles. This common task can be surprisingly difficult with the standard tools available. This library strives to make this workflow as simple as possible, while providing an elegant interface to work with your query results.

Database support includes Postgres, MySQL, SQLite, Oracle, and MS-SQL (drivers not included)…”
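Records itself may not be on hand everywhere, but the workflow it streamlines can be sketched with the standard library’s sqlite3 module; the comments note what the equivalent Records calls would look like, per its documented API:

```python
import sqlite3

# In-memory stand-in for a real database URL; with Records this would be
# db = records.Database('mysql://user:pass@host/db') and db.query(sql).
conn = sqlite3.connect(":memory:")
conn.row_factory = sqlite3.Row  # rows act like dicts, close to Records' Record objects

conn.execute("CREATE TABLE users (name TEXT, active INTEGER)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("alice", 1), ("bob", 0)])

# Just write SQL -- no ORM layer in between.
rows = conn.execute("SELECT name FROM users WHERE active = 1").fetchall()
print([row["name"] for row in rows])
```

The appeal of Records is that it collapses the connect/cursor/row_factory boilerplate above into one `Database` object and one `query` call, with the same “just write SQL” result objects.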


MariaDB 10.1 can do 1 million queries per second

The benchmark is sysbench-mariadb (sysbench trunk with a fix for a more scalable random number generator) OLTP simplified to do 1000 point selects per transaction. The data set is 1 million rows in 20 tables. Fewer tables can be used, but below 4 tables the performance drops somewhat due to a hot spot in the table definition cache.

This is the my.cnf used for this test:

max_connections = 400
table_open_cache = 800
query_cache_type = 0
innodb_buffer_pool_size = 512M
innodb_buffer_pool_instances = 10
innodb_adaptive_hash_index_partitions = 20

And this is the sysbench command line:

sysbench-mariadb --test=lua/oltp.lua --oltp-tables-count=20 \
--oltp-table-size=50000 --num-threads=... --oltp-read-only=on \
--oltp-point-selects=1000 --oltp-distinct-ranges=0 \
--oltp-simple-ranges=0 --oltp-sum-ranges=0 --oltp-order-ranges=0 \
--max-time=100 --max-requests=0 run
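A quick sanity check on the headline number: with --oltp-point-selects=1000, every transaction issues 1,000 point selects, so the arithmetic behind “1 million queries per second” looks like this (a sketch of the math only, not output from the benchmark; the thread count is a made-up figure, since the post leaves --num-threads=... unspecified):

```python
point_selects_per_txn = 1000   # --oltp-point-selects=1000
target_qps = 1_000_000         # the headline claim

# Transactions per second needed across all sysbench threads:
txns_per_sec = target_qps / point_selects_per_txn
print(txns_per_sec)  # 1000.0

# Spread over a hypothetical 200 client threads:
threads = 200
txns_per_thread_per_sec = txns_per_sec / threads
print(txns_per_thread_per_sec)  # 5.0
```

Batching 1,000 selects per transaction is what makes the target reachable: the server only has to sustain on the order of a thousand transactions per second, with each one amortizing its overhead across many reads.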


Sharding Pinterest: How we scaled our MySQL fleet

“This is a technical dive into how we split our data across many MySQL servers. We finished launching this sharding approach in early 2012, and it’s still the system we use today to store our core data.

Before we discuss how to split the data, let’s be intimate with our data. Mood lighting, chocolate covered strawberries, Star Trek quotes…

Pinterest is a discovery engine for everything that interests you. From a data perspective, Pinterest is the largest human curated interest graph in the world. There are more than 50 billion Pins that have been saved by Pinners onto one billion boards. People repin and like other Pins (roughly a shallow copy), follow other Pinners, boards and interests, and view a home feed of all the Pinners, boards and interests they follow. Great! Now make it scale!
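The full article goes on to describe packing the shard number directly into each 64-bit object ID, so any ID can be routed without a lookup service. A generic sketch of that idea (the bit widths and field names here are illustrative assumptions, not necessarily Pinterest’s exact layout):

```python
# Illustrative 64-bit ID layout: shard id | object type | local (per-shard) id.
SHARD_BITS, TYPE_BITS, LOCAL_BITS = 16, 10, 36

def make_id(shard_id: int, type_id: int, local_id: int) -> int:
    """Pack shard, object type, and per-shard row id into one integer."""
    assert shard_id < (1 << SHARD_BITS)
    assert type_id < (1 << TYPE_BITS)
    assert local_id < (1 << LOCAL_BITS)
    return (shard_id << (TYPE_BITS + LOCAL_BITS)) | (type_id << LOCAL_BITS) | local_id

def parse_id(obj_id: int):
    """Recover (shard, type, local id) -- routing a query needs no directory."""
    local_id = obj_id & ((1 << LOCAL_BITS) - 1)
    type_id = (obj_id >> LOCAL_BITS) & ((1 << TYPE_BITS) - 1)
    shard_id = obj_id >> (TYPE_BITS + LOCAL_BITS)
    return shard_id, type_id, local_id

pin_id = make_id(shard_id=3429, type_id=1, local_id=7075733)
print(parse_id(pin_id))  # (3429, 1, 7075733)
```

The design choice worth noting: because the shard is derivable from the ID itself, shards can be moved between physical MySQL servers by updating one small config map, and no row ever needs to be renumbered.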

Growing pains

In 2011, we hit traction. By some estimates, we were growing faster than any previous startup. Around September 2011, every piece of our infrastructure was over capacity. We had several NoSQL technologies, all of which eventually broke catastrophically. We also had a boatload of MySQL slaves we were using for reads, which caused lots of irritating bugs, especially with caching. We re-architected our entire data storage model. To be effective, we carefully crafted our requirements…”


Getting out of MySQL Character Set Hell

“After several days, a lot of Googling, reading of (mostly unhelpful) support mailing lists and much experimentation, I felt I had accumulated a pretty solid understanding of how things had gotten the way they did in our customer’s database. What’s more, unlike most of the other blog articles and web pages you’ll probably read on this, I felt I had discovered a relatively simple procedure for getting out of this situation. I wasn’t able to find any other authoritative source on this on the internet; in fact, most of the other sources I’ve seen have said you really don’t want to be in this situation, while offering little help as to what to do about it if you’re already there. So I thought writing a public document on the subject might help some other systems administrators out there who find themselves unexpectedly in the middle of MySQL Character Set Hell…”
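The usual root of character set hell is double encoding: UTF-8 bytes written into a connection or column that MySQL believes is latin1. A small Python sketch of how the corruption arises and why a byte-level round trip can reverse it (this shows the underlying mechanism, not the article’s exact repair procedure):

```python
original = "café"

# UTF-8 bytes stored through a latin1 connection: each UTF-8 byte
# gets reinterpreted as its own latin1 character, producing mojibake.
mojibake = original.encode("utf-8").decode("latin-1")
print(mojibake)  # cafÃ©

# The escape route: latin1 maps characters 1:1 back to the raw bytes,
# which can then be decoded as the UTF-8 they always were.
repaired = mojibake.encode("latin-1").decode("utf-8")
print(repaired)  # café
```

The same round trip is why the data is usually recoverable at all: as long as nothing has truncated or re-encoded the bytes a second time, the original UTF-8 is still sitting intact inside the “garbage.”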


MySQL dumps

“As part of the HTTP Archive project, I create MySQL dumps for each crawl (on the 1st and 15th of each month). You can access the list of dumps from the downloads page. Several people use these dumps, most notably Ilya Grigorik, who imports the data into Google BigQuery.

For the last year I’ve hesitated on many feature requests because they require schema changes. I wasn’t sure how changing the schema would affect the use of the dump files that preceded the change. This blog post summarizes my findings…”


Under the hood: MySQL Pool Scanner (MPS)

“Facebook has one of the largest MySQL database clusters in the world. This cluster comprises many thousands of servers across multiple data centers on two continents.

Operating a cluster of this size with a small team is achieved by automating nearly everything a conventional MySQL Database Administrator (DBA) might do so that the cluster can almost run itself. One of the core components of this automation is a system we call MPS, short for “MySQL Pool Scanner.”

MPS is a sophisticated state machine written mostly in Python. It replaces a DBA for many routine tasks and enables us to perform maintenance operations in bulk with little or no human intervention…”
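MPS itself isn’t public, but the “sophisticated state machine” pattern it embodies can be sketched as a table of permitted transitions; the state and event names below are hypothetical, chosen only to illustrate how such an automation loop replaces a DBA’s routine decisions:

```python
# Hypothetical server lifecycle states and events -- not MPS's real ones.
TRANSITIONS = {
    ("spare", "allocate"): "production",
    ("production", "detect_failure"): "draining",
    ("production", "schedule_maintenance"): "draining",
    ("draining", "drain_complete"): "reimaging",
    ("reimaging", "reimage_complete"): "spare",
}

def step(state: str, event: str) -> str:
    """Advance one server through the pool lifecycle; reject illegal moves."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"illegal transition: {event!r} from {state!r}")

# A host failing in production is drained, reimaged, and returned to the
# spare pool with no human in the loop:
state = "production"
for event in ("detect_failure", "drain_complete", "reimage_complete"):
    state = step(state, event)
print(state)  # spare
```

Encoding every legal move in one table is what makes bulk maintenance safe: the scanner can sweep thousands of hosts, and any host in an unexpected state simply fails the transition check and gets flagged instead of being acted on.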