On 07/26/2016 01:53 PM, Josh Berkus wrote:
> The write amplification issue, and its corollary in VACUUM, certainly
> continues to plague some users, and doesn’t have any easy solutions.
To explain this in concrete terms, which the blog post does not:
1. Create a small table, but one with enough rows that indexes make
sense (say 50,000 rows).
2. Make this table used in JOINs all over your database.
3. To support these JOINs, index most of the columns in the small table.
4. Now, update that small table 500 times per second.
That’s a recipe for runaway table bloat: VACUUM can’t do much because
there’s always some minutes-old transaction hanging around (and SNAPSHOT
TOO OLD doesn’t really help; we’re talking about minutes here), and
because most of the columns are indexed, HOT updates don’t apply.
Removing the indexes is equally painful because it means less efficient JOINs.
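A back-of-envelope sketch of the amplification in that scenario (the index count and the one-new-entry-per-index-per-update model are illustrative assumptions, not measurements):

```python
# Rough model of non-HOT update cost in Postgres.
# Assumption: each non-HOT UPDATE writes one new heap tuple version
# plus one new entry in EVERY index on the table (indexes point at the
# physical tuple location, so all of them must be updated).

def writes_per_update(num_indexes: int) -> int:
    heap_write = 1              # new row version in the heap
    index_writes = num_indexes  # one new entry per index
    return heap_write + index_writes

INDEXES = 8            # "index most of the columns" on the small table
UPDATES_PER_SEC = 500  # the update rate from the scenario above

per_second = UPDATES_PER_SEC * writes_per_update(INDEXES)
dead_tuples_per_minute = UPDATES_PER_SEC * 60  # each update leaves a dead tuple

print(per_second)              # physical writes per second
print(dead_tuples_per_minute)  # garbage VACUUM must eventually reclaim
```

Even with these made-up numbers, a few minutes of an old snapshot holding back VACUUM means tens of thousands of dead tuples that cannot be reclaimed.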
The Uber guy is right that InnoDB handles this better as long as you
don’t touch the primary key (primary key updates in InnoDB are really bad).
This is a common problem case we don’t have an answer for yet.
Red Hat OSAS
(any opinions are my own)
Why Uber Engineering Switched from Postgres to MySQL
The early architecture of Uber consisted of a monolithic backend application written in Python that used Postgres for data persistence. Since that time, the architecture of Uber has changed significantly, to a model of microservices and new data platforms. Specifically, in many of the cases where we previously used Postgres, we now use Schemaless, a novel database sharding layer built on top of MySQL. In this article, we’ll explore some of the drawbacks we found with Postgres and explain the decision to build Schemaless and other backend services on top of MySQL.
The Architecture of Postgres
We encountered many Postgres limitations:
- Inefficient architecture for writes
- Inefficient data replication
- Issues with table corruption
- Poor replica MVCC support
- Difficulty upgrading to newer releases
We’ll look at all of these limitations through an analysis of Postgres’s representation of table and index data on disk, especially when compared to the way MySQL represents the same data with its InnoDB storage engine. Note that the analysis that we present here is primarily based on our experience with the somewhat old Postgres 9.2 release series. To our knowledge, the internal architecture that we discuss in this article has not changed significantly in newer Postgres releases, and the basic design of the on-disk representation in 9.2 hasn’t changed significantly since at least the Postgres 8.3 release (now nearly 10 years old).
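The core of that on-disk difference can be captured in a toy model (an illustration only; real storage engines are far more involved). Postgres secondary indexes point at a tuple's physical location, so a non-HOT update gives the row a new location and touches every index; InnoDB secondary indexes store the primary key, so updating a non-indexed column leaves them untouched:

```python
# Toy model: secondary-index writes caused by one row update.
# Assumption: Postgres indexes store the physical tuple id (ctid), so a
# non-HOT update (new ctid) must update every index, regardless of which
# columns changed. InnoDB secondary indexes store the primary key, so
# they change only when a column they cover (or the PK) changes.

def postgres_index_writes(num_secondary_indexes: int, hot_update: bool) -> int:
    # HOT updates leave indexes alone; anything else rewrites them all.
    return 0 if hot_update else num_secondary_indexes

def innodb_index_writes(changed_columns: set, indexed_columns: dict) -> int:
    # indexed_columns maps index name -> set of columns it covers.
    return sum(1 for cols in indexed_columns.values() if cols & changed_columns)

indexes = {"by_email": {"email"}, "by_city": {"city"}, "by_name": {"name"}}

# Update only an unindexed "last_login" column (and assume HOT doesn't apply):
print(postgres_index_writes(len(indexes), hot_update=False))  # 3 in Postgres
print(innodb_index_writes({"last_login"}, indexes))           # 0 in InnoDB
```

The flip side, as noted in the mailing-list thread above, is that updating the primary key itself is expensive in InnoDB, since every secondary index entry embeds the key.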
Every once in a while, someone suggests that beets should use a “real database.” I think this means storing music metadata in PostgreSQL or MySQL as an alternative to our current SQLite database. The idea is that a more complicated DBMS should be faster, especially for huge music libraries.
The pseudo-official position of the beets project is that supporting a new DBMS is probably not worth your time. If you’re interested in performance, please consider helping to optimize our database queries instead.
There are three reasons I’m unenthusiastic about alternative DBMSes: I’m skeptical that they will actually help performance; it’s a clear case of premature optimization; and SQLite is unbeatably convenient.
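As a concrete illustration of that convenience, here is a sketch using Python's standard library, with a hypothetical items table standing in for beets' actual schema:

```python
import sqlite3

# SQLite needs no server: a database is just a file (or, here, in memory).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (title TEXT, artist TEXT, length REAL)")
conn.executemany(
    "INSERT INTO items VALUES (?, ?, ?)",
    [("Paranoid Android", "Radiohead", 386.0),
     ("Karma Police", "Radiohead", 261.0)],
)

# Plain SQL queries, with no DBMS installation or connection management.
rows = conn.execute(
    "SELECT title FROM items WHERE artist = ? ORDER BY length", ("Radiohead",)
).fetchall()
print([r[0] for r in rows])  # → ['Karma Police', 'Paranoid Android']
```

A PostgreSQL or MySQL backend would require a running server, credentials, and per-platform setup before a user could even import a library, which is the convenience argument in a nutshell.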
If you are looking to get your hands dirty and learn all about Docker, then look no further!
In this article I’m going to show you how Docker works, what all the fuss is about, and how Docker can help with a basic development task – building a microservice.
We’ll use a simple Node.js service with a MySQL backend as an example, going from code running locally to containers running a microservice and database.
When testing an application on MySQL 5.6, I came across a few interesting issues. These weren’t necessarily changes in MySQL between versions 5.5 and 5.6, but rather quirks of the packages I used to install MySQL 5.6.
“Records is a very simple, but powerful, library for making raw SQL queries to most relational databases.
Just write SQL. No bells, no whistles. This common task can be surprisingly difficult with the standard tools available. This library strives to make this workflow as simple as possible, while providing an elegant interface to work with your query results.
Database support includes Postgres, MySQL, SQLite, Oracle, and MS-SQL (drivers not included)…”
The benchmark is sysbench-mariadb (sysbench trunk with a fix for a more scalable random number generator) OLTP simplified to do 1000 point selects per transaction. The data set is 1 million rows in 20 tables. Fewer tables can be used, but below 4 tables the performance drops somewhat due to a hot spot in the table definition cache.
This is the my.cnf used for this test:
max_connections = 400
table_open_cache = 800
query_cache_type = 0
innodb_buffer_pool_size = 512M
innodb_buffer_pool_instances = 10
innodb_adaptive_hash_index_partitions = 20
And this is the sysbench command line:
sysbench-mariadb --test=lua/oltp.lua --oltp-tables-count=20 \
--oltp-table-size=50000 --num-threads=... --oltp-read-only=on \
--oltp-point-selects=1000 --oltp-distinct-ranges=0 \
--oltp-simple-ranges=0 --oltp-sum-ranges=0 --oltp-order-ranges=0 \
--max-time=100 --max-requests=0 run
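As a sanity check on the parameters above (simple arithmetic, nothing engine-specific), the command-line flags match the prose description of the data set:

```python
# Cross-check the sysbench command line against the benchmark description.
tables = 20               # --oltp-tables-count=20
rows_per_table = 50_000   # --oltp-table-size=50000
point_selects = 1_000     # --oltp-point-selects=1000 (per transaction)

total_rows = tables * rows_per_table
print(total_rows)       # 1,000,000 rows, the "1 million rows in 20 tables"
print(point_selects)    # queries issued per transaction
```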