Aurora Serverless MySQL Generally Available

You may have heard of Amazon Aurora, a custom-built, MySQL- and PostgreSQL-compatible database born and built in the cloud. You may have also heard of serverless, which lets you build and run applications and services without thinking about instances. These are two pieces of the growing AWS technology story that we’re really excited to be working on. Last year at AWS re:Invent, we announced a preview of a new capability for Aurora called Aurora Serverless. Today, I’m pleased to announce that Aurora Serverless for Aurora MySQL is generally available. Aurora Serverless is on-demand, auto-scaling, serverless Aurora: you don’t have to think about instances or scaling, and you pay only for what you use.

This paradigm is great for applications with unpredictable load or infrequent demand. In production, you save on costs because capacity scales with actual load in extremely granular increments, matching your demand curve almost perfectly. In development, you save on costs because the cluster automatically pauses (scale to zero!) when it’s not in use. I’m excited to show you how this all works, so let’s look at how to launch a Serverless Aurora cluster.
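
As a rough sketch of what that launch can look like from code rather than the console, here’s how you might create a serverless Aurora MySQL cluster with boto3; the identifier, credentials, and capacity settings below are purely illustrative.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Create an Aurora MySQL-compatible cluster in serverless engine mode.
# Identifier, credentials, and capacity values are illustrative.
rds.create_db_cluster(
    DBClusterIdentifier="demo-serverless-cluster",
    Engine="aurora",                  # Aurora MySQL-compatible edition
    EngineMode="serverless",
    MasterUsername="admin",
    MasterUserPassword="choose-a-strong-password",
    ScalingConfiguration={
        "MinCapacity": 2,             # smallest capacity to scale down to
        "MaxCapacity": 16,            # largest capacity to scale up to
        "AutoPause": True,            # pause (scale to zero) when idle...
        "SecondsUntilAutoPause": 300, # ...after five minutes of inactivity
    },
)
```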

https://aws.amazon.com/blogs/aws/aurora-serverless-ga/

Use all the Databases

Ever wanted to use a few different databases to build your app? Different types of databases are meant for different purposes, so it often makes sense to combine them. You might be hesitant due to the complexity of maintenance and coding, but it can be easy if you combine Compose and GraphQL: instead of writing a number of complex REST endpoints, each querying multiple databases, you set up a single GraphQL endpoint that provides whatever data the client wants using your simple data fetching functions.

This tutorial is meant for anyone who provides or fetches data, whether that’s a backend dev writing an API (in any language) or a frontend web or mobile dev fetching data from the server. We’ll learn about the GraphQL specification, set up a GraphQL server, and fetch data from five different data sources. The code is in JavaScript, but you’ll still get a good idea of GraphQL even if you don’t know the language.
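
The tutorial’s code is JavaScript, but the core idea (a single schema whose resolvers call your own data-fetching functions, each of which can talk to a different backing store) carries over to other languages. Here’s a minimal sketch in Python using the Graphene library, with made-up fetch functions standing in for real data sources.

```python
import graphene

# Hypothetical data-fetching functions -- in practice each one could
# query a different database (relational, document, key-value, ...).
def fetch_user(user_id):
    return {"id": user_id, "name": "Ada"}

def fetch_recent_posts(user_id):
    return ["Hello GraphQL", "Use all the databases"]

class User(graphene.ObjectType):
    id = graphene.ID()
    name = graphene.String()
    recent_posts = graphene.List(graphene.String)

    def resolve_recent_posts(parent, info):
        # This field can come from a completely different data source
        # than the one that produced the parent user record.
        return fetch_recent_posts(parent["id"])

class Query(graphene.ObjectType):
    user = graphene.Field(User, id=graphene.ID(required=True))

    def resolve_user(root, info, id):
        return fetch_user(id)

schema = graphene.Schema(query=Query)

# The client asks for exactly the fields it wants, in one request.
result = schema.execute('{ user(id: "1") { name recentPosts } }')
print(result.data)
```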

https://www.compose.com/articles/use-all-the-databases-part-1/

Living Without Atomic Clocks

The design of CockroachDB is based on Google’s Spanner data storage system. One of the most surprising and inspired facets of Spanner is its use of atomic clocks and GPS clocks to give participating nodes highly accurate wall-clock synchronization. The designers of Spanner call this ‘TrueTime’, and it provides a tight bound on clock offset between any two nodes in the system. TrueTime enables high levels of external consistency. For CockroachDB, an open source database based on Spanner, the challenge was to provide similar guarantees of external consistency without atomic clocks.

If someone knows even a little about Spanner, one of the first questions they have is: “You can’t be using atomic clocks if you’re building an open source database; so how the heck does CockroachDB work?”

It’s a very good question.

CockroachDB was designed to work without atomic clocks or GPS clocks. It’s an open source database intended to be run on arbitrary collections of nodes: from physical servers in a corp development cluster to public cloud infrastructure using the flavor-of-the-month virtualization layer. It’d be a showstopper to require an external dependency on specialized hardware for clock synchronization.

So what does CockroachDB do instead? Well, before answering that question, it’ll be helpful to dig a little deeper into why TrueTime was conceived for Spanner.

https://www.cockroachlabs.com/blog/living-without-atomic-clocks/

PostgreSQL Exercises

Welcome to PostgreSQL Exercises! This site was born when I noticed that there’s a load of material out there to help people learn about SQL, but not a great deal to make it easy to learn by doing. PGExercises provides a series of questions and explanations built on a single, simple dataset. It’s designed for use as a partner to a good book or Postgres’ excellent documentation.

The exercises on this site range from simple SELECT and WHERE clauses, through joins and CASE statements, and on to aggregations, window functions, and recursive queries. Most people who aren’t already pros should find something to test themselves with.
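
To give a flavour of what the later exercises build toward, here’s the kind of aggregation-plus-window-function query you might run against the site’s country-club sample dataset, driven from Python with psycopg2. The connection settings are placeholders, and the cd.bookings / cd.facilities names assume that sample schema, so check the Getting Started page for the exact layout.

```python
import psycopg2

# Placeholder connection settings; load the sample dataset locally first.
conn = psycopg2.connect("dbname=exercises user=postgres")

# Rank facilities by how heavily they are booked: an aggregation combined
# with a window function, as covered in the later exercise categories.
# Table and column names assume the site's country-club schema (cd.*).
query = """
    SELECT f.name,
           SUM(b.slots) AS total_slots,
           RANK() OVER (ORDER BY SUM(b.slots) DESC) AS popularity_rank
    FROM cd.bookings b
    JOIN cd.facilities f ON f.facid = b.facid
    GROUP BY f.name
    ORDER BY popularity_rank
"""

with conn, conn.cursor() as cur:
    cur.execute(query)
    for name, total_slots, rank in cur.fetchall():
        print(rank, name, total_slots)
```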

For an introduction to the dataset, go to Getting Started, then select an exercise category from the menu and go!

https://pgexercises.com/

Records: SQL for Humans™

Records is a very simple, but powerful, library for making raw SQL queries to most relational databases.

Just write SQL. No bells, no whistles. This common task can be surprisingly difficult with the standard tools available. This library strives to make this workflow as simple as possible, while providing an elegant interface to work with your query results.

Database support includes Postgres, MySQL, SQLite, Oracle, and MS-SQL (drivers not included).
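
Roughly what that workflow looks like in practice (the connection URL and the queried table and columns below are placeholders):

```python
import records

# Connection URL and the queried table/columns are placeholders.
db = records.Database('postgres://user:pass@localhost/mydb')

rows = db.query('SELECT name, email FROM users')

# Rows behave like lightweight record objects with attribute access...
for row in rows:
    print(row.name, row.email)

# ...and result sets can be exported (via tablib) to JSON, CSV, and more.
print(rows.export('json'))
```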

https://github.com/kennethreitz/records
https://pypi.python.org/pypi/records/