Redis 4.0 Compatibility in Amazon ElastiCache

Amazon ElastiCache makes it easy for you to set up a fully managed in-memory data store and cache with Redis or Memcached. Today we’re pleased to launch compatibility with Redis 4.0 in ElastiCache. You can now launch Redis 4.0-compatible ElastiCache nodes or clusters in all commercial AWS Regions. ElastiCache Redis clusters can scale to terabytes of memory and millions of reads/writes per second to serve the most demanding needs of games, IoT devices, financial applications, and web applications.

https://aws.amazon.com/blogs/aws/new-redis-4-0-compatibility-in-amazon-elasticache

A Quick Guide to Redis 3.2’s Geo Support

The Geo API has been around for a while, appearing in the Redis unstable branch about ten months ago, and was, in turn, based on work from 2014. There’s a bit of history in that development process, which, being practical folk, we’ll skip past and go straight to the stuff that makes your development day better.

At its simplest, the Geo API for Redis reduces a longitude/latitude pair to a geohash. Geohash is a technique, developed in 2008, for representing locations as short string codes. The geohash of a particular location, say Big Ben in London, comes out as “gcpuvpmm3f0”, which is easier to pass around than “latitude 51.500, longitude -0.12455”. The longer the string, the more precise the geohash.
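To make the encoding concrete, here is a minimal, self-contained Go sketch of the standard geohash algorithm: alternately bisect the longitude and latitude ranges, record one bit per step, and map each group of five bits onto the geohash base32 alphabet. It reproduces the leading characters of the Big Ben hash quoted above.

    package main

    import "fmt"

    // The geohash base32 alphabet (no a, i, l, o).
    const alphabet = "0123456789bcdefghjkmnpqrstuvwxyz"

    // encode bisects the longitude and latitude ranges alternately,
    // recording one bit per step, and maps each 5-bit group onto the
    // base32 alphabet.
    func encode(lat, lon float64, chars int) string {
        latRange := [2]float64{-90, 90}
        lonRange := [2]float64{-180, 180}
        hash := make([]byte, 0, chars)
        even := true // the first bit comes from longitude
        bit, ch := 0, 0
        for len(hash) < chars {
            r, v := &latRange, lat
            if even {
                r, v = &lonRange, lon
            }
            mid := (r[0] + r[1]) / 2
            if v >= mid {
                ch |= 1 << uint(4-bit)
                r[0] = mid
            } else {
                r[1] = mid
            }
            even = !even
            if bit < 4 {
                bit++
            } else {
                hash = append(hash, alphabet[ch])
                bit, ch = 0, 0
            }
        }
        return string(hash)
    }

    func main() {
        // The article's Big Ben coordinates; prints "gcpuvpm", the
        // prefix of the "gcpuvpmm3f0" hash quoted above (the trailing
        // characters depend on the exact coordinates used).
        fmt.Println(encode(51.500, -0.12455, 7))
    }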

That encoding into a string is good for humans and URLs, but it isn’t particularly space efficient. The good news is that geohashes can be encoded in binary: at 52 bits, a geohash gets down to 0.6-meter accuracy, which is good enough for most uses. A 52-bit value also happens to be a small enough integer to live safely in a Redis floating-point double, and that’s what the Geo API works with behind the scenes.
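Here is a rough Go illustration of that interleaved 52-bit form; it is a sketch of the idea, not Redis’s actual source code.

    package main

    import "fmt"

    // interleave52 sketches the binary geohash Redis keeps behind the
    // scenes: 26 bits of longitude interleaved with 26 bits of
    // latitude, 52 bits in total.
    func interleave52(lat, lon float64) uint64 {
        // Quantize each coordinate to a 26-bit integer across its
        // range (clamping the top edge, where lat=90 would overflow).
        latBits := uint64((lat + 90) / 180 * (1 << 26))
        lonBits := uint64((lon + 180) / 360 * (1 << 26))
        if latBits >= 1<<26 {
            latBits = 1<<26 - 1
        }
        if lonBits >= 1<<26 {
            lonBits = 1<<26 - 1
        }
        var out uint64
        for i := 0; i < 26; i++ {
            out |= (lonBits >> uint(25-i) & 1) << uint(51-2*i) // even bit: longitude
            out |= (latBits >> uint(25-i) & 1) << uint(50-2*i) // odd bit: latitude
        }
        return out
    }

    func main() {
        score := interleave52(51.500, -0.12455)
        // A float64 mantissa holds 52 bits, so the value survives a
        // round trip through a Redis sorted-set score unchanged.
        fmt.Println(score, uint64(float64(score)) == score)
    }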

https://www.compose.com/articles/a-quick-guide-to-redis-3-2s-geo-support/

A blazing fast geo database with LevelDB, Go and Geohashes

“You probably have heard of LevelDB: it’s a blazing fast key-value store (as a library, not a daemon) that uses Snappy compression.
There are plenty of uses for it, and the API is very simple, at least in Go (I will be using goleveldb).

The key is a []byte and the value is a []byte, so you can “get”, “put” & “delete”; that’s it.
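For reference, here is a minimal goleveldb session exercising exactly that API surface (the “geo.db” path is just an example):

    package main

    import (
        "fmt"

        "github.com/syndtr/goleveldb/leveldb"
    )

    func main() {
        // Open (or create) the database directory.
        db, err := leveldb.OpenFile("geo.db", nil)
        if err != nil {
            panic(err)
        }
        defer db.Close()

        // The whole API surface the article relies on:
        // Put, Get and Delete, all on raw []byte keys and values.
        if err := db.Put([]byte("hello"), []byte("world"), nil); err != nil {
            panic(err)
        }
        value, err := db.Get([]byte("hello"), nil)
        if err != nil {
            panic(err)
        }
        fmt.Println(string(value)) // "world"

        if err := db.Delete([]byte("hello"), nil); err != nil {
            panic(err)
        }
    }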

I needed a low-memory, low-CPU system that could collect millions of geo data points and query over them. Geohash has an interesting property: you can encode longitude and latitude into a string, such as f2m616nn. This hash represents the lat & long 46.770, -71.304; if you shorten the string to f2m61, it still refers to the same lat & long, but with less precision.
A 4-digit hash gives 19,545 meters of precision, and to perform a lookup around a position you simply query the 8 adjacent blocks. (There is also a Geohash library for Go.)

geohash length    precision (km)
1                 ±2500
2                 ±630
3                 ±78
4                 ±20
5                 ±2.4
6                 ±0.61
7                 ±0.076
8                 ±0.019
9                 ±0.0024
10                ±0.00060
11                ±0.000074

Here you would store all of the data points matching a geohash in the same set.
Problem: there is no such thing as a set in LevelDB.

But there is a cursor, so you can seek to a position and then iterate over the next or previous entry (keys are byte-ordered).
So your data could be stored this way: 4-digit geohash + a unique id.
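A sketch of that layout with goleveldb, using a hypothetical “-” separator between the 4-digit geohash and the id (the sample points are made up):

    package main

    import (
        "fmt"

        "github.com/syndtr/goleveldb/leveldb"
        "github.com/syndtr/goleveldb/leveldb/util"
    )

    func main() {
        db, err := leveldb.OpenFile("geo.db", nil)
        if err != nil {
            panic(err)
        }
        defer db.Close()

        // Keys are "geohash prefix + unique id".
        db.Put([]byte("f2m6-0001"), []byte("point A"), nil)
        db.Put([]byte("f2m6-0002"), []byte("point B"), nil)
        db.Put([]byte("u09t-0003"), []byte("point C"), nil) // different cell

        // LevelDB keys are byte-ordered, so every point in the f2m6
        // cell sits in one contiguous key range; scan it with a
        // prefix iterator.
        iter := db.NewIterator(util.BytesPrefix([]byte("f2m6")), nil)
        for iter.Next() {
            fmt.Printf("%s => %s\n", iter.Key(), iter.Value())
        }
        iter.Release()
        if err := iter.Error(); err != nil {
            panic(err)
        }
    }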

Then you can perform a proximity lookup by searching the 8 adjacent hashes around the position you are looking at, with a precision of 20 km; good, but not very flexible.
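Finding those adjacent hashes needs a geohash library; the article links its own, but assuming the third-party Go package github.com/mmcloughlin/geohash (whose Neighbors helper returns the 8 surrounding cells), the fan-out looks like this:

    package main

    import (
        "fmt"

        "github.com/mmcloughlin/geohash"
    )

    func main() {
        // The 4-character cell around the article's example point.
        center := geohash.EncodeWithPrecision(46.770, -71.304, 4)

        // Scan the center cell plus its 8 adjacent cells; together
        // they cover the ~20 km band around the position.
        cells := append(geohash.Neighbors(center), center)
        for _, cell := range cells {
            // Each prefix would feed the LevelDB iterator from the
            // previous sketch: util.BytesPrefix([]byte(cell)).
            fmt.Println("prefix scan:", cell)
        }
    }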

We can have a more generic solution: first we need a key, a simple int64 unique id…”

http://blog.nobugware.com/post/2015/leveldb_geohash_golang/

Visualizing Geohash

“I recently had to process data about places, or points of interest, around the globe. It was intuitive to me to try to organize these records by their location. The standard way to group Hadoop records is to make the records in the same group share a key prefix. I needed to somehow convert a latitude, longitude pair into a string of characters, and that is when I found Geohash. It is a well-known dimensionality-reduction technique that transforms a two-dimensional spatial point (latitude, longitude) into an alphanumeric string, or hash.
I’ll describe the details of the points-of-interest processing in a future post. In this post, I will describe Geohash visually, because I believe it is easier for some people (like myself) to understand, and it would have saved me some time had anyone else done it…”
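As a taste of why the key-prefix trick works for grouping: using a Go geohash package (github.com/mmcloughlin/geohash here, as an assumption; any implementation behaves the same), nearby points share leading characters while distant ones do not, so sorting records by geohash key clusters them geographically.

    package main

    import (
        "fmt"

        "github.com/mmcloughlin/geohash"
    )

    func main() {
        // Two nearby points of interest and one far away; the first
        // two hashes share their leading characters, the third does
        // not, so a sort by key groups the Paris records together.
        fmt.Println(geohash.Encode(48.8584, 2.2945))   // Eiffel Tower
        fmt.Println(geohash.Encode(48.8606, 2.3376))   // Louvre, a few km away
        fmt.Println(geohash.Encode(40.6892, -74.0445)) // Statue of Liberty
    }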

http://www.bigdatamodeling.org/2013/01/intuitive-geohash.html