Redis – RCP 11 – The stream data type

During the morning (CEST) of May 20th 2016 I was spending some time in the #redis channel on IRC. At some point Timothy Downs, nickname forkfork wrote the following messages:

<forkfork> the module I'm planning on doing is to add a transaction log style data type - meaning that a very large number of subscribers can do something like pub sub without a lot of redis memory growth
<forkfork> subscribers keeping their position in a message queue rather than having redis maintain where each consumer is up to and duplicating messages per subscriber

Now this is not very far from what Apache Kafka provides, from the point of view of an abstract data structure. At the same time it is somewhat similar to what Redis lists and Pub/Sub already provide, in one way or another. In the past, this overlap in functionality prevented me from considering such a data structure.

However, after Timothy's message, I started to think about it. Such a simple data structure would actually allow users to model problems with Redis that are currently very hard to model with either lists or Pub/Sub:

  • Lists cannot be accessed efficiently in the middle, since the seek complexity is O(N).
  • There is no notion of offset in the list, so if old elements are evicted, there is no longer a way for clients to read only new elements, or to rewind to a past known position.
  • A log is inherently more compact and memory efficient, since we don’t need to account for removal of elements in the middle.
  • Pub/Sub has no efficient way to persist a history of messages. There were ideas to implement such a feature, but it always looked far-fetched, since the whole Pub/Sub mechanism in Redis is designed around fire-and-forget workloads.
  • Pub/Sub has a delivery cost proportional to the number of clients listening on a given channel.
  • There is no consumer group concept in lists or Pub/Sub. Consumer groups are a very interesting abstraction for having groups of clients that receive different messages, while yet another group can receive the same set of messages if needed. If you are not familiar with Kafka: consumer groups are sets of clients sharing the latest consumed offset, so that clients in the same group receive different messages. Yet each client can independently rewind if it wants to consume the same messages again.
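
The consumer group idea above can be sketched in a few lines of Python. This is a toy model, not any proposed Redis API: the `Stream` and `ConsumerGroup` names are mine, and a real implementation would also need blocking reads and acknowledgements. It only shows the core property: clients in the same group share one offset and split the messages, while a separate group re-reads the full stream independently.

```python
class Stream:
    """An append-only log; entry i has the stable offset i."""
    def __init__(self):
        self.entries = []

    def append(self, msg):
        self.entries.append(msg)
        return len(self.entries) - 1   # the entry's offset


class ConsumerGroup:
    """All clients of a group call next() on the same shared offset."""
    def __init__(self, stream):
        self.stream = stream
        self.offset = 0                # shared by every client in the group

    def next(self):
        if self.offset >= len(self.stream.entries):
            return None                # nothing new yet
        msg = self.stream.entries[self.offset]
        self.offset += 1
        return msg


s = Stream()
for m in ("a", "b", "c"):
    s.append(m)

g1 = ConsumerGroup(s)      # two clients sharing g1 split the messages
g2 = ConsumerGroup(s)      # a second group sees all of them again

assert (g1.next(), g1.next()) == ("a", "b")       # different messages
assert [g2.next() for _ in range(3)] == ["a", "b", "c"]
```

Rewinding is just a client (or group) setting its offset back to an earlier value, which is exactly what lists and Pub/Sub cannot express today.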

The above shortcomings make the existing data structures a poor fit when trying to implement streams of data. Streams should have the following characteristics:

  • Clients should have control over what they want to read: it should be possible to rewind back in time, and consumer groups should be implemented.
  • Blocking operations should be provided, so that a client may block waiting for entries having an offset greater than a given one.
  • Each entry in the log should have a unique offset that remains the same even if old entries are evicted.
  • Memory efficiency should be very good, in order to hold a large amount of history in memory without problems. This should be possible, since the data structure does not need the per-node overhead (nodes and pointers) of other data structures.
  • Efficient access to elements in the middle should be possible even with many millions of entries: let's say that even with 100 million entries, seeking to the middle should not be noticeably slow.
  • It should be possible to gradually evict old entries.
  • The log should be efficiently persisted in RDB and AOF files, so that it is not ephemeral like Pub/Sub is.
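
The offset-related requirements above can be captured in a small sketch (again a toy model with invented names, not a proposed API): offsets are absolute, so evicting old entries never changes the offset of the entries that remain, and a client can resume or rewind simply by asking for a smaller offset.

```python
from collections import deque

class Log:
    """Append-only log with stable offsets and head eviction."""
    def __init__(self):
        self.entries = deque()
        self.first_offset = 0          # absolute offset of entries[0]

    def append(self, msg):
        self.entries.append(msg)
        return self.first_offset + len(self.entries) - 1

    def evict_oldest(self, n):
        """Drop up to n entries from the head; later offsets are unchanged."""
        for _ in range(min(n, len(self.entries))):
            self.entries.popleft()
            self.first_offset += 1

    def read_from(self, offset):
        """All entries with offset >= the given one (rewind = smaller offset)."""
        start = max(offset, self.first_offset)
        return list(self.entries)[start - self.first_offset:]


log = Log()
for m in ("a", "b", "c", "d"):
    log.append(m)
log.evict_oldest(2)                     # "a" and "b" are gone...
assert log.read_from(2) == ["c", "d"]   # ...but offsets 2 and 3 still work
```

This is what makes reconnection cheap: a client only has to remember the last offset it saw.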

The above features allow multiple clients to read from the same stream of data efficiently, without requiring any polling, and with the ability to restart from a given point after a disconnection.

However, certain goals stated above are in tension. For example, the ability to evict old data while still having good access times for entries in the middle of a very large log seems to require a linked data structure, which is not memory efficient. We'll explore possible options in the implementation details section.
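
One possible compromise, sketched here purely as an illustration and not as the proposal's chosen design: store entries in fixed-size blocks, each a compact array, and keep the blocks' base offsets in a sorted index. Seeking to the middle then becomes a binary search over blocks rather than an O(N) walk, eviction can drop whole blocks from the head, and the per-entry pointer overhead of a plain linked list is amortized across each block.

```python
import bisect

BLOCK_SIZE = 4   # tiny, for illustration; a real block would hold many entries

class BlockLog:
    """Log stored as blocks of entries, indexed by base offset."""
    def __init__(self):
        self.blocks = []       # list of (base_offset, [entries])
        self.next_offset = 0

    def append(self, msg):
        if not self.blocks or len(self.blocks[-1][1]) == BLOCK_SIZE:
            self.blocks.append((self.next_offset, []))
        self.blocks[-1][1].append(msg)
        self.next_offset += 1

    def get(self, offset):
        """Seek via binary search over block base offsets, not a scan."""
        bases = [b[0] for b in self.blocks]
        i = bisect.bisect_right(bases, offset) - 1
        base, entries = self.blocks[i]
        return entries[offset - base]


log = BlockLog()
for n in range(10):
    log.append("msg-%d" % n)
assert log.get(6) == "msg-6"   # found through the block index
```

With B entries per block, seeking costs O(log(N/B)) block lookups plus an O(1) array access, while memory overhead is one pointer and one base offset per block instead of per entry.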

More advanced features should also be considered, starting from the experience of what is not possible with Apache Kafka but would be useful in Redis streams: since Redis operates in memory, and most usages would not require very strong data retention guarantees, we could probably try to do more to maximize what is possible with such a data structure in a system like Redis.