RethinkDB is an open-source JSON database management system written in C++. It is intended for real-time web applications that require continuously updated query results.
RethinkDB was founded in 2009. Its very first product was an SSD-optimized storage engine for MySQL. The company then pivoted to building a document DBMS similar to MongoDB.
The first release of the current RethinkDB DBMS architecture was in November 2012. This first version supported the JSON data model, immediate consistency, Hadoop-style map/reduce, sharding, multi-datacenter replication, and failover. In June 2013, RethinkDB introduced new features for ReQL, such as basic access control, regular expression matching, array operations, and random sampling.
Version 2.0 of RethinkDB, released in 2015, was the first "production-ready" release. In August 2015, RethinkDB added automatic failover using a Raft-based protocol. In November 2015, RethinkDB introduced atomic changefeeds, which include existing values from the database in the changefeed result and then atomically transition to streaming updates.
In October 2016, the company behind RethinkDB shut down because it could not build a sustainable business. The source code was later purchased by the Cloud Native Computing Foundation and released back to the open-source community in July 2017.
The RethinkDB storage engine is log-structured. For efficiency, the implementation uses small mini-logs and flushes rather than a single large log, operating on a smaller scale than a traditional system. Because of this design, the traditional notion of checkpoints does not quite apply: there is no separate log and set of pages that are periodically flushed.
Multi-version Concurrency Control (MVCC)
RethinkDB implements block-level multiversion concurrency control. When a write operation arrives while a read operation is in progress, RethinkDB takes a snapshot of the B-Tree for each relevant shard and maintains different versions of the affected blocks, so that read and write operations can execute concurrently.
RethinkDB takes exclusive block-level locks when multiple writes target documents that are close to each other in the B-Tree. In most cases this does not cause performance problems, because the top levels of the B-Tree are cached along with the most frequently used blocks.
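As a rough illustration of block-level copy-on-write versioning (a toy sketch, not RethinkDB's actual implementation), a writer can replace a block with a new version while a reader keeps the version it pinned when its snapshot was taken:

```python
import copy

class VersionedBlock:
    """A toy stand-in for a B-Tree block: writers copy, readers keep their version."""
    def __init__(self, data):
        self.data = data

class ToyStore:
    def __init__(self):
        self.blocks = {0: VersionedBlock({"a": 1})}

    def snapshot(self):
        # Readers pin the current block versions; no data is copied yet.
        return dict(self.blocks)

    def write(self, block_id, key, value):
        # Copy-on-write: install a new block version so readers holding
        # an older snapshot are unaffected.
        old = self.blocks[block_id]
        new = VersionedBlock(copy.deepcopy(old.data))
        new.data[key] = value
        self.blocks[block_id] = new

store = ToyStore()
snap = store.snapshot()          # a long-running read starts here
store.write(0, "a", 99)          # a concurrent write creates a new block version
print(snap[0].data["a"])         # -> 1  (the reader still sees the old version)
print(store.blocks[0].data["a"]) # -> 99 (new reads see the write)
```

The key point is that the snapshot shares block references rather than copying data; only blocks that are actually written get duplicated.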
RethinkDB stores JSON documents using a binary on-disk serialization. The data types supported by RethinkDB are: number (double-precision floating point), string, boolean, array, object, and null.
RethinkDB indexes data by primary key. If the user does not specify a primary key, a unique random ID is generated for the index automatically. Based on the primary key, RethinkDB places each document into the appropriate shard and indexes it within that shard using a B-Tree data structure.
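A minimal sketch of primary-key-based placement, assuming hypothetical split points that divide the key space into three shards (RethinkDB shards tables by key ranges; the split points and the `id` field name here are illustrative):

```python
import bisect
import uuid

# Hypothetical split points dividing the primary-key space into three shards.
SPLIT_POINTS = ["g", "p"]

def shard_for(primary_key: str) -> int:
    # Range sharding: each shard owns a contiguous slice of the key space.
    return bisect.bisect_right(SPLIT_POINTS, primary_key)

def insert(doc: dict):
    # If the caller did not supply a primary key, generate a unique random
    # one, mirroring RethinkDB's auto-generated "id" field.
    if "id" not in doc:
        doc["id"] = str(uuid.uuid4())
    return shard_for(doc["id"]), doc

print(shard_for("apple"))  # -> 0
print(shard_for("mango"))  # -> 1
print(shard_for("zebra"))  # -> 2
```

Within each shard, the documents for that key range would then be indexed in a B-Tree keyed on the primary key.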
RethinkDB supports both secondary and compound indexes.
On a single node, the isolation level is closest to repeatable read. Because writes are copy-on-write, the user can execute long range reads concurrently with writes, with each read query running on its own snapshot constructed just in time.
On a cluster, this holds only on a per-primary-node basis. That is, if the table is sharded across two nodes and a long query executes concurrently with writes, there is no synchronization of snapshot creation across the primary nodes, so a range query may execute on two snapshots taken at different points in time.
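The per-shard snapshot behavior can be shown with a toy two-shard example (illustrative only): a write that lands between the two snapshot acquisitions makes the range read see a mixed view.

```python
# Two shards, each snapshotted independently by a range read.
shard_a = {"k1": "old"}
shard_b = {"k2": "old"}

# The range read snapshots shard A first...
snap_a = dict(shard_a)

# ...a concurrent write then updates both shards...
shard_a["k1"] = "new"
shard_b["k2"] = "new"

# ...and only afterwards is shard B snapshotted.
snap_b = dict(shard_b)

result = {**snap_a, **snap_b}
print(result)  # -> {'k1': 'old', 'k2': 'new'}: snapshots from different times
```

Each shard's snapshot is internally consistent, but the combined result spans two points in time, which is exactly the cross-shard caveat described above.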
RethinkDB supports immediate consistency.
In RethinkDB, a single authoritative primary replica is in charge of each shard of data. Reads and writes for a given shard are directed to its primary, so data remains immediately consistent and conflict-free: a read following an acknowledged write is always guaranteed to see that write.
RethinkDB supports both up-to-date and out-of-date reads. By default, the client always sees the latest, consistent, artifact-free view of the data. The developer can also issue a read query for potentially out-of-date data; in this mode, the query may be routed to the closest replica. Out-of-date queries may have lower latency and stronger availability guarantees.
In RethinkDB, joins are automatically distributed: the join commands are sent to the appropriate nodes, and the combined data is presented to the user.
It supports joining data using both primary keys and secondary indexes.
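An index join of the kind ReQL's `eq_join` performs can be sketched in a few lines (an in-memory toy with made-up tables, not the server implementation): each left row is matched against the right table through its index.

```python
# Toy tables: employees reference companies by company_id (the join key).
employees = [
    {"id": 1, "name": "ann", "company_id": "c1"},
    {"id": 2, "name": "bob", "company_id": "c2"},
]
companies = {  # indexed by primary key, as a B-Tree would be
    "c1": {"id": "c1", "name": "Initech"},
    "c2": {"id": "c2", "name": "Globex"},
}

def eq_join(rows, key, index):
    # For each left row, look up the matching right row via the index,
    # mimicking the shape of ReQL's eq_join output: {"left": ..., "right": ...}.
    for row in rows:
        match = index.get(row[key])
        if match is not None:
            yield {"left": row, "right": match}

pairs = list(eq_join(employees, "company_id", companies))
for pair in pairs:
    print(pair["left"]["name"], "->", pair["right"]["name"])
# -> ann -> Initech
# -> bob -> Globex
```

In the distributed case, the lookups would be routed to whichever shards own the referenced keys.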
The data is stored in a log-structured storage engine built specifically for RethinkDB and inspired by the architecture of BTRFS. The log is implicitly integrated into the storage engine.
Replicating data across replicas does not require log shipping; instead, RethinkDB replication is based on B-Tree diff algorithms.
All queries are automatically parallelized on the RethinkDB server. Complicated queries can be broken into stages, with each stage executed in parallel; the partial results are then combined into a complete result.
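The stage-then-combine pattern can be illustrated with a toy aggregation over hypothetical shards (a sketch of the general technique, not RethinkDB's scheduler): each shard computes a partial result in parallel, and a coordinator merges them.

```python
from concurrent.futures import ThreadPoolExecutor

# Documents spread across three hypothetical shards.
shards = [
    [{"price": 10}, {"price": 20}],
    [{"price": 5}],
    [{"price": 7}, {"price": 8}],
]

def map_stage(shard):
    # Each shard computes its partial sum independently...
    return sum(doc["price"] for doc in shard)

with ThreadPoolExecutor() as pool:
    partials = list(pool.map(map_stage, shards))

# ...and the coordinator combines the partial results.
total = sum(partials)
print(total)  # -> 50
```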
RethinkDB provides a unified chainable query language (ReQL). A query starts with a table and incrementally chains transformation operations onto the end. ReQL supports CRUD operations, aggregations (including map-reduce and grouped map-reduce), joins, full sub-queries, and changefeeds.
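The chaining style can be imitated with a tiny query-builder class (a toy in the spirit of ReQL's fluent API; the class, method names, and data are invented for illustration):

```python
class Query:
    """A toy chainable query builder in the spirit of ReQL."""
    def __init__(self, rows):
        self._rows = list(rows)

    def filter(self, pred):
        # Each transformer returns a new Query, so calls chain naturally.
        return Query(r for r in self._rows if pred(r))

    def map(self, fn):
        return Query(fn(r) for r in self._rows)

    def limit(self, n):
        return Query(self._rows[:n])

    def run(self):
        # Return the accumulated rows at the end of the chain.
        return self._rows

table = Query([{"name": "ann", "age": 34}, {"name": "bob", "age": 19}])
result = (table
          .filter(lambda doc: doc["age"] > 21)
          .map(lambda doc: doc["name"])
          .run())
print(result)  # -> ['ann']
```

In real ReQL the chain builds a query tree that is serialized to the server and evaluated there; this sketch only captures the fluent composition style.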
Changefeeds allow clients to receive changes on a table, or on the results of a specific query, as they happen. Nearly any ReQL query can become a changefeed. When the client requests the initial values as a starting point, the changefeed stream begins with the current contents of the monitored table before switching to streaming updates.
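The emit-initial-rows-then-stream behavior can be sketched with a small generator (a toy, not the driver API; the class and `push` helper are invented, and a `None` sentinel ends the demo):

```python
import queue

class ToyChangefeed:
    """A toy changefeed: optionally emits current rows first, then streamed updates."""
    def __init__(self, table):
        self.table = table
        self.updates = queue.Queue()

    def push(self, old_val, new_val):
        # A write elsewhere produces a change notification with old and new values.
        self.updates.put({"old_val": old_val, "new_val": new_val})

    def changes(self, include_initial=False):
        if include_initial:
            # Start with the table's current contents...
            for row in self.table:
                yield {"new_val": row}
        # ...then switch to streaming updates.
        while True:
            change = self.updates.get()
            if change is None:      # sentinel used only to end this demo
                return
            yield change

feed = ToyChangefeed([{"id": 1, "v": "a"}])
feed.push(None, {"id": 2, "v": "b"})   # an insert that arrives later
feed.updates.put(None)                  # sentinel so the demo terminates
events = list(feed.changes(include_initial=True))
print(events)
# -> [{'new_val': {'id': 1, 'v': 'a'}},
#     {'old_val': None, 'new_val': {'id': 2, 'v': 'b'}}]
```

RethinkDB's atomic changefeeds additionally guarantee that the transition from initial values to live updates loses no intervening change; this sketch shows only the output shape.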
As noted above, the storage engine is inspired by BTRFS, a file system based on the copy-on-write (COW) principle.
The storage engine is also used in conjunction with a custom B-Tree-aware caching engine, which allows file sizes much greater than the amount of available memory.
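A generic block cache of this kind can be sketched with a simple LRU policy (a stand-in for illustration; RethinkDB's cache is B-Tree-aware rather than plain LRU, and the disk table here is fabricated):

```python
from collections import OrderedDict

class BlockCache:
    """A minimal LRU cache of disk blocks, letting the file exceed memory."""
    def __init__(self, capacity, read_block):
        self.capacity = capacity
        self.read_block = read_block   # fetches a block from disk on a miss
        self.blocks = OrderedDict()

    def get(self, block_id):
        if block_id in self.blocks:
            self.blocks.move_to_end(block_id)   # mark as recently used
            return self.blocks[block_id]
        block = self.read_block(block_id)       # cache miss: hit the disk
        self.blocks[block_id] = block
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)     # evict the least recently used
        return block

disk = {i: f"block-{i}" for i in range(10)}     # pretend on-disk blocks
cache = BlockCache(capacity=2, read_block=disk.__getitem__)
cache.get(0); cache.get(1); cache.get(2)        # block 0 is evicted
print(list(cache.blocks))  # -> [1, 2]
```

A B-Tree-aware cache would go further, e.g. preferring to keep the upper levels of the tree resident, which matches the earlier note that the top B-Tree levels stay cached.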
N-ary Storage Model (Row/Record)
RethinkDB organizes data based on rows like a traditional database does. It does not have a column-oriented storage engine.
Each shard has a single authoritative primary replica; apart from this role, every replica is exactly the same. Reads and writes for a given shard are directed to its primary.