SingleStore

MemSQL is a distributed, in-memory relational database that delivers high performance on both transactional and analytical workloads, and it integrates well with Spark and Kafka for real-time analytics.

History

MemSQL was founded in 2011 and is a Y Combinator graduate.

Query Interface

SQL

MemSQL supports a subset of MySQL syntax, plus extensions for distributed SQL, geospatial data, and JSON queries. It also speaks the MySQL wire protocol, so existing MySQL client libraries can connect to it.
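
For example, a table with geospatial and JSON columns can be queried through these extensions. This is an illustrative sketch: the table and column names are hypothetical, and the JSON extraction operators and geospatial functions are shown in MemSQL style, which may vary by version.

    CREATE TABLE events (
        id BIGINT PRIMARY KEY,
        location GEOGRAPHYPOINT,
        properties JSON
    );

    -- JSON field extraction (::$ returns the field as text) combined with a geospatial predicate.
    SELECT id, properties::$user_agent
    FROM events
    WHERE GEOGRAPHY_WITHIN_DISTANCE(location, 'POINT(-122.41 37.77)', 1000);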

Views

Virtual Views

Views in MemSQL are not materialized and cannot be written to.
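
A minimal sketch of how such a virtual view behaves, using hypothetical table and view names:

    CREATE VIEW recent_orders AS
        SELECT order_id, customer_id, total
        FROM orders
        WHERE order_date >= '2017-01-01';

    -- The view can be queried like a table...
    SELECT customer_id, SUM(total) FROM recent_orders GROUP BY customer_id;

    -- ...but it is not materialized, and write statements against it
    -- (for example, INSERT INTO recent_orders) are rejected.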

Concurrency Control

Multi-version Concurrency Control (MVCC)

MemSQL uses multi-version concurrency control: reads are never blocked, while writes acquire row-level locks.
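
A sketch of that behavior with two concurrent sessions (the table and values are hypothetical):

    -- Session 1:
    BEGIN;
    UPDATE accounts SET balance = balance - 100 WHERE id = 1;  -- acquires a row lock on id = 1

    -- Session 2, running concurrently:
    SELECT balance FROM accounts WHERE id = 1;  -- not blocked; reads the last committed version
    UPDATE accounts SET balance = balance + 50 WHERE id = 1;   -- waits on the row lock held by session 1

    -- Session 1:
    COMMIT;  -- session 2's update can now proceed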

Logging

Physical Logging

MemSQL implements write-ahead logging that records only committed transactions. A transaction buffer acts as a back-pressure mechanism so that a worker thread cannot generate an unbounded amount of log data. Replication is built on top of log recovery.

Joins

Nested Loop Join, Hash Join, Sort-Merge Join, Broadcast Join

MemSQL supports nested-loop join, index nested-loop join, merge join, and hash join. Joins between two columnstore tables are often executed as sort-merge joins. For distributed join queries, if the two tables are joined on identical shard keys, the join is performed locally on each partition; otherwise data is broadcast to other nodes over the network.
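
For example, two hypothetical tables sharded on the same key can be joined locally on each partition; joining on a column that is not the shard key would instead require moving data across the network.

    CREATE TABLE customers (
        customer_id BIGINT,
        name VARCHAR(100),
        SHARD KEY (customer_id)
    );

    CREATE TABLE orders (
        order_id BIGINT,
        customer_id BIGINT,
        total DECIMAL(10,2),
        SHARD KEY (customer_id)
    );

    -- Both tables are sharded on customer_id, so this join can run
    -- locally on each leaf partition without broadcasting data.
    SELECT c.name, SUM(o.total)
    FROM orders o
    JOIN customers c ON o.customer_id = c.customer_id
    GROUP BY c.name;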

Storage Model

Decomposition Storage Model (Columnar), N-ary Storage Model (Row/Record)

In MemSQL, rowstore data is kept in memory using the n-ary (row-oriented) storage model, while columnstore data is stored as column segments using the decomposition (columnar) storage model. A columnstore table is organized by a clustered columnstore index, and its column segments are compressed.
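
A sketch of the two layouts in DDL (table names are hypothetical; the clustered columnstore key is MemSQL's way of requesting columnar storage):

    -- Rowstore table: rows kept in memory in an n-ary (record-oriented) layout.
    CREATE TABLE page_views_row (
        view_id BIGINT,
        url VARCHAR(2048),
        ts DATETIME,
        PRIMARY KEY (view_id)
    );

    -- Columnstore table: compressed column segments on disk,
    -- organized by a clustered columnstore index on ts.
    CREATE TABLE page_views_col (
        view_id BIGINT,
        url VARCHAR(2048),
        ts DATETIME,
        KEY (ts) USING CLUSTERED COLUMNSTORE
    );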

Query Execution

Tuple-at-a-Time Model, Vectorized Model

MemSQL uses the tuple-at-a-time model for rowstore query execution and the vectorized model for columnstore query execution.

Stored Procedures

Not Supported

Stored procedures are not supported.

Indexes

Skip List, Hash Table

The skip list is the default index type in MemSQL. It is lock-free, which gives it extremely fast insert performance, and it provides O(log n) expected lookup/insert/delete cost. Unlike a B+ tree, MemSQL's skip list is singly linked, so a reverse scan is roughly twice as costly as a forward scan. A skip list also involves more pointer chasing than a B+ tree, which can lead to more cache misses; MemSQL applies heuristics to place nearby nodes on the same physical page to mitigate this penalty. A lock-free hash table is also supported for fast exact-match queries.
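
A DDL sketch on a hypothetical rowstore table: plain keys default to skip-list indexes, while USING HASH requests a lock-free hash index for exact-match lookups.

    CREATE TABLE users (
        id BIGINT,
        email VARCHAR(255),
        signup_date DATETIME,
        PRIMARY KEY (id),          -- skip list (default index type), ordered
        KEY (signup_date),         -- skip list, supports range and ordered scans
        KEY (email) USING HASH     -- lock-free hash table, equality lookups only
    );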

Storage Architecture

Disk-oriented, In-Memory

In MemSQL, the rowstore is kept entirely in memory, while the columnstore is backed by disk.

Checkpoints

Non-Blocking, Consistent

Because MemSQL uses multi-version concurrency control, it can naturally create a consistent (non-fuzzy) snapshot without blocking ongoing transactions.

Data Model

Relational, Key/Value

MemSQL is a distributed relational database. It can also be used as a key/value store by creating two-column tables.
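
Used that way, the key/value "store" is simply a two-column table; the names and the JSON value type below are illustrative.

    CREATE TABLE kv (
        k VARCHAR(255) PRIMARY KEY,
        v JSON
    );

    -- Upsert and point lookup using MySQL-compatible syntax.
    INSERT INTO kv VALUES ('user:42', '{"plan": "pro"}')
        ON DUPLICATE KEY UPDATE v = VALUES(v);

    SELECT v FROM kv WHERE k = 'user:42';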

System Architecture

Shared-Nothing

MemSQL has a two-tier, shared-nothing clustered architecture. The nodes in the upper tier are aggregators, which are cluster-aware query routers. One special node, the Master Aggregator, is responsible for cluster monitoring. The nodes in the lower tier are leaves, which store and process the partitioned shards of data. The aggregators send extended SQL to the leaves to perform distributed query execution.
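
Cluster membership can be inspected from an aggregator; a brief sketch using MemSQL's administrative statements (output columns vary by version):

    SHOW AGGREGATORS;  -- lists the aggregator nodes (query routers)
    SHOW LEAVES;       -- lists the leaf nodes holding the data partitions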

Query Compilation

Code Generation, JIT Compilation

Instead of the traditional interpreter-based execution model, MemSQL 5 introduced a new code generation architecture that compiles a SQL query through LLVM down to machine code. When the MemSQL server encounters a SQL query, it parses the SQL into an AST and extracts the parameters, then transforms the AST into a MemSQL-specific intermediate representation called MemSQL Plan Language (MPL). MemSQL then flattens the MPL AST into a more compact format, MemSQL Bytecode (MBC). Plans in MBC format are then translated into LLVM bitcode, from which LLVM generates machine code. This code generation architecture enables many low-level optimizations and avoids much of the unnecessary work of interpreter-based execution. Compiled plans are also cached on disk for future use.