- How does Datalog compare to SQL?
Datalog is a well-established deductive query language that combines facts and rules during execution to achieve the same power as relational algebra plus recursion (e.g. SQL with Common Table Expressions). Datalog makes heavy use of efficient joins over granular indexes, which removes any need to think about upfront normalisation and query shapes. Datalog already has significant traction in both industry and academia.
The EdgeDB team wrote a popular blog post outlining the shortcomings of SQL, and Datalog is the only broadly proven alternative. Additionally, the use of EDN Datalog from Clojure makes queries "much more programmable" than the equivalent task of building SQL strings in any other language, as explained in this blog post.
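As a small illustration of that programmability, here is a minimal sketch (using hypothetical :user/* attributes) showing how an EDN Datalog query can be assembled with ordinary Clojure data manipulation instead of string concatenation:

    ;; A query is plain EDN data, so it can be composed with ordinary
    ;; Clojure functions rather than by concatenating strings.
    (def base-query
      '{:find  [?e]
        :where [[?e :user/name ?name]]})

    (defn restrict-to-country [query country]
      ;; add an extra join clause programmatically
      (update query :where conj ['?e :user/country country]))

    (restrict-to-country base-query "Sweden")
    ;; => {:find [?e], :where [[?e :user/name ?name] [?e :user/country "Sweden"]]}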
We plan to provide limited SQL/JDBC support for Crux in the future, potentially using Apache Calcite.
- How does Crux compare to Datomic (On-Prem)?
At a high level Crux is bitemporal, document-centric, schemaless, and designed to work with Kafka as an "unbundled" database. Bitemporality provides a user-assigned "valid time" axis for point-in-time queries in addition to the underlying system-assigned "transaction time". The main similarities are that both systems support EDN Datalog queries (though they are not compatible), are written using Clojure, and provide elegant use of the database "as a value".
In the excellent talk "Deconstructing the Database" by Rich Hickey, he outlines many core principles that informed the design of both Datomic and Crux:
Declarative programming is ideal
SQL is the most popular declarative programming language but most SQL databases do not provide a consistent "basis" for running these declarative queries because they do not store and maintain views of historical data by default
Client-server considerations should not affect how queries are constructed
Recording history is valuable
All systems should clearly separate reaction and perception: a transactional component that accepts novelty and passes it to an indexer that integrates novelty into the indexed view of the world (reaction) + a query support component that accepts questions and uses the indexes to answer the questions quickly (perception)
Traditionally a database was a big, complicated, special thing, and you only had one. You would communicate with it in a foreign language, such as SQL strings. These are legacy design choices
Questions dominate in most applications, or in other words, most applications are read-oriented. Therefore arbitrary read-scalability is a more general problem to address than arbitrary write-scalability (if you need arbitrary write-scalability then you inevitably have to sacrifice system-wide transactions and consistent queries)
Using a cache for a database is not simple and should never be viewed as an architectural necessity: "When does the cache get invalidated? It’s your problem!"
The relational model makes it challenging to record historical data for evolving domains and therefore SQL databases do not provide an adequate "information model"
Accreting "facts" over time provides a real information model and is also simpler than recording relations (composite facts) as seen in a typical relational database
RDF is an attempt to create a universal schema for information using [subject predicate object] triples as facts. However RDF triples are not sufficient because these facts do not have a temporal component (e.g. a timestamp or transaction coordinate)
Perception does not require coordination and therefore queries should not affect concurrently executing transactions or cause resource contention (i.e. "stop the world")
"Reified process" (i.e. transaction metadata and temporal indexing) should enable efficient historical queries and make interactive auditing practical
Enabling the programmer to use the database "as a value" is dramatically less complex than working with typical databases in a client-server model and it very naturally aligns with functional programming: "The state of the database is a value defined by the set of facts in effect at a given moment in time."
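As a rough sketch of that last point, assuming node is a running Crux node and using the crux.query/db function referenced later in this FAQ (the exact arities are assumptions):

    ;; Sketch only: node is assumed to be a running Crux node and the
    ;; arities of crux.query/db are assumptions.
    (def db-now       (crux.query/db node))
    (def db-last-year (crux.query/db node #inst "2018-01-01"))
    ;; Both are stable values: re-running the same query against
    ;; db-last-year always returns the same result, because the value is
    ;; fixed by the facts in effect at that moment in time.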
Rich then outlines how these principles are realised in the original design for Datomic (now "Datomic On-Prem") and this is where Crux and Datomic begin to diverge:
Datomic maintains a global index which can be lazily retrieved by peers from shared "storage". Conversely, a Crux node represents an isolated coupling of local storage and local indexing components together with the query engine. Crux nodes are therefore fully independent aside from the shared transaction log and document log
Both systems rely on existing storage technologies for the primary storage of data. Datomic’s covering indexes are stored in a shared storage service with multiple back-end options. Crux, when used with Kafka, uses basic Kafka topics as the primary distributed store for content and transaction logs.
Datomic peers lazily read from the global index and therefore automatically cache their dynamic working sets. Crux does not use a global index and currently does not offer any node-level sharding either so each node must contain the full database. Crux will support manual node-level sharding in the future via simple configuration. One benefit of manual sharding is that both the size of the Crux node on disk and the long-tail query latency will be more predictable
Datomic uses an explicit "transactor" component, whereas the role of the transactor in Crux is fulfilled by a passive transaction log (e.g. a single-partition Kafka topic) where unconfirmed transactions are optimistically appended, and therefore a transaction in Crux is not confirmed until a node reads from the transaction log and confirms it locally
Datomic’s transactions and transaction functions are processed via a centralised transactor which can be configured for High-Availability using standby transactors. Centralised execution of transaction functions is effectively an optimisation that is useful for managing contention whilst minimising external complexity, and the trade-off is that the use of transaction functions will ultimately impact the serialised transaction throughput of the entire system. Crux does not currently provide a standard means of creating transaction functions but it is an area we are keen to see explored. If transaction functions and other kinds of validations of constraints are needed then it is recommended to use a gatekeeper pattern which involves electing a primary Crux node (e.g. using ZooKeeper) to execute transactions against, thereby creating a similar effect to Datomic’s transactor component
Other differences compared to Crux:
Datomic’s datom model provides a very granular and comprehensive interface for expressing novelty through the assertion and retraction of facts. Crux instead uses documents (i.e. schemaless EDN maps) which are atomically ingested and processed as groups of facts that correspond to top-level fields within each document (see the sketch after this list). This design choice simplifies bitemporal indexing (i.e. the use of valid time + transaction time coordinates) whilst satisfying typical requirements and improving the ergonomics of integration with other document-oriented systems. Additionally, the ordering of values under the same key in a document is naturally preserved and can be readily retrieved, whereas Datomic requires explicit modelling of order for cardinality-many attributes. The main downside of Crux’s document model is that re-transacting entire documents to update a single field can be considered inefficient, but this could be mitigated using lower-level compression techniques and content-addressable storage. Retractions are implicit and deleted documents are simply replaced with empty documents
Datomic enforces a simple information schema for attributes including explicit reference types and cardinality constraints. Crux is schemaless as we believe that schema should be optional and be implemented as higher-level "decorators" using a spectrum of schema-on-read and/or schema-on-write designs. Since Crux does not track any reference types for attributes, Datalog queries simply attempt to evaluate and navigate attributes as reference types during execution
Datomic’s Datalog query language is more featureful and has more built-in operations than Crux’s equivalent; however, Crux also returns results lazily and can spill to disk when sorting large result sets. Both systems provide powerful graph query possibilities
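To make the document model concrete, here is a hypothetical sketch of a document and an update; the exact shape of the put operation varies between Crux versions and is an assumption:

    ;; A document is a schemaless EDN map whose only required key is
    ;; :crux.db/id; every top-level field is indexed as a fact.
    (def ivan
      {:crux.db/id :ivan
       :name       "Ivan"
       :last-name  "Petrov"})

    ;; Updating a single field means re-transacting the whole document,
    ;; e.g. submitted as a put operation (the op shape is an assumption):
    ;; [[:crux.tx/put (assoc ivan :last-name "Sidorov")]]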
Note that Datomic Cloud is a separate technology platform that is designed from the ground up to run on AWS, and it is out of scope for this comparison.
In summary, Datomic (On-Prem) is a proven technology with a well-reasoned information model and sophisticated approach to scaling. Crux offloads primary scaling concerns to distributed log storage systems like Kafka (following the "unbundled" architecture) and to standard operational features within platforms like Kubernetes (e.g. snapshotting of nodes with pre-built indexes for rapid horizontal scaling). Unlike Datomic, Crux is document-centric and uses a bitemporal information model to enable business-level use of time-travel queries.
- Is Crux eventually consistent? Strongly consistent? Or something else?
An easy answer is that Crux is "strongly consistent" with ACID semantics.
- What consistency does Crux provide?
A Crux ClusterNode system provides sequential consistency by default due to the use of a single unpartitioned Kafka topic for the transaction log. Transactions are executed non-interleaved (i.e. a serial schedule) on every Crux node independently. Being able to read your writes when using the HTTP interface requires stickiness to a particular node. For a cluster of nodes to be linearizable as a whole would require that every node always sees the result of every transaction immediately after it is written. This could be achieved at the cost of non-trivial additional latency. Further reading: http://www.bailis.org/papers/hat-vldb2014.pdf, https://jepsen.io/consistency/models/sequential
- How is consistency provided by Crux?
Crux does not try to enforce consistency among nodes: they all consume the log in the same order, but they may be at different points. A client using the same node will have a consistent view. Reading your own writes can be achieved by providing the transaction time Kafka assigned to the submitted transaction, which is returned in a promise from crux.tx/submit-tx, in the call to crux.query/db. This will block until this transaction time has been seen by the cluster node.
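A minimal sketch of this read-your-writes pattern, assuming the promise dereferences to a map containing the transaction time and that crux.query/db accepts valid-time and transaction-time arguments (both assumptions):

    ;; Sketch only: the :crux.tx/tx-time key and the arities below are
    ;; assumptions based on the description above. node is a running
    ;; Crux node and tx-ops is a vector of transaction operations.
    (let [tx-time (:crux.tx/tx-time @(crux.tx/submit-tx node tx-ops))]
      ;; Passing the transaction time blocks until this node has indexed
      ;; that transaction, so the writes above are guaranteed to be visible.
      (crux.query/db node tx-time tx-time))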
Write consistency across nodes is provided via the :crux.db/cas operation. The user needs to attempt to perform a CAS, then wait for the transaction time (as above), and check that the entity got updated. More advanced algorithms can be built on top of this. As mentioned above, all CAS operations in a transaction must pass their pre-condition check for the transaction to proceed and get indexed, which enables one to enforce consistency across documents. There is currently no way to check if a transaction got aborted, apart from checking if the write succeeded.
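A rough sketch of this CAS flow; only the :crux.db/cas keyword comes from this FAQ and the exact operation shape is an assumption:

    ;; Sketch: swap a document only if the currently indexed version still
    ;; matches old-doc. The operation shape below is an assumption.
    (def old-doc {:crux.db/id :alice-account :balance 100})
    (def new-doc (assoc old-doc :balance 90))

    (crux.tx/submit-tx node [[:crux.db/cas old-doc new-doc]])
    ;; ...then wait for the transaction time (as above) and re-read the
    ;; entity to check whether the update took effect, since an aborted
    ;; transaction is not reported directly.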
- Will a lack of schema lead to confusion?
It of course depends.
While Crux does not enforce a schema, the user may do so in a layer above to achieve the semantics of schema-on-read (per node) and schema-on-write (via a gateway node). Crux only requires that the data can be represented as valid EDN documents. Data ingested from different systems can still be assigned qualified keys, which avoids collisions without requiring a shared schema to be defined. Defining such a common schema up front might be prohibitive and Crux instead aims to enable exploration of the data from different sources early. This exploration can also help discover and define the common schema of interest.
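For example, documents ingested from two different systems might carry namespaced keys for the same logical field (the attribute names below are hypothetical), avoiding collisions without any agreed schema:

    ;; Hypothetical attributes: the same logical field from two source
    ;; systems, kept distinct by namespaced keys with no shared schema.
    {:crux.db/id :customer-42
     :crm.customer/email     "a@example.com"
     :billing.customer/email "a@example.com"}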
Enforcing constraints on the data, to avoid indexing all attributes, can be done with crux.index.ValueToBytes. Crux does this internally via crux.index.ValueToBytes for strings, for example, only indexing the full string with range query support up to 128 characters, and as an opaque hash above that limit. Returning an empty byte array means a value is not indexed. We are aiming to extend this to give further control over what to index. This is useful both to increase throughput and to save disk space. A smaller index also leads to more efficient queries.
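A minimal sketch of such a constraint, assuming crux.index.ValueToBytes exposes a single value->bytes method (the method name is an assumption):

    ;; Sketch only: assumes crux.index.ValueToBytes exposes a single
    ;; value->bytes method (the method name is an assumption) and that
    ;; the crux.index namespace has been required.
    (defrecord DoNotIndex [v])

    (extend-protocol crux.index/ValueToBytes
      DoNotIndex
      (value->bytes [_]
        (byte-array 0)))   ; an empty byte array means the value is not indexed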
- How does Crux deal with time?
The valid time can be set manually per transaction operation, and might already be defined by an upstream system before reaching Crux. This also makes it possible to deal with integration concerns such as data arriving late because a message queue was down.
If not set, Crux defaults valid time to the transaction time, which is the LogAppendTime assigned by the Kafka broker to the transaction record. This time is taken from the local clock of the Kafka broker, which acts as the master wall clock time.
Crux does not rely on clock synchronisation or try to make any guarantees about valid time. Assigning valid time manually needs to be done with care: either there has to be a clear owner of the clock, or the exact valid time ordering between different nodes must not strictly matter for the data where it is used. NTP can mitigate this, potentially to an acceptable degree, but it cannot fully guarantee ordering between nodes.
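For instance, an upstream event’s own timestamp could be supplied as the valid time of a put operation (as noted earlier, the operation shape is an assumption):

    ;; Sketch: record that this version of the document was valid from the
    ;; upstream event's own timestamp rather than from ingestion time.
    ;; The put operation shape is an assumption.
    (crux.tx/submit-tx node
                       [[:crux.tx/put
                         {:crux.db/id :sensor-a :temperature 21.5}
                         #inst "2019-03-01T12:00:00.000-00:00"]])   ; valid time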
- Does Crux support RDF/SPARQL?
No. We have a simple ingestion mechanism for RDF data in crux.rdf but this is not a core feature. There is also a query translator for a subset of SPARQL. RDF and SPARQL support could eventually be written as a layer on top of Crux as a module, but there are no plans for this by the core team.
- Does Crux provide transaction functions?
Not directly, currently. As the log is ingested in the same order on all nodes, purely functional transformations of the tx-ops are possible. The current transaction operations are implemented via a multimethod, crux.tx/tx-command, which can be extended with further implementations. To make this work, the spec :crux.tx/tx-op also needs to be extended to accept the new operation. A transaction command returns a map containing keys such as :post-condition-fn (the functions are optional). Alternatively you may use a "gatekeeper" pattern to enforce the desired level of transaction function consistency.
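A very rough sketch of such an extension; the crux.tx/tx-command multimethod and the :post-condition-fn key come from this FAQ, while the dispatch value, argument list and everything else are assumptions:

    ;; Sketch only: a hypothetical :my.app/put-if-absent operation. The
    ;; dispatch value, argument list and return-map details other than
    ;; :post-condition-fn are assumptions and depend on the Crux version.
    (defmethod crux.tx/tx-command :my.app/put-if-absent
      [& args]
      ;; a map of optional functions; this post-condition always passes
      {:post-condition-fn (fn [& _] true)})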
- Does Crux support the full Datomic/DataScript dialect of Datalog?
No. There is no support for Datomic’s built-in functions, or for accessing the log and history directly. There is also no support for variable bindings or multiple source vars.
Other differences include that :args, a relation represented as a list of maps that is joined with the query, is provided in the same query map as the :where clause. Crux additionally supports the built-in == for unification, as well as !=. Both of these unification operators can also take sets of literals as arguments, requiring at least one element to match, which is effectively a form of or.
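For example, a single query map might join an :args relation and unify a variable against a set of literals (the key and variable naming conventions are assumptions):

    ;; Sketch (key/variable naming conventions are assumptions):
    ;; :args is a relation, a list of maps joined with the query;
    ;; == unifies a variable against a set of literals, so at least
    ;; one element must match.
    {:find  '[e]
     :where '[[e :user/name name]
              [e :user/role role]
              [(== role #{:admin :owner})]]
     :args  [{:name "Ivan"}
             {:name "Petr"}]}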
Many of these aspects may be subject to change, but compatibility with other Datalog databases is not a goal for Crux.
- Any plans for Datalog, Cypher, Gremlin or SPARQL support?
The goal is to support different languages, and to decouple the query engine from its syntax, but this is not currently the case. There is a query translator for a subset of SPARQL in crux.rdf.
- Does Crux support sharding?
Not currently. We are considering support for sharding the document topic as this would allow nodes to easily consume only the documents they are interested in. At the moment the tx-topic must use a single partition to guarantee transaction ordering. We are also considering support for sharding this topic via partitioning or by adding more transaction topics. Each partition / topic would have its own independent timeline, but Crux would still support cross-shard queries. Sharding is mainly useful to increase throughput.
- Does Crux support pull expressions?
No. As each Crux node is its own document store, the documents are local to the query node and can easily be accessed directly via the lower level read operations. We aim to make this more convenient soon.
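As a hedged sketch of that direct access, assuming an entity-lookup function in crux.query (the function name is an assumption):

    ;; Sketch only: the entity function name is an assumption; the point
    ;; is that documents are local and can be read without a query.
    (let [db    (crux.query/db node)
          order (crux.query/entity db :order-1)]        ; fetch the full document
      ;; "joins" become ordinary map lookups over local documents
      (crux.query/entity db (:order/customer order)))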
We are also considering support for remote document stores via the crux.db.ObjectStore interface, mainly to support larger data sets, but there would still be a local cache. The indexes would stay local as this is key to efficient queries.
- Do you have any benchmarks?
We are releasing a public benchmark dashboard in the near future. In the meantime feel free to run your own local tests using the scripts in the /test directory. The RocksDB project has performed some impressive benchmarks which give a strong sense of how large a single Crux node backed by RocksDB can confidently scale to. LMDB is generally faster for reads and RocksDB is generally faster for writes.