Crux Development Diary Ep. 4

Unlocking New Futures

June 18, 2020
Jeremy Taylor

Welcome to a new instalment of our development journey…

Crux is a database that prioritises flexibility above all else:

  • flexibility for system architects to mix & match storage technologies that align with the requirements, expertise and budget

  • flexibility for data architects to capture and accommodate evolving business domains after-the-fact

  • flexibility for developers to efficiently combine their code with point-in-time graph queries, embeddable directly within a JVM application

However, our most crucial measure of flexibility is the core development team’s ability to evolve Crux itself and support increasingly complex requirements without sacrificing internal code simplicity or dramatically altering the underlying designs. To this end we have been very busy since the previous diary entry.

The fully-remote Crux team has been humming along unencumbered these past few months, and consequently, since the 1.6 release there have been 6 new releases including many useful user-facing changes:

  • Rationalised history APIs (1.8.3)

  • Document Store protocol with S3 module (1.8.3)

  • Event hooks for transaction listeners (1.8.3)

  • Simplified "lazy" API usage (1.8.2)

  • Monitoring metrics with support for JMX, AWS CloudWatch, Prometheus and more (1.7.0)

Yesterday we released 1.9.0, which is a meaningful milestone for feature and API stability. We have implemented a variety of internal refactorings during this period that have delivered substantial performance and storage improvements to the indexes and query engine. For certain use-cases where documents are frequently modified, we have reduced the overall disk space used for indexes by 45-60% compared with 1.8.4. Our optimisations in the query engine have yielded a 25-30% improvement across a reasonable subset of the WatDiv query suite in our nightly benchmarks.

The 1.9 release also introduces many new features, most notably Transaction Functions, which unlock a whole new layer of architectural flexibility for users. See the release notes for the full details of 1.9, but let’s look at these significant new features from the perspective of the original requirements.

Transaction Functions

Requirement: Users can express changes to the database with more advanced control and granularity than using basic document-oriented transaction operations

Until now, all data has been submitted to Crux using native put operations, which operate on entire documents: you cannot put half a document, and you cannot delete a single attribute from a document. However, it is a common expectation and requirement to be able to update one or more documents based on the current values contained within their previous versions.

The classic example is maintaining a simple monotonically incrementing counter. The existing options for creating such a counter with Crux have been:

  1. Optimistically put a document with the new count value, e.g. {:crux.db/id :my-counter :count <n + 1>}, whilst also using a match operation to ensure that the current document version of :my-counter still looks as expected, e.g. {:crux.db/id :my-counter :count <n>} (this is necessary to maintain consistency and prevent race-conditions between concurrent transactions)

  2. Funnel all transaction submissions through a single "gatekeeper" node and use something like Zookeeper to handle failover
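For illustration, the first option might look something like the following sketch, assuming a started node bound to node and crux.api required under the crux alias (the surrounding backoff-and-retry logic is omitted):

;; read the current version, then guard the put with a match so the
;; transaction becomes a no-op if a concurrent writer changed
;; :my-counter in the meantime; user code must detect the failed
;; match and retry
(let [current (crux/entity (crux/db node) :my-counter)]
  (crux/submit-tx node
                  [[:crux.tx/match :my-counter current]
                   [:crux.tx/put (update current :count inc)]]))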

The first option is less than ideal, since it creates excess data and churn on the transaction log, particularly where multiple nodes may be writing transactions to update the counter at the same time. The risk of contention and reduced throughput is a major downside, as is the complexity of the backoff & retry logic that user code must implement.

The second option is still a reasonable choice but generally runs against the grain of the Crux philosophy, because centralising authority at a single node for writing to the transaction log undermines the scalability benefits of using clustered technology like Kafka for highly-available and durable write throughput.

Our solution to this problem broadly eliminates the need for any kind of gatekeeper node and has unlocked a whole new layer of possibility in terms of transactional architecture and data modelling. We have designed a feature for expressing custom transaction operations inside your transactions. These user-supplied function operations are called "Transaction Functions" and they are invoked deterministically during the initial indexing of the transaction log.

The use of transaction functions also simplifies the contents of the transaction log by more explicitly capturing the intent of the operations.

(crux/submit-tx node [[:crux.tx/fn :increment-counter :my-counter]])

Requirement: Users can use transaction functions to conditionally update the database atomically based on custom logic with query access to the current database via a provided context

Each transaction function is installed via a put operation and all invocation arguments are stored separately in the document store. Once invoked as an operation, a transaction function has access to a context against which you can run a query, and this is how you can update a counter based on its current value. The result of invoking a transaction function is a list of one or more operations which are spliced into the transaction to replace the calling operation.

[[:crux.tx/put {:crux.db/id :increment-counter
                :crux.db/fn '(fn [ctx eid]
                              (let [db (crux.api/db ctx)
                                    entity (crux.api/entity db eid)]
                                [[:crux.tx/put (update entity :count inc)]]))}]]
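Putting the pieces together, a minimal end-to-end sketch might look like this, assuming node is a started Crux node, the :increment-counter function above has been installed, and indexing has caught up (e.g. via crux/await-tx) between submissions:

;; seed the counter, then invoke the installed transaction function
(crux/submit-tx node [[:crux.tx/put {:crux.db/id :my-counter :count 0}]])
(crux/submit-tx node [[:crux.tx/fn :increment-counter :my-counter]])

;; once indexed, the entity reflects the function's result
(crux/entity (crux/db node) :my-counter)
;; => {:crux.db/id :my-counter, :count 1}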

Nodes which subsequently index the transaction log will not have to repeat this processing of the transaction function operations, because the argument documents (to which the transaction log refers under-the-hood) are idempotently replaced with the resulting native operations. In other words, each transaction function invocation replaces itself with its result in the upstream document store, which maintains consistency whilst not precluding later eviction of the data generated within the results.

Note that we are also working right now on a speculative transaction capability for transaction functions (coming very soon!).

Requirement: Users can express constraints and invariants with Datalog inside of transaction functions

Some keen-eyed users will have already spotted that we previously implemented a variation of the transaction function feature and kept it hidden behind a feature flag. We decided to keep that version disabled by default because its operational design was largely incompatible with the use of eviction: the combination of the two features could too easily lead to inconsistency, so it was only appropriate for a very narrow set of users. The new implementation conveniently avoids those problems by replacing the argument documents with the resulting operations.

Collections within queries

Requirement: Users can express complex queries more succinctly, e.g. where set literals can be used in the attribute (a) or value (v) positions, and predicates can return sets

Queries that would previously require many additional clauses can now be compactly expressed thanks to treating collection literals and predicate return values as sets.

;; for example, with these documents submitted:
{:crux.db/id :hobbits, :members #{:frodo :sam :merry :pippin}}
{:crux.db/id :three-hunters, :members #{:aragorn :legolas :gimli}}

(crux/q db '{:find [?group], :where [[?group :members #{:frodo :aragorn}]]})
;; => #{[:hobbits] [:three-hunters]}

(crux/q db '{:find [?group], :where [[?group :members ?member]
                                     [(vector :frodo :aragorn) ?member]]})
;; => #{[:hobbits] [:three-hunters]}

(crux/q db '{:find [?member], :where [[#{:hobbits :three-hunters} :members ?member]]})
;; => #{[:frodo] [:sam] [:merry] [:pippin] [:aragorn] [:legolas] [:gimli]}

HTTP Server Security

Requirement: Users can configure their Crux HTTP topology to use JWT security

The HTTP server module and remote Clojure API client can now be configured to use JWT security. This allows for integration with authentication systems such as AWS Cognito to protect write and read access to a node.

Requirement: Users can configure their Crux HTTP topology to disable the submit-tx endpoint such that a given node is read-only

Various use-cases can benefit from read-only access to a Crux node, allowing users to freely share access to data stored within Crux without having to introduce reverse proxies or solve the authorisation problem at a lower level, such as using Kafka ACLs.

Deduplicated indexing of entity history

Requirement: Users can efficiently store multiple document versions against a single entity without incurring a storage penalty for identical attribute-value combinations

Even though storage is cheaper than ever, eliminating excessive storage usage is always a good idea. In the course of analysing disk usage more generally we identified that there are common scenarios where data across document versions only changes partially between historic and new versions. This is most clearly visible when modelling time-series data as a history of documents, where many attribute-value combinations are likely to be repeated. With the revamped index layout we have now considerably improved the performance seen in our nightly time-series benchmark tests.
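As a hypothetical illustration, consider successive versions of a single sensor entity where only one attribute changes between readings:

;; :location and :unit repeat across versions, so only the changed
;; :reading values incur new index storage under the deduplicated
;; layout (illustrative data only)
[[:crux.tx/put {:crux.db/id :sensor-1 :location :warehouse-a
                :unit :celsius :reading 18.2}
  #inst "2020-06-01T00:00:00Z"]
 [:crux.tx/put {:crux.db/id :sensor-1 :location :warehouse-a
                :unit :celsius :reading 18.4}
  #inst "2020-06-01T01:00:00Z"]]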

Also in 1.9:

  • Built-in HTML UI for browsing through Crux data directly on the HTTP server

  • Module stability classifications

  • Removal of previously-deprecated APIs

  • For the full breakdown see the release notes

In the community

Crux received some unexpected interest on Hacker News, with some good discussion on the nature of document databases, event sourcing and schema-on-write.

One personalised recommendations platform is using Crux within its open-source, Firebase-like Clojure stack called Biff.

Elsewhere, we’ve spoken to teams writing transaction log and document store backends for Google Cloud Datastore, teams integrating Crux with Lucene, and teams integrating Crux with various Distributed Ledger Technologies. As ever, we really appreciate hearing about all the interesting things people are working on, so please keep us posted!

Exciting things ahead

  • Crux Live - sign up to our newsletter (see the footer) or keep an eye on our social channels for news about Crux’s first virtual mini-conference event

  • Speculative Transactions - as described in the Transaction Function section above

  • SQL Queries - powered by Apache Calcite, for ad-hoc queries that compile straight to Crux Datalog

  • JSON APIs - not everyone speaks edn yet

  • Scalability Benchmarks - a.k.a. let’s not look at next month’s AWS bill

Have a nice day!

Image Credits: © SpaceX