Metrics metrics and more metrics

Know what your nodes are up to

(api/start-node
 {:crux.node/topology ['crux.standalone/topology
                       'crux.kv.rocksdb/kv-store-with-metrics
                       'crux.metrics/with-prometheus-http-exporter]
  :crux.kv/db-dir "data/db-dir-1"
  :crux.standalone/event-log-kv-store 'crux.kv.rocksdb/kv
  :crux.standalone/event-log-dir "data/eventlog-1"})

In the new 1.7 alpha release of Crux there is a LOT of content: some bug fixes, some refactoring, and some new features, including the addition of metrics!
Metrics give you information on how long your queries are taking, how many documents are being ingested and how quickly, and how large your local store is (amongst other things).
In the first release of this feature we have included metrics for several key components:

- query engine
- indexer
- RocksDB
Dropwizard’s metrics library is used to create a registry that can then be passed around to reporters that expose the internal metrics. Currently we provide five reporters to expose them:

- console output
- CSV file
- Prometheus HTTP exporter
- Prometheus reporter
- AWS Cloudwatch
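As a quick illustration, wiring one of these reporters into a node follows the same topology-map pattern used throughout this post. Note that the module name `'crux.metrics/with-console` below is an assumption based on the naming of the other reporters here — check the docs for the exact name in your version:

```clojure
;; A minimal sketch: a standalone node with the console reporter.
;; NOTE: 'crux.metrics/with-console is an illustrative assumption --
;; consult the Crux docs for the real module name.
(api/start-node
 {:crux.node/topology ['crux.standalone/topology
                       'crux.metrics/with-console]
  :crux.kv/db-dir "data/db-dir-1"
  :crux.standalone/event-log-dir "data/eventlog-1"})
```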
We’ve made metrics as easy as possible to add to existing Crux nodes. The example at the top of this post shows a node with a RocksDB backend which exposes Prometheus metrics on the default port, 8080. RocksDB metrics are also included there via the
kv-store-with-metrics
item in the topology map.
Besides pointing at the endpoint in your Prometheus config, that’s all that’s required.
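For reference, the Prometheus side of that could look something like the following scrape job. The job name and host below are placeholders; 8080 is the default port mentioned above:

```yaml
# Minimal prometheus.yml fragment scraping a Crux node's metrics endpoint.
scrape_configs:
  - job_name: 'crux-node'           # placeholder job name
    static_configs:
      - targets: ['localhost:8080'] # placeholder host; 8080 is the default port
```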
Cloudwatch is just as easy.
(api/start-node
 {:crux.node/topology ['crux.standalone/topology
                       ;; this time without RocksDB metrics
                       'crux.kv.rocksdb/kv-store
                       'crux.metrics/with-cloudwatch]
  :crux.kv/db-dir "data/db-dir-1"
  :crux.standalone/event-log-kv-store 'crux.kv.rocksdb/kv
  :crux.standalone/event-log-dir "data/eventlog-1"})
When run on ECS, the AWS API is able to detect the relevant credentials and upload metrics to Cloudwatch. If you're running locally, we also provide options to specify the desired region to upload to and to provide valid credentials.
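For local development, those options might be supplied alongside the topology. The option keys below are purely illustrative assumptions, not the real names — see the docs for the exact keys:

```clojure
;; Sketch only: the region/credentials option keys shown here are
;; hypothetical placeholders -- check the Crux docs for the real ones.
(api/start-node
 {:crux.node/topology ['crux.standalone/topology
                       'crux.kv.rocksdb/kv-store
                       'crux.metrics/with-cloudwatch]
  :crux.kv/db-dir "data/db-dir-1"
  :crux.standalone/event-log-dir "data/eventlog-1"
  ;; hypothetical option key for the upload region:
  :crux.metrics.cloudwatch/region "eu-west-2"})
```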
In the most recent Crux showcase I demonstrated this in action working with our new benchmark environment running on AWS (Skip to 11 mins for metrics).
For more configurations and options take a look at the docs.
If you need any more help, give us a shout!