BonsaiDb is a developer-friendly document database for Rust that grows with you. It offers many features out of the box that developers need:
- ACID-compliant, transactional storage of Collections
- Atomic Key-Value storage with configurable delayed persistence (similar to Redis)
- At-rest Encryption
- Backup/Restore
- Role-Based Access Control (RBAC)
- Local-only access, networked access via QUIC, or networked access via WebSockets
- And much more.
BonsaiDb is considered alpha software. It is under active development. There may still be bugs that result in data loss. All users should regularly back up their data and test that restoring from backup works correctly.
Around May 2022, a bug and a mistake in benchmarking were discovered. The bug was promptly fixed, but the net result is that BonsaiDb's transactional write performance is significantly slower than other databases. Unless you're building a very write-heavy application, the performance will likely still be acceptable. Progress on the performance updates is being tracked in issue #251 on GitHub. From a developer's perspective, migration is expected to be painless beyond the IO needed to copy the old database into the new format.
To get an idea of how it works, let's review the `view-examples` example. See the examples README for a list of all available examples.

The `view-examples` example shows how to define a simple schema containing a single collection (`Shape`), a view to query the `Shape`s by their `number_of_sides` (`ShapesByNumberOfSides`), and demonstrates multiple ways to query that view.
First, here's how the schema is defined:
```rust
#[derive(Debug, Serialize, Deserialize, Collection)]
#[collection(name = "shapes", views = [ShapesByNumberOfSides])]
struct Shape {
    pub sides: u32,
}

#[derive(Debug, Clone, View)]
#[view(collection = Shape, key = u32, value = usize, name = "by-number-of-sides")]
struct ShapesByNumberOfSides;

impl CollectionViewSchema for ShapesByNumberOfSides {
    type View = Self;

    fn map(&self, document: CollectionDocument<Shape>) -> ViewMapResult<Self::View> {
        document
            .header
            .emit_key_and_value(document.contents.sides, 1)
    }

    fn reduce(
        &self,
        mappings: &[ViewMappedValue<Self>],
        _rereduce: bool,
    ) -> ReduceResult<Self::View> {
        Ok(mappings.iter().map(|m| m.value).sum())
    }
}
```
After you have your collection(s) and view(s) defined, you can open up a database and insert documents:
```rust
let db = Database::open::<Shape>(StorageConfiguration::new("view-examples.bonsaidb"))?;

// Insert a new document into the Shape collection.
Shape { sides: 3 }.push_into(&db)?;
```
And query data using the Map-Reduce-powered view:
```rust
let triangles = ShapesByNumberOfSides::entries(&db).with_key(&3).query()?;
println!("Number of triangles: {}", triangles.len());
```
You can review the full example in the repository, or see all available examples in the examples README.
Our user's guide is early in development, but is available at: https://dev.bonsaidb.io/main/guide/
While this project is alpha, we are actively adopting the current version of Rust. The current minimum version is 1.64.
No feature flags are enabled by default in the `bonsaidb` crate. This is because in most Rust executables, you will only need a subset of the functionality. If you'd prefer to enable everything, you can use the `full` feature:
```toml
[dependencies]
bonsaidb = { version = "*", features = ["full"] }
```
- `full`: Enables the features below and `local-full`, `server-full`, and `client-full`.
- `cli`: Enables the `bonsaidb` executable.
- `files`: Enables file storage support with `bonsaidb-files`.
- `password-hashing`: Enables the ability to use password authentication using Argon2 via `AnyConnection`.
- `token-authentication`: Enables the ability to authenticate using authentication tokens, which are similar to API keys.
All other feature flags, listed below, affect each crate individually, but can be safely combined.
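As an illustration of combining flags, the sketch below enables local storage with async support alongside a WebSocket-capable client. This is only a hypothetical combination; the feature names are taken from the lists that follow, and the exact subset depends on your application:

```toml
[dependencies]
# Hypothetical combination: an embedded local database with async support,
# plus a networked client that connects over WebSockets.
bonsaidb = { version = "*", features = ["local", "async", "client", "websockets"] }
```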
```toml
[dependencies]
bonsaidb = { version = "*", features = ["local-full"] }
```
All Cargo features that affect local databases:
- `local-full`: Enables all the flags below.
- `local`: Enables the `local` module, which re-exports the crate `bonsaidb-local`.
- `async`: Enables async support with Tokio.
- `cli`: Enables the `clap` structures for embedding database management commands into your own command-line interface.
- `compression`: Enables support for compressed storage using lz4.
- `encryption`: Enables at-rest encryption.
- `instrument`: Enables instrumenting with `tracing`.
- `password-hashing`: Enables the ability to use password authentication using Argon2.
- `token-authentication`: Enables the ability to authenticate using authentication tokens, which are similar to API keys.
```toml
[dependencies]
bonsaidb = { version = "*", features = ["server-full"] }
```
All Cargo features that affect networked servers:
- `server-full`: Enables all the flags below.
- `server`: Enables the `server` module, which re-exports the crate `bonsaidb-server`.
- `acme`: Enables automatic certificate acquisition through ACME/LetsEncrypt.
- `cli`: Enables the `cli` module.
- `compression`: Enables support for compressed storage using lz4.
- `encryption`: Enables at-rest encryption.
- `hyper`: Enables convenience functions for upgrading websockets using `hyper`.
- `instrument`: Enables instrumenting with `tracing`.
- `pem`: Enables the ability to install a certificate using the PEM format.
- `websockets`: Enables `WebSocket` support.
- `password-hashing`: Enables the ability to use password authentication using Argon2.
- `token-authentication`: Enables the ability to authenticate using authentication tokens, which are similar to API keys.
```toml
[dependencies]
bonsaidb = { version = "*", features = ["client-full"] }
```
All Cargo features that affect networked clients:
- `client-full`: Enables all flags below.
- `client`: Enables the `client` module, which re-exports the crate `bonsaidb-client`.
- `trusted-dns`: Enables using trust-dns for DNS resolution. If not enabled, all DNS resolution is done with the OS's default name resolver.
- `websockets`: Enables `WebSocket` support for `bonsaidb-client`.
- `password-hashing`: Enables the ability to use password authentication using Argon2.
- `token-authentication`: Enables the ability to authenticate using authentication tokens, which are similar to API keys.
Unless there is a good reason not to, every feature in BonsaiDb should have thorough unit tests. Many tests are implemented in `bonsaidb_core::test_util` via a macro that allows the suite to run using various methods of accessing BonsaiDb.
Some features aren't able to be tested using only the `Connection`, `StorageConnection`, `KeyValue`, and `PubSub` traits. If that's the case, you should add tests to whichever crate makes the most sense to test the code. For example, if it's a feature that can only be used in `bonsaidb-server`, the test should be somewhere in the `bonsaidb-server` crate.
Tests that require both a client and server can be added to the `core-suite` test file in the `bonsaidb` crate.
We use `clippy` to give additional guidance on our code. Clippy should always return with no errors, regardless of which feature flags are enabled:
```sh
cargo clippy --all-features
```
Our CI processes require that some commands succeed without warnings or errors. These checks can be performed manually by running:
```sh
cargo xtask test --fail-on-warnings
```
Or, if you would like to run all these checks before each commit, you can install the check as a pre-commit hook:
```sh
cargo xtask install-pre-commit-hook
```
We have a custom rustfmt configuration that enables several options only available in nightly builds:
```sh
cargo +nightly fmt
```
This project, like all projects from Khonsu Labs, is open-source. This repository is available under the MIT License or the Apache License 2.0.
To learn more about contributing, please see CONTRIBUTING.md.