External Storage


Simulate simulations are composed of multiple worker processes (and, if required, a single global state process shared between them). Each worker process executes the user-provided simulation code, which can contain direct calls to an external database service. This works well for reads, but having multiple processes requires additional synchronisation for writes, to avoid workers overwriting one another's data for entities moving between them. For this purpose, Simulate provides the external_storage API; however, it remains possible to write to the database directly from the simulation code if this synchronisation is not required.
Reading and Writing to an External Database from the Simulation Code
To execute a query that reads from or writes directly to the database, call it from within one of the standard simulation callbacks:
  • the initialize_cell callback, on cell creation
  • the initialize_world callback, on first cell creation
  • the cell_tick callback, on each simulation update
Reads and writes can be executed as long as the database access code is asynchronous and does not block the simulation. (Blocking is not fatal, but it can cause poor performance.) An example of wrapping synchronous database calls in asynchronous code can be found in the SDK demo.

Hadean Simulate's External Storage API for Workers

Control Flow of the External Storage API

The external storage API (yellow) uses a producer/consumer pattern, in which the worker simulation thread returns an object to be saved by the worker's background storage thread.
The manager synchronises this process across all the worker and global state processes, only allowing the tick_data function to be called once the data returned for the previous tick has finished serialising (i.e. external_storage_base::store() has returned on all workers). This scheme prevents data from backing up inside the persistence layer, or from being overwritten on another worker. Database reads are performed directly from Simulate's simulation hooks using asynchronous method calls, so as not to block the simulation.


Define Types Used for External Storage
// The data object which contains the data to be exported to the db for each tick
struct tick_data {
    std::set<uint64_t> data;
};

// The worker object which will be used to write the tick data to the external storage
class worker_storage : public aether::external_storage_base<tick_data> {
    std::unique_ptr<dbConnection> db;

public:
    worker_storage(get_data_fn_type _get_data) :
        aether::external_storage_base<tick_data>(_get_data) {
        // Construct the context for the store method here, in order to reuse
        // the connection objects between store() calls.
    }

    ~worker_storage() {
        // Cleanup, if any.
    }

    // Callback called when the simulation provides data to store
    void store(const aether::storage_state& storage_state, tick_data&& data) override {
        // Do your database writes here.
        // You can use storage_state.get_tick() to "tickstamp" the stored data;
        // this is useful for implementing an append-only data model.
        uint64_t tick_id = storage_state.get_tick();
        // You can use get_completed_external_storage_tick() to access the tick id
        // for which the storage is completed on all the workers; this can be used
        // to mark the tick as complete when using multiple workers.
        uint64_t completed_tick_id = storage_state.get_completed_external_storage_tick();
    }

    // Optional callback, called when a db write has been completed on all of the workers
    void store_tick_data_ack(uint64_t tick_id) override {
        // If you use an append-based data model, you can use this callback to write
        // the most recently completed tick to the database. Otherwise this callback
        // isn't of much use and can be ignored.
    }
};
Override the Storage Builder Function
// Define a function which will return the data to be saved for a given tick.
// It is called every tick, but only if storing previous ticks has completed on all workers.
std::optional<tick_data> data_for_tick(const aether::storage_state &storage_state, user_cell_state &state) {
    // Return the tick_data object, or an empty optional if there is nothing to persist this tick.
}

// Define a builder which constructs our worker_storage object and passes data_for_tick to it.
auto user_cell_state_impl::build_external_storage(const aether_state_type &aether_state)
    -> std::unique_ptr<aether::external_storage_iface> {
    auto &user_state = get_store();
    return std::make_unique<worker_storage>([&](const aether::storage_state &state) {
        return data_for_tick(state, user_state);
    });
}

Simulate’s External Storage API (for global_state)

The global state process sends and receives messages from all the workers. External storage for global state works in the same way as worker external storage; the only difference in the API is the registration of the external_storage_base implementation.
class global_storage : public aether::external_storage_base<global_tick_data> { /*...*/ };

class my_global_state : public aether::global_state_base {
    // Define a function which will return the data to be saved for a given tick.
    // It is called every tick, but only if storing previous ticks has completed on all workers.
    std::optional<global_tick_data> data_for_tick(const aether::storage_state &) {
        // ...
    }

    // Define a builder which constructs our global_storage object and passes the data_for_tick method to it.
    virtual std::unique_ptr<aether::external_storage_iface> build_external_storage() override {
        return std::make_unique<global_storage>([&](const aether::storage_state &storage_state) {
            return data_for_tick(storage_state);
        });
    }
};

DB Integration and Data Model Design

Depending on the scenario and on the exact database used, there are several possible designs for integrating Simulate.

Multiple Database Writers Updating Data In-Place

This is the most straightforward approach: every process handles its own updates in a private transaction. It is also the most performant approach, but it provides no consistency guarantees: a reader may connect mid-update, when only some of the workers' transactions have been committed.

Multiple Database Writers Using Distributed Database Transactions

Using distributed transactions ensures that updating in place does not suffer from consistency issues. The exact details vary from database to database, but a two-phase commit is usually involved.

Multiple Database Writers with Append-Based Data Model

Append-based data models are more complicated than update-in-place models, but they can provide consistent reads without requiring distributed transactions. Consistency with this approach is achieved by storing a timestamp (or "tickstamp" when using ticks) with each update, together with a global counter which marks the last finished transaction.
To support this approach, Simulate provides the external_storage_base::store_tick_data_ack callback, which is called when external_storage_base::store() has completed on all the workers for a particular tick. It is also possible to access both the current tick and the tick of the last finished transaction in the data_for_tick callback, via aether::storage_state::get_tick() and aether::storage_state::get_completed_external_storage_tick().
The two most popular approaches for making an append-based data model are:
  • Adding a "tickstamp" to every key of the equivalent update-in-place data model, and running garbage collection on old entries
  • Event sourcing

Single Database Writer in the Global State Process

This is the suggested approach when using databases that do not support concurrent writers. By limiting database writes to a single process, contention between transactions is no longer a concern, since all updates are performed sequentially. However, with this approach all workers communicate with a single process, forming an N-to-1 pattern which, depending on the amount of data transmitted, may not be suitable when the number of workers N is large. In such cases, a database that supports concurrent writers must be used, with workers interacting with it independently.