Blob Store

Overview

The blob store is a simple interface for downloading a binary blob into Simulate so that it can be used by a worker.
The interface lets a worker asynchronously request a binary object; when the object is ready for use, an aether::container::span can be obtained.
There are three main pieces involved in the blob store (see aether/blob-store/blob_store.hh and aether/blob-store/backends.hh).

Blob class

The blob class maintains a memory mapping of a shared memory object that is shared between the workers on the same machine. The is_valid() method can be used to determine whether the blob contains valid data, and to obtain an error string if it does not. The memory can be accessed using the get() method, which returns a span.

Blob store

blob_store provides a single method, get_blob, which returns a future of a blob object. When called, blob_store tries to open the blob from a shared memory location if it is available, or fetches it from a server otherwise. The backend template parameter is used to fetch the blob from the server when it is not yet in memory; it has to implement a fetch function.

Backend

The backend class is the one used to connect to the server and download the blob. It has to implement a fetch function. Two backends are currently implemented:
  • Https_store: connects anonymously to an HTTP or HTTPS server and downloads a blob.
  • Aws_bucket: connects using security credentials to an Amazon S3 bucket and downloads a blob from there.

Configuration

The blob store's configuration is provided by the backend class used to access the server. There are currently two implementations:

Https_store

Connects to an HTTP or HTTPS server. It is configured using the constructor parameters:
  • host: address or name of the server
  • port: port to connect to
  • path: path of the blobs inside the server (optional); root folder by default
  • use_ssl: if true, use SSL; otherwise do not. Off by default
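As a configuration sketch, a store pointed at an HTTPS server might be constructed as follows. The parameter order follows the list above; the exact class name, header path, and constructor signature are assumptions:

```cpp
// Sketch only: class name, header, and parameter order are assumptions
// based on the list above, not a verified signature.
#include <aether/blob-store/backends.hh>

aether::https_store store(
    "blobs.example.com",  // host
    443,                  // port
    "/assets",            // path (optional; root folder by default)
    true);                // use_ssl
```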

Aws_bucket

Connects to an AWS S3 bucket. It is configured using the constructor parameters:
  • bucket_name: name of the bucket
  • access_key_id: ID of the key that will be used to access the bucket
  • secret_key: actual key to the bucket
  • region: region the connection is from
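For example, an Aws_bucket could be constructed as follows. The parameter order matches the constructor call in the walkthrough (bucket_name, access_key_id, secret_key, region); the values are placeholders:

```cpp
// Sketch only: the credentials and bucket name are placeholders.
#include <aether/blob-store/backends.hh>

aether::aws_bucket bucket(
    "my-asset-bucket",   // bucket_name (placeholder)
    "AKIA-EXAMPLE-ID",   // access_key_id (placeholder)
    "example-secret",    // secret_key (placeholder)
    "eu-west-1");        // region (placeholder)
```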

Extending with new back-ends

The functionality can easily be extended to connect to other servers using other protocols: implement a new class with a fetch(std::string object_name) method and instantiate the blob_store class template with the new backend.

Walkthrough

To use the blob store, a class implementing the fetch function that retrieves the asset from the store is needed. Whenever a worker wants to access a blob, this class is used as the template parameter of the static method blob_store<backend>::get_blob(backend store, std::string path). The worker receives a future object that can be used to determine when the blob is ready.
Let us assume that we have a simulation that uses an asset stored in an Amazon S3 bucket, and that the asset needs to be accessed when an event from a client is received.
Whenever the event is received, the call to the blob store is made. The call returns a future that we save in our implementation of user_cell_state_interface, to be checked later on:
void user_cell_state_impl::receive_messages(const aether_state_type &aether_state, message_reader_type &reader) {
    using namespace aether::protocol::base;
    while (auto maybe_message = reader.get_next()) {
        // ...
        if (event.type == FETCH_ASSET) {
            // Do nothing if we are already waiting
            if (!user_data.future_request.valid()) {
                aether::aws_bucket s3_bucket(name, key_id, key, region);
                user_data.future_request = aether::blob_store<aether::aws_bucket>::get_blob(s3_bucket, "asset.bin");
            }
        }
        // ...
    }
}
In the cell tick, it is checked whether the blob is available for use:
void user_cell_state_impl::cell_tick(const aether_state_type &aether_state, float delta_time) {
    if (user_data.future_request.valid() &&
        user_data.future_request.wait_for(std::chrono::nanoseconds(0)) == std::future_status::ready) {
        aether::blob asset = user_data.future_request.get();
        if (asset.is_valid()) {
            // Use asset
        }
    }
    // ...
}
Once the blob goes out of scope it will unmap the shared memory and close it if it is the only process using it. If re-downloads by other workers are to be avoided, the blob must not go out of scope; it will then remain available in shared memory for other workers on the same machine.