Deploying an application to the cloud

Create and manage remote clusters for deploying an application


Make sure that you have logged into both the az CLI and hadean CLI by following the Prerequisites and Installing the SDK guides.

Creating and using a Hadean Platform cluster

There are four steps to getting an application running on a Hadean cluster:

1. Create a cluster

Use the hadean cluster -n <name> create command to create a new cluster with a desired name:

You need to specify the region to create your cluster in; see Configuring your application.

hadean cluster --name demo create --location <location>
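If you're unsure which value to pass as the location, the az CLI (which you've already logged into) can list the regions available to your subscription:

```shell
# List the Azure region names your subscription can deploy to;
# any of these can be used as the --location value above.
az account list-locations --query "[].name" -o tsv
```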




--name | -n

Specifies the name of the cluster to create.

--location

Sets the location to provision the cluster in (see Configuring your application).

2. Wait for your cluster to provision

The hadean cluster create command will provision a new cluster in the location specified. You can check the status of your cluster(s) with the hadean cluster list command.

You'll need to wait until your cluster reaches the Ready status; this typically takes less than 5 minutes. You can watch the list command to avoid typing it in repeatedly.

watch -n 5 hadean cluster list
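If watch isn't available on your system, a plain shell loop does the same job. This sketch assumes the status text Ready appears verbatim in the hadean cluster list output:

```shell
# Poll every 5 seconds until the cluster list reports Ready.
until hadean cluster list | grep -q "Ready"; do
  sleep 5
done
echo "cluster is Ready"
</imports>
```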

3. Deploy an application

Once logged in you can deploy an application to the cluster (in this case we'll use the hello demo) with the hadean cluster deploy command; pass the name of your cluster and the path to the application as arguments.

A cluster can only have one deployed application at a time, and whenever you modify an application you will need to deploy it again for those changes to reach the remote cluster.

hadean cluster -n demo deploy ~/hadean/examples/rust/hello/target/release/hello
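Whenever you change the example's source, rebuild it and deploy again. The cargo invocation below assumes the hello example is a standard Cargo project at the path used above:

```shell
# Rebuild the hello example in release mode, then push the fresh
# binary to the cluster, replacing the previously deployed one.
cargo build --release --manifest-path ~/hadean/examples/rust/hello/Cargo.toml
hadean cluster -n demo deploy ~/hadean/examples/rust/hello/target/release/hello
```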

4. Run the application remotely

With the application deployed you can now run it on the cluster.

To start the application use the hadean cluster run command; pass the name of your cluster and the path to the config file, see Configuring your application for details.

hadean cluster -n demo run ~/hadean/examples/configs/config.toml

Immediately after using the run command, you'll be asked to perform an Azure device login. This login is used to get credentials onto the cluster for dynamically creating virtual machines from the scheduler. If you need to change the subscription this uses, refer to Hints, Tips, and Troubleshooting.

This command will produce output from both the platform and your application. Here's a walkthrough of what you'll see when you run the application:

Uploading config...
Starting application...
Streaming application logs...

The configuration is uploaded to the cluster and the application is run. At this point, unless you passed the --detached flag, the run command will automatically start streaming application logs back from the cluster.
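For long-running applications you may not want the run command to block while streaming logs; the --detached flag mentioned above skips the streaming. The flag's position in the command is an assumption here; check hadean cluster run --help for the exact usage:

```shell
# Start the application and return immediately rather than
# streaming its logs back to the terminal.
hadean cluster -n demo run --detached ~/hadean/examples/configs/config.toml
```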

{"timestamp":"2021-09-13T10:32:30.349033+00:00","message":"Metrics will be stored in: /tmp/hadeanos, which isn't a tmpfs. Exporting metrics might cause performance problems.","module_path":"framework::metrics","file":"framework/src/","line":40,"level":"WARN","target":"framework::metrics","thread":"main","pid":".2754","thread_id":139834448954672,"mdc":{"process_id":"139834448954672","process_name":"hadean"}}
{"timestamp":"2021-09-13T10:32:30.352329+00:00","message":"dynamic backend - creating...","module_path":"scheduler_backend::types::terraform","file":"scheduler/backend/src/types/","line":282,"level":"INFO","target":"scheduler_backend::types::terraform","thread":"main","pid":".2764","thread_id":140303789859408,"mdc":{"process_id":"140303789859408","process_name":"dynamic-plugin"}}

At this point the dynamic backend needs to provision a machine for the application. To do this, it must first create a resource group and storage account to store terraform state within. This is typically quite fast, but it can sometimes fail. If this does fail, see our Hints, Tips, and Troubleshooting guide for more information.

{"timestamp":"2021-09-13T10:32:45.984990+00:00","message":"dynamic backend - creation successful","module_path":"scheduler_backend::types::terraform","file":"scheduler/backend/src/types/","line":323,"level":"INFO","target":"scheduler_backend::types::terraform","thread":"main","pid":".2764","thread_id":140303789859408,"mdc":{"process_id":"140303789859408","process_name":"dynamic-plugin"}}
{"timestamp":"2021-09-13T10:32:46.005499+00:00","message":"dynamic backend - getting manager","module_path":"scheduler_backend::types::terraform","file":"scheduler/backend/src/types/","line":340,"level":"INFO","target":"scheduler_backend::types::terraform","thread":"main","pid":".2764","thread_id":140303789859408,"mdc":{"process_id":"140303789859408","process_name":"dynamic-plugin"}}
{"timestamp":"2021-09-13T10:32:46.005564+00:00","message":"unsupported attribute in locus: platform","module_path":"scheduler_backend::types::terraform","file":"scheduler/backend/src/types/","line":145,"level":"WARN","target":"scheduler_backend::types::terraform","thread":"main","pid":".2764","thread_id":140303789859408,"mdc":{"process_id":"140303789859408","process_name":"dynamic-plugin"}}

Next, the dynamic backend must scale up machines to run your application. In Azure, machine scale-up typically takes less than 5 minutes. Note that if you run repeatedly and have standby_machines set to greater than zero, you won't see this start-up cost on every run. See Configuring your application for more information on how your configuration impacts scaling.

PLAY [Provision machine.] ******************************************************
TASK [Wait for SSH connection] *************************************************
ok: []
PLAY RECAP ********************************************************************* : ok=8 changed=7 unreachable=0 failed=0 skipped=3 rescued=0 ignored=0

Once a machine is available, ansible is used to provision the machine with the manager and user binary as described in Creating Hadean Platform applications.

{"timestamp":"2021-09-13T10:35:31.781429+00:00","message":"dynamic backend - getting manager was successful","module_path":"scheduler_backend::types::terraform","file":"scheduler/backend/src/types/","line":414,"level":"INFO","target":"scheduler_backend::types::terraform","thread":"main","pid":".2764","thread_id":140303789859408,"mdc":{"process_id":"140303789859408","process_name":"dynamic-plugin"}}
{"timestamp":"2021-09-13T10:35:31.781583+00:00","message":"dynamic backend - destroy task - started destruction request listener","module_path":"scheduler_backend::types::terraform","file":"scheduler/backend/src/types/","line":393,"level":"INFO","target":"scheduler_backend::types::terraform","thread":"main","pid":".2764","thread_id":140303789859408,"mdc":{"process_id":"140303789859408","process_name":"dynamic-plugin"}}
{"timestamp":"2021-09-13T10:35:31.788461+00:00","message":"Reserving resources for (before: cores=8,memory=16397408 KiB; after: cores=127/16,memory=16389216 KiB; delta: cores=1/16,memory=8 MiB)","module_path":"scheduler_server::manager","file":"scheduler/server/src/","line":180,"level":"INFO","target":"scheduler_server::manager","thread":"tokio-runtime-worker","pid":".2754","thread_id":139834431965984,"mdc":{"process_id":"139834431965984","process_name":"tokio-runtime-w"}}
{"timestamp":"2021-09-13T10:35:31.797131+00:00","message":"Switching logging to use the global logger on Manager Manager { pid:, free_resources: Resources { cores: Ratio { numer: 127, denom: 16 }, memory: Bytes { bytes: 16782557184 } }, reserved_resources: Resources { cores: Ratio { numer: 1, denom: 16 }, memory: Bytes { bytes: 8388608 } }, total_resources: Resources { cores: Ratio { numer: 8, denom: 1 }, memory: Bytes { bytes: 16790945792 } }, sender: … }","module_path":"scheduler_server","file":"scheduler/server/src/","line":895,"level":"INFO","target":"scheduler_server","thread":"main","pid":".2754","thread_id":139834448954672,"mdc":{"process_id":"139834448954672","process_name":"hadean"}}

The dynamic backend then connects to the manager and allocates resources for the application.

At this point you'll see the logs from your application. In this case we've run the hello example, so we can see the output from that program coming from the cluster:

entry
parent start
entry
child start
child received: parent
parent received: child
parent end
child end

{"timestamp":"2021-09-13T10:35:32.114437+00:00","message":"Switched logging to use local logging","module_path":"scheduler_server","file":"scheduler/server/src/","line":861,"level":"INFO","target":"scheduler_server","thread":"tokio-runtime-worker","pid":".2754","thread_id":139834444696352,"mdc":{"process_id":"139834444696352","process_name":"tokio-runtime-w"}}
{"timestamp":"2021-09-13T10:38:03.140601+00:00","message":"dynamic backend - destroy task - machine destruction was successful (identifier: 2)","module_path":"scheduler_backend::types::terraform","file":"scheduler/backend/src/types/","line":403,"level":"INFO","target":"scheduler_backend::types::terraform","thread":"main","pid":".2764","thread_id":140303789859408,"mdc":{"process_id":"140303789859408","process_name":"dynamic-plugin"}}
{"timestamp":"2021-09-13T10:38:03.140900+00:00","message":"all managers have been terminated","module_path":"scheduler_server","file":"scheduler/server/src/","line":1102,"level":"INFO","target":"scheduler_server","thread":"tokio-runtime-worker","pid":".2754","thread_id":139834444696352,"mdc":{"process_id":"139834444696352","process_name":"tokio-runtime-w"}}

After the application exits, if your application used more resources than specified by standby_machines in your application config, the scheduler will scale down the resources that are no longer in use. After this down-scaling, exactly as many VMs as your config specifies will remain.

After a run, if no new application is started within the time configured in machines_timeout, all standby machines are destroyed. This allows you to iterate rapidly without leaving resources around.
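Both knobs live in the application config. A hypothetical fragment follows; the key names come from this guide, but their exact placement in the file and value formats are assumptions, so check Configuring your application for the real schema:

```toml
# Keep one warm VM between runs so repeated runs skip machine scale-up.
standby_machines = 1
# Destroy standby machines after this much idle time
# (units here are an assumption).
machines_timeout = 600
```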

Clean up

Currently we do not provide an automated way to remove cluster resources deployed into the cloud. To do this you will need to identify the resource groups and delete them from the Azure portal. When a cluster is created, a resource group is created which is prefixed with hadean- and contains your cluster name. Additionally, for dynamic scaling, a second resource group is created with the prefix hadean-dynamic.
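As an alternative to the portal, the az CLI can do the same clean-up from a terminal. The JMESPath filter below matches the hadean- prefix described above; double-check the returned names before deleting anything:

```shell
# List resource groups whose names start with "hadean-".
az group list --query "[?starts_with(name, 'hadean-')].name" -o tsv

# Delete one of them; --yes skips the confirmation prompt.
az group delete --name <resource-group-name> --yes
```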

The scheduler will automatically scale down resources when applications are not running (to the minimum specified in standby_machines in your config), so you only need to clean up resources manually when you are finished with the scheduler itself.