Deploying an application
Create and manage remote clusters for deploying an application
Make sure that you have logged into both the `az` CLI and the `hadean` CLI by following the Prerequisites and Installing the SDK guides. If you are using self-managed, pre-provisioned infrastructure, ensure you have followed the steps outlined in Configuring pre-provisioned infrastructure.
There are four steps to getting an application running on a Hadean cluster:

1. Create a cluster.
2. Deploy your application to the cluster.
3. Run the application.
4. Clean up or destroy the cluster when you are finished.
To create a cluster, run:

```
hadean cluster -n <name> --cloud <provider> create
```

passing in the desired name of your new cluster as well as the cloud provider, which can be set to either `azure` or `aws`. Alternatively, you can provide the path and filename of a configuration file (this is required for self-managed clusters).
You can set defaults for your cloud provider and location for new clusters.
See Configuring your application.
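As a purely illustrative sketch, such defaults might be expressed along these lines; both key names below are assumptions, not the documented schema, so consult Configuring your application for the real settings:

```toml
# Hypothetical defaults for new clusters -- key names are assumptions;
# see "Configuring your application" for the actual settings.
cloud_provider = "azure"
location = "uksouth"
```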
Azure clusters need only specify a location to deploy to:

```
hadean cluster --name demo create --cloud azure --location <location>
```

For example, you can create a cluster in London using `hadean cluster --name demo create -l uksouth`. For other locations, read about configuring Azure location.

AWS clusters must specify a region, availability zone, and a domain name to use for the deployed cluster.
The region, availability zone, and domain name parameters all have aliases; check `hadean cluster create --help` to see these options.

```
hadean cluster --name demo create --cloud aws --region <region> --zone <zone> --domain <domain>
```
For example, you can create a cluster in London zone `a` using `hadean cluster --name demo create -l eu-west-2 --zone eu-west-2a --domain <domain>`. Most regions have three availability zones: `a`, `b`, and `c`. You can also set a default availability zone, just as you can for the region.

On AWS, using your own domain name is currently a requirement; you'll need to have this set up in Route 53. Learn more about this requirement.
Self-managed (pre-provisioned) clusters are defined in a TOML configuration file. Pass the path to this file as an argument to the `hadean cluster create` command:

```
hadean cluster create /home/me/configs/my-cluster.toml
```

If your TOML configuration contains all required information (see example), no additional command line arguments are required.
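As a rough sketch, such a file might look like the following. Every key name here is an illustrative assumption; the authoritative schema is in Configuring pre-provisioned infrastructure:

```toml
# Hypothetical pre-provisioned cluster definition. All key names are
# assumptions for illustration; see "Configuring pre-provisioned
# infrastructure" for the real schema.
name = "my-cluster"

[[machines]]
address = "10.0.0.4"   # reachable IP of a pre-provisioned machine
ssh_user = "hadean"    # account the SDK connects as when provisioning

[[machines]]
address = "10.0.0.5"
ssh_user = "hadean"
```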
| Flag | Effect |
| --- | --- |
| `--name`, `-n` | Specifies the name of the cluster to create |
| `--cloud-provider`, `--cloud`, `--provider` | Picks the cloud provider (`azure`/`aws`) to deploy the cluster to |
| `--location`, `--region`, `-l` | Sets the location to provision the cluster in |
| `--availability-zone`, `--zone` | Sets the availability zone to use for the cluster (currently only available on AWS) |
| `--domain-name`, `--domain` | Sets the registered domain to use for the cluster (currently only available on AWS) |
Once the cluster is created, licensed customers can see it, along with its status, in the Hadean Portal.
The `hadean cluster create` command will provision a new cluster in the specified location. You can check the status of your cluster(s) with the `hadean cluster list` command. You'll need to wait until your cluster reaches the `Ready` status, which typically takes less than 5 minutes. You can `watch` the `list` command to avoid typing it repeatedly:

```
watch -n 5 hadean cluster list
```
Once your cluster is ready, you can deploy an application to it (in this case we'll use the `hello` demo) using the `hadean cluster deploy` command; pass the name of your cluster and the path to the application as arguments. A cluster can only have one deployed application at a time, and any time you modify an application you will need to deploy it again to see those changes in the remote cluster.

```
hadean cluster -n demo deploy ~/hadean/examples/rust/hello/target/release/hello
```
For applications that need other files to be deployed, such as data files or libraries, you can use the `--directory` argument to specify a directory containing other files to deploy alongside your application. For example, if you have an application `robot` that depends on the libraries `./lib/arms.so` and `./lib/legs.so`, then you could upload the files in your `lib` directory:

```
hadean cluster -n demo deploy ./target/robot --directory ./lib
```
The files inside `./lib` will then end up in the working directory of your application.

With the application deployed, you can now run it on the cluster.
To start the application, use the `hadean cluster run` command; pass the name of your cluster and the path to the config file (see Configuring your application for details).

```
hadean cluster -n demo run ~/hadean/examples/configs/config.toml
```
This command produces output from both the platform and your application. Here's a walkthrough of what you'll see when you run the application:
```
Uploading config...
Starting application...
Streaming application logs...
```
Don't want timestamps and colours? You can use `--simple` to remove these.

The configuration is uploaded to the cluster and the application is run. At this point, unless you passed the `--detached` flag, the `run` command will automatically start streaming application logs back from the cluster.

```
Metrics will be stored in: /tmp/hadeanos-1001/hadeanos-metrics, which isn't a tmpfs. Exporting metrics might cause performance problems.
```
```
dynamic backend - creating...
```
At this point the dynamic backend needs to provision a machine for the application. To do this, it must first create a resource group and a storage account to store Terraform state in. This is typically quite fast, but it can sometimes fail; if it does, see our Hints, Tips, and Troubleshooting guide for more information.
```
dynamic backend - creation successful
dynamic backend - getting manager
unsupported attribute in locus: platform
```
Next, the dynamic backend must scale up machines to run your application. In Azure, machine scale-up typically happens in less than 5 minutes. Note that if you run repeatedly and have `standby_machines` set to greater than zero, you won't see this start-up cost on every run. See Configuring your application for more information on how your configuration impacts scaling.

```
PLAY [Provision machine.] ******************************************************
TASK [Wait for SSH connection] *************************************************
ok: [51.140.4.108]
(...)
PLAY RECAP *********************************************************************
51.140.4.108 : ok=10 changed=9 unreachable=0 failed=0 skipped=3 rescued=0 ignored=0
```
Once a machine is available, Ansible is used to provision the machine with the manager and user binary, as described in Creating Hadean Platform applications.
```
dynamic backend - getting manager was successful
Reserving resources for 20.117.71.210.20000.0 (before: cores=2,memory=4028332 KiB; after: cores=31/16,memory=4020140 KiB; delta: cores=1/16,memory=8 MiB)
Switching logging to use the global logger on Manager Manager { pid: 20.117.71.210.20000.0, free_resources: Resources { cores: Ratio { numer: 31, denom: 16 }, memory: Bytes { bytes: 4116623360 } }, reserved_resources: Resources { cores: Ratio { numer: 1, denom: 16 }, memory: Bytes { bytes: 8388608 } }, total_resources: Resources { cores: Ratio { numer: 2, denom: 1 }, memory: Bytes { bytes: 4125011968 } }, sender: … }
```
The dynamic backend then connects to the manager and allocates resources for the application.
```
entry
parent start
Reserving resources for 20.117.71.210.20000.0 (before: cores=31/16,memory=4020140 KiB; after: cores=15/8,memory=4011948 KiB; delta: cores=1/16,memory=8 MiB)
entry
child start
child received: parent
parent received: child
parent end
child end
Releasing resources for 20.117.71.210.20000.0 (before: cores=15/8,memory=4011948 KiB; after: cores=31/16,memory=4020140 KiB; delta: cores=1/16,memory=8 MiB)
```
The Hadean global logger preserves module names in the original JSON format of the logs. If you want to filter out the Hadean logging and just focus on your application's logs, you can use `--simple` and `--no-parse-logs` to get the raw log format. Then, you can use a tool like `jq` to process the JSON output and search for your module name (a sketch follows below).

At this point you'll see the logs from your application. In this case, we've run the `hello` example, so we can see the output from that program coming from the cluster.
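As a concrete sketch of that `jq` filtering, something along these lines would keep only your module's entries; the JSON field names (`module`, `message`) are assumptions about the log schema, so inspect the raw output first:

```
# Stream raw logs and keep only lines from the "hello" module.
# Field names "module" and "message" are assumptions, not a documented schema.
hadean cluster -n demo run ~/hadean/examples/configs/config.toml --simple --no-parse-logs \
  | jq -rR 'fromjson? | select(.module == "hello") | .message'
```

`fromjson?` silently skips any non-JSON lines that the platform interleaves with your application's output.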
```
Termination request received from scheduler
Killing all children
Wait for children to finish
dynamic backend - started destruction of machine (ip: 20.117.71.210)
dynamic backend - machine destruction was successful (identifier: 0, ip: 20.117.71.210)
all managers have been terminated
```
After the application exits, if it used more resources than are specified by `standby_machines` in your application config, the scheduler will scale down the resources that are not in use. After this down-scaling, exactly as many VMs as specified in your config will remain. After a `run`, the time configured in `machines_timeout` is used to destroy all standby machines if no new application is run within that time. This allows you to iterate rapidly without leaving resources around.

The scheduler automatically scales down resources when applications are not running (to the minimum specified by `standby_machines` in your config), so you only need to clean up resources manually when you are finished with the scheduler itself. You can configure how long the machine timeout is before machines are cleaned up, just as you can configure how many machines are held in standby.
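For instance, a configuration might keep one machine warm between runs and reclaim it after ten idle minutes. The two key names come from the text above, but their exact placement and value format in this sketch are assumptions; see Configuring your application for the real schema:

```toml
# Illustrative scaling settings -- key names appear in the text above,
# but their placement and value format here are assumptions.
standby_machines = 1       # VMs kept warm between runs
machines_timeout = "10m"   # destroy standby machines after 10 idle minutes
```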
If you have changed your configuration, you may find that standby machines of a different configuration continue to be used. In this situation, you can use the `clean-up` command to fully remove all dynamic resources and start fresh:

```
hadean cluster --name <name> clean-up
```
If an application is running, you'll have to stop it with the `stop` command first; you can always check with the `status` command if you aren't sure. The `clean-up` command does not remove the scheduler itself, just all the resources that the scheduler has created.

If you want to remove the whole cluster, dynamic resources and scheduler alike, use the `destroy` command:

```
hadean cluster --name <name> destroy
```
This will:
1. Ask you to confirm that you do in fact want to destroy the named cluster.
2. Stop any running applications (this can take up to 5 minutes if the application is currently scaling up or down).
3. Clean up all dynamic resources.
4. Remove the scheduler resources.
5. Remove the cluster from the cluster list.
This will remove all virtual machines, elastic IPs, storage, and everything else associated with a cluster. There is no way to restore a cluster.
For self-managed clusters, the `destroy` command will not remove the `hadean` user, the `nix` directory, or two temporary directories. Furthermore, it will not destroy any cloud resources that you may happen to be using. The machines will otherwise be reverted to their original state, ready for re-deployment if desired.

Currently, the following resources will remain in your cloud provider after `destroy` has been run:
1. The dynamic resources resource group. This will be named `hadean-dynamic-{name}-{key}-resource-group` in all providers, where `name` is the first 6 characters of the cluster name and `key` is the first 6 characters of the cluster key. You can see this key by using `hadean cluster list` before destroying.
2. The Azure Storage Account or AWS S3 bucket that contains the dynamic resources state. You can find this using the resource group above.
3. The Azure Storage Account or AWS S3 bucket that contains the cluster resources state.
   1. In AWS, the S3 bucket will be called `hadean-tf-{region}-{name}-{key}`, with the same naming rules as the resource group in (1).
   2. In Azure, the resource group containing the storage account will be called `hadean-tf`. The storage account name will be a hash. The storage container will have the name `hadean-tf-{region}-{name}-{key}`.
These remaining resources have minimal associated costs but should be cleaned up manually or via a scripted clean-up process. This will help you avoid hitting resource quotas, such as the maximum number of allocated S3 buckets.
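A scripted clean-up might remove the leftovers using the naming patterns above; this is a sketch only, so substitute your own cluster name, key, and region, and verify the names in your provider's console before deleting anything:

```
# Azure: delete the leftover dynamic resources resource group.
az group delete --name hadean-dynamic-<name>-<key>-resource-group --yes

# AWS: remove the leftover Terraform state bucket and its contents.
aws s3 rb s3://hadean-tf-<region>-<name>-<key> --force
```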