

Nomad Cluster Setup


Before continuing with the setup for Nomad and Consul:

  • Provision the DO infra with Terraform.
  • Run the Ansible playbook to bootstrap the node.

Setup Tailscale

Tailscale acts as a mesh layer between the server and worker nodes. Since the user's laptop/mobile also runs a Tailscale agent, it's easy to deploy to the cluster and browse the Nomad/Consul admin UIs as well.

sudo tailscale up

Install Nomad

Follow the instructions from the docs.

curl -fsSL https://apt.releases.hashicorp.com/gpg | sudo apt-key add -
sudo apt-add-repository "deb [arch=amd64] https://apt.releases.hashicorp.com $(lsb_release -cs) main"
sudo apt-get update && sudo apt-get install nomad
nomad -autocomplete-install
complete -C /usr/bin/nomad nomad
sudo mkdir --parents /opt/nomad

Setup Nomad

Follow the instructions from the docs.

Systemd unit

# /etc/systemd/system/nomad.service
[Unit]
Description=Nomad agent
After=network-online.target

[Service]
ExecReload=/bin/kill -HUP $MAINPID
ExecStart=/usr/bin/nomad agent -config /etc/nomad.d
Restart=on-failure

[Install]
WantedBy=multi-user.target


All the config files are stored in /etc/nomad.d.

data_dir   = "/opt/nomad/data"
bind_addr  = "{{ GetInterfaceIP \"tailscale0\" }}"
datacenter = "hydra"

server {
  enabled          = true
  bootstrap_expect = 1
}

client {
  enabled = true
  options = {
    "docker.volumes.enabled" = true
    "driver.raw_exec.enable" = "1"
  }
  host_network "tailscale" {
    cidr           = ""
    reserved_ports = "22"
  }
}

consul {
  address = ""
}

Since we changed bind_addr from the default to the Tailscale IP, we need to set the NOMAD_ADDR environment variable so the CLI can reach the remote endpoint:


export NOMAD_ADDR=
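
As a sketch with a hypothetical Tailscale address (Nomad's HTTP API listens on port 4646 by default):

```shell
# Hypothetical Tailscale IP -- substitute the server's tailscale0 address.
# Nomad's HTTP API defaults to port 4646.
export NOMAD_ADDR="http://100.101.102.103:4646"
```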

Before proceeding, make sure Nomad is running:

$ nomad server members
Name               Address         Port  Status  Leader  Protocol  Build  Datacenter  Region
<hostname>.global  <tailscale-ip>  4648  alive   true    2         1.0.3  hydra       global

Install Consul

sudo apt-get update && sudo apt-get install consul
consul -autocomplete-install
complete -C /usr/bin/consul consul
sudo mkdir --parents /opt/consul

Setup Consul

Generate Keys

consul keygen

The output of the above command is used as the encrypt value in the config file.
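
consul keygen just emits a random base64-encoded key; newer Consul versions use 32 bytes (44 base64 characters). If Consul isn't installed on the machine you're working from, openssl can generate a key of the same shape:

```shell
# Equivalent of `consul keygen` on newer Consul versions:
# 32 random bytes, base64 encoded (44 characters).
openssl rand -base64 32
```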


All the config is stored in /etc/consul.d.

datacenter = "hydra"
data_dir = "/opt/consul/data"
encrypt = "<TOKEN>"
server = true
bootstrap_expect = 1
client_addr = ""
bind_addr = ""
ui = true
connect {
  enabled = true
}

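
With Consul running and Nomad's consul stanza pointed at it, jobs can register services and health checks automatically. A hypothetical jobspec fragment (service name, port label, and check path are illustrative):

```hcl
# hypothetical fragment inside a jobspec task/group
service {
  name = "web"
  port = "http"

  # Consul runs this HTTP health check against the allocation
  check {
    type     = "http"
    path     = "/health"
    interval = "10s"
    timeout  = "2s"
  }
}
```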


Currently using TF_VAR_* environment variables to load secrets from the host and run terraform apply. Terraform then templates out the Nomad jobspec and submits the job to the server. This is okay in this context because:

  • The Nomad API server is listening only on the Tailscale IP, which means only trusted, authenticated agents have access to the API. This is important because Nomad shows the plain-text version of the jobspec in both the UI and the CLI, so all the secret keys could be exposed if a malicious actor gained access to the API server (even read-only access).

  • The env keys are mostly one-time API tokens or DB passwords. They don't need to be "watched" and reloaded often, so running an entire Vault server just for passing these keys adds a fair amount of extra complexity.
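
The flow above can be sketched with Terraform's templatefile() function and the official Nomad provider; the template path, variable names, and address placeholder here are all hypothetical:

```hcl
# sketch, assuming a jobspec template at templates/app.nomad.tpl;
# variable and resource names are hypothetical
variable "db_password" {
  type      = string
  sensitive = true # read from TF_VAR_db_password on the host
}

provider "nomad" {
  # reaches the API over Tailscale, same as the CLI
  address = "http://<tailscale-ip>:4646"
}

resource "nomad_job" "app" {
  jobspec = templatefile("${path.module}/templates/app.nomad.tpl", {
    db_password = var.db_password
  })
}
```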

However, just to experiment and make the setup a bit more secure, we can consider running a single-node Vault server:

  • Set up Vault to store secrets
    • Vault init/unseal steps.
    • Add policies and a role in Vault for a namespace.
    • Configure Nomad to use Vault.
    • Add an API token in Vault.
    • Pass the CF token to the Caddyfile, retrieving it from Vault with Consul Template.
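
If that route is taken, the last two steps would roughly look like the fragments below: a vault stanza in the Nomad agent config, and a template block in the jobspec that renders the Cloudflare token for Caddy. The Vault address, role, secret path, and key names are all assumptions:

```hcl
# Nomad agent config: enable Vault integration
# (address and role name are assumptions)
vault {
  enabled          = true
  address          = "http://127.0.0.1:8200"
  create_from_role = "nomad-cluster"
}

# Jobspec fragment: render the CF token from Vault into env vars
# (secret path and key are hypothetical)
template {
  data = <<EOF
CF_API_TOKEN={{ with secret "secret/data/caddy" }}{{ .Data.data.cf_token }}{{ end }}
EOF
  destination = "secrets/cloudflare.env"
  env         = true
}
```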