Additional Location Configuration Options

Inheritance and Named Locations

Named locations can be defined for commonly used groups of properties, with the syntax brooklyn.location.named.your-group-name. followed by the relevant properties. These can be accessed at runtime using the syntax named:your-group-name as the deployment location.

Some illustrative examples using named locations and showing the syntax and properties above are as follows:

# Production pool of machines for my application (deploy to named:prod1)
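A byon (bring-your-own-nodes) pool matching this comment might look like the following sketch; the hosts, user, and key file are illustrative:

```properties
# named:prod1 -- a fixed pool of machines, accessed over SSH
brooklyn.location.named.prod1=byon:(hosts="10.9.1.1,10.9.1.2")
brooklyn.location.named.prod1.user=produser1
brooklyn.location.named.prod1.privateKeyFile=~/.ssh/produser_id_rsa
```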

# AWS using my company's credentials and image standard, then labelling images so others know they're mine (e.g. tagging with owner="Bob Johnson")

brooklyn.location.named.AWS\ Virginia\ Large\ Centos = jclouds:aws-ec2
brooklyn.location.named.AWS\ Virginia\ Large\ Centos.region = us-east-1
brooklyn.location.named.AWS\ Virginia\ Large\ Centos.imageId=us-east-1/ami-7d7bfc14
brooklyn.location.named.AWS\ Virginia\ Large\ Centos.user=root
brooklyn.location.named.AWS\ Virginia\ Large\ Centos.minRam=4096
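A blueprint can then deploy to this named location; a minimal sketch, where the entity type is just an illustrative placeholder:

```yaml
location: "named:AWS Virginia Large Centos"
services:
- type: org.apache.brooklyn.entity.software.base.EmptySoftwareProcess
```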

Named locations can refer to other named locations using named:xxx as their value. These will inherit the configuration and can override selected keys. Properties set in the namespace of the provider (e.g. brooklyn.location.jclouds.aws-ec2.KEY=VALUE) will be inherited by everything which extends AWS. Sub-prefix strings are also inherited up to brooklyn.location.*, except that they are filtered for single-word and other known keys (so that provider-scoped properties are excluded when looking at sub-prefix keys). Where configuration is defined at multiple levels, the value defined in the most specific context takes precedence.

This is straightforward and powerful to use, although it sounds more complicated than it is! The examples below should make it clear. You could use the following to install a public key on all provisioned machines, an additional public key on all AWS machines, and no extra public key in prod1:

brooklyn.location.extraSshPublicKeyUrls="[ \"http://me.com/public_key\", \"http://me.com/another_public_key\" ]"
brooklyn.location.jclouds.aws-ec2.extraSshPublicKeyUrls=http://me.com/aws_public_key
brooklyn.location.named.prod1.extraSshPublicKeyUrls=""

And in the example below, a config key is repeatedly overridden. Deploying location: named:my-extended-aws will result in an aws-ec2 machine in us-west-1 (by inheritance) with VAL6 for KEY:
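A brooklyn.properties along the following lines (the key and value names are purely illustrative) produces that result, with each more specific definition overriding the previous one:

```properties
brooklyn.location.KEY=VAL1
brooklyn.location.jclouds.KEY=VAL2
brooklyn.location.jclouds.aws-ec2.KEY=VAL3
brooklyn.location.jclouds.aws-ec2@us-west-1.KEY=VAL4
brooklyn.location.named.my-aws=jclouds:aws-ec2:us-west-1
brooklyn.location.named.my-aws.KEY=VAL5
brooklyn.location.named.my-extended-aws=named:my-aws
brooklyn.location.named.my-extended-aws.KEY=VAL6
```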


SSH Keys

SSH keys are one of the simplest and most secure ways to access remote servers. They consist of two parts:

  • A private key (e.g. id_rsa) which is known only to one party or group

  • A public key (e.g. id_rsa.pub) which can be given to anyone and everyone, and which can be used to confirm that a party holds the corresponding private key (or has signed a communication with it)

In this way, someone – such as you – can hold a private key and install the corresponding public key on a remote machine (in an authorized_keys file) for secure automated access. Commands such as ssh (and AMP) can log in without revealing the private key to the remote machine; the remote machine can confirm it is you accessing it (provided no one else has the private key); and no one snooping on the network can decrypt any of the traffic.

Creating an SSH Key

If you don’t have an SSH key, create one with:

$ ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa

Localhost Setup

If you want to deploy to localhost, ensure that you have a public and private key, and that your key is authorized for ssh access:

# _Appends_ to authorized_keys. Other keys are unaffected.
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

Now verify your setup by running: ssh localhost echo hello world

If your setup is correct, you should see hello world printed back at you.

On the first connection, you may see a message similar to this:

The authenticity of host 'localhost (::1)' can't be established.
RSA key fingerprint is 7b:e3:8e:c6:5b:2a:05:a1:7c:8a:cf:d1:6a:83:c2:ad.
Are you sure you want to continue connecting (yes/no)?

Simply answer ‘yes’ and then repeat the command again.

If this isn’t the case, see below.

Potential Problems

  • MacOS user? In addition to the above, enable “Remote Login” in “System Preferences > Sharing”.

  • Got a passphrase? Set brooklyn.location.localhost.privateKeyPassphrase in brooklyn.properties. If you’re not sure, or you don’t know what a passphrase is, you can test this by executing ssh-keygen -y: if it does not ask for a passphrase, then your key has no passphrase. If your key does have a passphrase, you can remove it by running ssh-keygen -p.

  • Check that you have an ~/.ssh/id_rsa file (or id_dsa) and a corresponding public key with a .pub extension; if not, create one as described above

  • ~/.ssh/ or files in that directory may have the wrong permissions: they should be visible only to the user (apart from public keys), on both the source machine and the target machine. You can verify this with ls -l ~/.ssh/: lines should start with -rw------- or -r-------- (or drwx------ for directories). If they do not, execute chmod go-rwx ~/.ssh ~/.ssh/*.

  • Sometimes machines are configured with different sets of supported SSL/TLS versions and ciphers; if command-line ssh and scp work but AMP/Java does not, check the versions and ciphers enabled in Java and on both servers.

  • Missing entropy: creating and using SSH keys requires randomness to be available on the server, usually from /dev/random.
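On Linux you can check how much entropy is available; this kernel interface is assumed to be present, as it is on most distributions:

```shell
# Print the entropy currently available to the kernel's random pool;
# persistently low values (e.g. under a few hundred) can stall SSH key operations
cat /proc/sys/kernel/random/entropy_avail
```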

Attaching Additional Volumes

AMP supports attaching additional volumes for AWS EC2, Azure ARM, vCloud Director and OpenStack. Note that it does not yet support OpenStack Mitaka.

Attaching additional volumes is possible both at provisioning time using a customizer and once the application is running, through the use of an effector.

Note: AMP users should design blueprints and extra-disk scaling policies with the disk limitations of each cloud and OS in mind; for example, AWS EC2 limits the number of volumes that can be attached to an instance.

Attach volumes during provisioning

To attach a disk at provisioning time, add the io.brooklyn.blockstore.brooklyn-blockstore:brooklyn.location.blockstore.NewVolumeCustomizer location customizer to the location configuration.

A YAML example is shown below:

    - $brooklyn:object:
        type: io.brooklyn.blockstore.brooklyn-blockstore:brooklyn.location.blockstore.NewVolumeCustomizer
        object.fields:
          volumes:
          - blockDevice:
              sizeInGb: 3
              # NOTE: g can be any unused device suffix, e.g. `/dev/xvdg`
              deviceSuffix: 'g'
              deleteOnTermination: true
              tags:
                brooklyn: br-example-val-test-1
            filesystem:
              mountPoint: /mount/brooklyn/diskG
              filesystemType: ext4
          - blockDevice:
              sizeInGb: 3
              # NOTE: h can be any unused device suffix, e.g. `/dev/xvdh`
              deviceSuffix: 'h'
              deleteOnTermination: true
              tags:
                brooklyn: br-example-val-test-1
            filesystem:
              mountPoint: /mount/brooklyn/diskH
              filesystemType: ext4

You should specify a valid and available device suffix (that is, no existing device should already be using it). For AWS, the resulting device name will be of the form /dev/xvd*.

Important notice: KVM is the default hypervisor for OpenStack, which means that the defined device name will be of the form /dev/vd*.

A device of the size specified by sizeInGb will be created. The OS will then format it with the filesystem specified in filesystemType and mount it at the path specified by mountPoint.

If deleteOnTermination is set to true, this device will be deleted on termination of the VM.

Attach volumes during runtime via an effector

To add an effector for creating and attaching an extra disk, add this initializer to the YAML:

    brooklyn.initializers:
    - type: io.brooklyn.blockstore.brooklyn-blockstore:brooklyn.location.blockstore.effectors.ExtraHddBodyEffector

Use a map in the form of the JSON below as a parameter for the effector.

    {
        "blockDevice": {
            "sizeInGb": 16,
            "deviceSuffix": "h",
            "deleteOnTermination": true,
            "tags": {
                "brooklyn": "br-example-test-1"
            }
        },
        "filesystem": {
            "mountPoint": "/mount/brooklyn/h",
            "filesystemType": "ext4"
        }
    }
Specialized Locations

Some additional location types are supported for specialized situations:

Single Host

The spec host, taking a string argument (the address) or a map (host, user, password, etc.), provides a convenient syntax when specifying a single host. For example:

    location: host:(192.168.0.1)
    services:
    - type: org.apache.brooklyn.entity.webapp.jboss.JBoss7Server

Or, in brooklyn.properties, set brooklyn.location.named.host1=host:(192.168.0.1).

The Multi Location

The spec multi allows multiple locations, specified as targets, to be combined and treated as one location.

Sequential Consumption

In its simplest form, this will use the first target location where possible, and will then switch to the second and subsequent locations when there are no machines available.

In the example below, nodes are provisioned into the bring-your-own-nodes location first; once that location has no machines available, subsequent nodes are provisioned into the AWS us-east-1 region.

    location:
      multi:
        targets:
        - byon:(hosts=192.168.0.1)
        - jclouds:aws-ec2:us-east-1
    services:
    - type: org.apache.brooklyn.entity.group.DynamicCluster
      brooklyn.config:
        cluster.initial.size: 3
        dynamiccluster.memberspec:
          $brooklyn:entitySpec:
            type: org.apache.brooklyn.entity.machine.MachineEntity

Round-Robin Consumption and Availability Zones for Clustered Applications

A DynamicCluster can be configured to cycle through its deployment targets round-robin when provided with a location that supports the AvailabilityZoneExtension – the multi location supports this extension.

The dynamiccluster.zone.enable configuration option on DynamicCluster tells it to query the given location for AvailabilityZoneExtension support. If the location supports it, then the cluster will query for the list of availability zones (which in this case is simply the list of targets) and deploy to them round-robin.

In the example below, the cluster will request VMs round-robin across three different locations (in this case, locations that were already added to the catalog or defined in brooklyn.properties):

    location:
      multi:
        targets:
        - my-location-1
        - my-location-2
        - my-location-3
    services:
    - type: org.apache.brooklyn.entity.group.DynamicCluster
      brooklyn.config:
        dynamiccluster.zone.enable: true
        cluster.initial.size: 3
        dynamiccluster.memberspec:
          $brooklyn:entitySpec:
            type: org.apache.brooklyn.entity.machine.MachineEntity

Of course, clusters can also be deployed round-robin to real availability zones offered by cloud providers, as long as their locations support AvailabilityZoneExtension. Currently, only AWS EC2 locations support this feature.

In the example below, the cluster will request VMs round-robin across the availability zones provided by AWS EC2 in the “us-east-1” region.

    location: jclouds:aws-ec2:us-east-1
    services:
    - type: org.apache.brooklyn.entity.group.DynamicCluster
      brooklyn.config:
        dynamiccluster.zone.enable: true
        cluster.initial.size: 3
        dynamiccluster.memberspec:
          $brooklyn:entitySpec:
            type: org.apache.brooklyn.entity.machine.MachineEntity

For more information about AWS EC2 availability zones, see the AWS EC2 documentation.

Custom alternatives to round-robin are also possible by supplying a custom zone placement strategy via the corresponding configuration option on DynamicCluster.

The Server Pool

The ServerPool entity type allows defining an entity which becomes available as a location.
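For example, a pool of machines can be provisioned up front and then re-used by other applications deployed to it. A minimal sketch follows; the type name is org.apache.brooklyn.entity.machine.pool.ServerPool in brooklyn-server, but the config key names shown are assumptions and may vary by version:

```yaml
services:
- type: org.apache.brooklyn.entity.machine.pool.ServerPool
  brooklyn.config:
    # how many machines to provision into the pool up front (key name assumed)
    pool.initial.size: 2
location: jclouds:aws-ec2:us-east-1
```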