
Additional Topics

This does not attempt to be a complete reference for the TOSCA spec, but instead covers the main points of TOSCA and focuses on areas unique to the Cloudsoft implementation. These include extensions and additional features, places where the spec is ambiguous and the Cloudsoft implementation has necessarily made a specific interpretation, and places where the Cloudsoft implementation does not yet attempt to interpret the spec. The latter two are generally reserved for lesser-used functionality, whereas the former – places where the Cloudsoft implementation offers extensions – covers functionality that has been required but which is not yet formalized in the spec.

Properties, Attributes, and Capabilities

Properties are used by template authors to provide input values to TOSCA entities which indicate their “desired state” when they are instantiated. The value of a property can be retrieved using the get_property function within TOSCA Service Templates.

Attributes are used by template authors to expose the “actual state” of some property of a TOSCA entity after it has been deployed and instantiated (as set by the TOSCA orchestrator). In Cloudsoft AMP, TOSCA attributes are mapped to entity sensors.

Capabilities are used by template authors to define a named, typed set of data that can be associated with a Node Type or Node Template to describe a transparent capability or feature of the software component the node describes.

Refining Property Definitions

Note: This section summarizes details from the official OASIS TOSCA specification that are important when designing property definitions.

TOSCA allows derived types to refine properties defined in base types. A property definition in a derived type is considered a refinement when a property with the same name is already defined in one of the base types for that type.

Property definition refinements use parameter definition grammar rather than property definition grammar. Specifically, this means the following:

  • The type keyname is optional. If no type is specified, the property refinement reuses the type of the property it refines. If a type is specified, the type must be the same as the type of the refined property or it must derive from the type of the refined property.
  • Property definition refinements support the value keyname that specifies a fixed type-compatible value to assign to the property. These value assignments are considered final, meaning that it is not valid to change the property value later (e.g. using further refinements).

Property refinement definitions can refine properties defined in one of the base types by doing one or more of the following:

  • Assigning a new (compatible) type as per the rules outlined above
  • Assigning a (final) fixed value
  • Adding a default value
  • Changing a default value
  • Adding constraints
  • Turning an optional property into a required property

No other refinements are allowed.
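
For example, a derived node type might refine a property inherited from its base type by adding a default and a constraint (a minimal sketch; the type and property names are illustrative):

node_types:
  my.base.Server:
    derived_from: tosca.nodes.Root
    properties:
      port:
        type: integer
        required: false

  my.derived.Server:
    derived_from: my.base.Server
    properties:
      port:
        # type may be omitted; it is reused from the refined property
        required: true                  # turning optional into required is allowed
        default: 8080                   # adding a default value
        constraints:
          - in_range: [ 1024, 65535 ]   # adding constraints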

Types

Properties and attributes are typed in TOSCA, with the usual standard types – string, integer, float, double, boolean – all supported, along with timestamp, map, and list, and any type defined as a data_type or registered with Cloudsoft AMP as a bean.

The entry_schema keyword from TOSCA is not supported, but equivalent functionality can be achieved where needed by giving generic type arguments, e.g. map<string,boolean>.
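
As a minimal sketch (the type and property names are illustrative), instead of declaring type: map with an entry_schema, the element type is given as a generic argument:

node_types:
  my.example.Node:
    derived_from: tosca.nodes.Root
    properties:
      feature_flags:
        # equivalent to "type: map" with "entry_schema: boolean" in plain TOSCA
        type: map<string,boolean>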

Constraints

Property definitions may indicate that a property is required. The following constraints are defined in the spec:

  • equal
  • greater_than
  • greater_or_equal
  • less_than
  • less_or_equal
  • in_range
  • valid_values
  • length
  • min_length
  • max_length
  • pattern
  • schema

All but schema are supported in AMP.
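
For example (a minimal sketch; the type and property names are illustrative), a required integer property restricted to a range and a string property matched against a pattern:

node_types:
  my.example.Service:
    derived_from: tosca.nodes.Root
    properties:
      port:
        type: integer
        required: true
        constraints:
          - in_range: [ 1024, 65535 ]
      name:
        type: string
        constraints:
          - min_length: 3
          - pattern: '[a-z][a-z0-9-]*'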

TOSCA and Cloudsoft AMP DSL

The following table lists the TOSCA DSL functions and how they are supported in the Cloudsoft implementation.

TOSCA DSL function | TOSCA Section | AMP DSL / Implementation | Comment
concat | 4.3.1 | $brooklyn:formatString |
join | 4.3.2 | brooklyn.tosca.ToscaDslFunctions.join |
token | 4.3.3 | brooklyn.tosca.ToscaDslFunctions.token |
get_input | 4.4.1 | $brooklyn:scopeRoot().config |
get_property | 4.4.2 | $brooklyn:config on SELF; $brooklyn:component on a named node; $brooklyn:object on TARGET, SOURCE or HOST |
get_attribute | 4.5.1 | $brooklyn:attributeWhenReady on SELF; $brooklyn:component on a named node; $brooklyn:object on TARGET, SOURCE or HOST |
get_operation_output | 4.6.1 | $brooklyn:attributeWhenReady on SELF; $brooklyn:component on a named node; $brooklyn:object on TARGET or SOURCE |
get_node_of_type | 4.7.1 | UNSUPPORTED | no use case identified, but could be supported in the future
get_artifact | 4.8.1 | UNSUPPORTED | no use case identified, but could be supported in the future

AMP DSL is used in TOSCA blueprints when AMP components such as initializers/enrichers or policies are used.
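
As a hedged illustration of the mapping above (the node, input, and script names are hypothetical), a TOSCA get_input in a node template, with the AMP DSL form it corresponds to shown as a comment:

topology_template:
  inputs:
    deployment:
      type: string
  node_templates:
    a_node:
      type: tosca.nodes.SoftwareComponent
      interfaces:
        Standard:
          start:
            inputs:
              # TOSCA function form
              deployment_name: { get_input: deployment }
              # equivalent AMP DSL form (per the table above), usable where AMP
              # components are being configured:
              #   deployment_name: $brooklyn:scopeRoot().config("deployment")
            implementation: classpath://scripts/start.sh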

Extra features

Native TOSCA does not define a way to attach an icon (logo) to a type or template. Cloudsoft AMP supports declaring an icon element for any data, node, interface, relationship, or policy type, e.g.:

tosca_definitions_version: tosca_simple_yaml_1_3

metadata:
  template_author: Cloudsoft
  template_name: testNodeWithIcon
  template_version: 0.1.0-SNAPSHOT 

node_types:
  my-webserver-type:
    derived_from: tosca.nodes.WebServer
    metadata:
      icon: classpath://node-logo.png

To keep the implementation fully compliant with the TOSCA specification, the icon and template_icon are declared as metadata. Cloudsoft AMP supports declaring template_icon in the metadata of a service template. The icon attached to a service template is applied to all types declared in the document, unless the type declares its own icon using the icon element.
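
For example, a template_icon declared on a service template (a minimal sketch, reusing the metadata style of the example above; the template name and icon path are illustrative):

tosca_definitions_version: tosca_simple_yaml_1_3

metadata:
  template_author: Cloudsoft
  template_name: testTemplateWithIcon
  template_version: 0.1.0-SNAPSHOT
  template_icon: classpath://template-logo.png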

The accepted value for any icon configuration element is a URL pointing to an image of a reasonable size. Images are displayed in the Cloudsoft AMP catalog resized to 80x44px.

In-Life Management: Attributes, Enrichers, Policies and Workflow

Checking Whether a Service is Running

The Cloudsoft AMP implementation of TOSCA strongly recommends that service monitoring be configured, as an extension of TOSCA. This is done with the check_running operation on the brooklyn.tosca.interfaces.node.lifecycle.CheckRunning interface, which will be periodically invoked and should check whether the service is running.

If the service is not running, this operation should either fail (e.g. for a script, by returning a non-zero exit code) or return false as the value for the IS_RUNNING output.

If defined, this operation will be invoked at a frequency determined by the period input (a time interval, defaulting to 1m) whenever the service is expected to be running. It will also be invoked with periodic back-off after a successful startup, before the service is confirmed as started.

Adding Additional Monitoring Sensors

The check_running operation can set other output values, for example to provide monitoring information. These can be linked to first-class TOSCA attributes (AMP sensors) in the usual TOSCA way, as shown in the example below.

Example

This example illustrates the use of check_running, reporting false, then true, then false again, and also publishing sensors.

We have a main start script, sleep_bg.sh:

chmod +x ./sleep_bg-body.sh
nohup ./sleep_bg-body.sh > sleep.stdout 2> sleep.stderr &
echo $! > /tmp/sleep.${id}.pid

A script for the nohupped, backgrounded process:

sleep_bg-body.sh:

# exit when any command fails
set -e

echo pre-sleep ${sleep_duration_0}

sleep ${sleep_duration_0}

date > /tmp/sleep.${id}.big_sleep_active
echo did pre sleep and now doing big sleep

sleep ${sleep_duration_1}

rm /tmp/sleep.${id}.big_sleep_active

And our check-running script, sleep_bg-check-running.sh:

if ps -p $(cat /tmp/sleep.${id}.pid) && ls /tmp/sleep.${id}.big_sleep_active ; then

  echo our big sleep is running, returning a demo URL
  export MAIN_URI=http://$(hostname)/${id}/live
  export IS_RUNNING=true

else

  echo either not running yet or it ended

  # non-zero exit code could also be used, often gives a simpler script!
  export IS_RUNNING=false

fi

With all scripts placed in scripts/echos, and the following TOSCA YAML at the root:

tosca_definitions_version: tosca_simple_yaml_1_3

metadata:
  template_name: demoCheckRunningFalseThenTrueThenFalse
  template_version: 1.0.0-SNAPSHOT

topology_template:
  node_templates:
    a_node:
      type: tosca.nodes.SoftwareComponent
      attributes:
        main.uri: { get_attribute: [ SELF, brooklyn.tosca.interfaces.node.lifecycle.CheckRunning.check_running.MAIN_URI ] }
      interfaces:
        Standard:
          start:
            inputs:
              sleep_duration_0: 2
              sleep_duration_1: 10
              id: { get_attribute: [ SELF, entity.id ] }
            implementation:
              primary: classpath://scripts/echos/sleep_bg.sh
              dependencies:
              - classpath://scripts/echos/sleep_bg-body.sh
        CheckRunning:
          inputs:
            period: 1s
            id: { get_attribute: [ SELF, entity.id ] }
          operations:
            check_running: classpath://scripts/echos/sleep_bg-check-running.sh

The service will not report start as completed until after sleep_duration_0 (two seconds), at which point the process indicated in the pid file is active and the big_sleep_active lock file has been written. The service is then reported as running, with the main.uri attribute sensor published. After a further 10 seconds, check_running will detect that the process indicated in the pid file is no longer active (and the lock file has been removed) and will set the service to failed.

This can all be observed in the GUI.

This teaching example deliberately shows some complex capabilities. In many cases a PID file is naturally created by the start process, in which case there is no need for a separate background script, and the check-running script can simply call ps -p $(cat /path/to/pid_file) and rely on its exit code. In other cases there is no need for a PID at all, if a simple command such as curl http://localhost/is_live or service foo status is sufficient to test liveness.

Policies and Workflow

The TOSCA Specification mentions that custom policy types can be declared under the policy_types node (Section 3.7.12.2). A custom policy type is linked to a set of triggers. A trigger defines the event, condition (Section 3.6.25), and action that is used to “trigger” the policy it is associated with (Section 3.6.22). An action is a list of activity definitions, and an activity definition is a workflow plus inputs (Section 3.6.23). A workflow is a list of explicit calls to operations declared in interfaces of a target node.

Supporting custom policy types requires triggers, conditions, activities, workflows, and probably notifications, which are required for interface operation results (Section 3.7.5.2).

These concepts are not all defined thoroughly in the present version of the TOSCA spec, and so support within this software is correspondingly formative. Policies can be defined and added to the catalog, but their behavior must be implemented out-of-band in a Java class, such as a Cloudsoft AMP policy.

TOSCA Workflow and TOSCA Normative Policies

The TOSCA Specification declares the following normative policy types:

  • tosca.policies.Root - The TOSCA Policy Type all other TOSCA Policy Types derive from.
  • tosca.policies.Placement - The TOSCA Policy Type definition that is used to govern placement of TOSCA nodes or groups of nodes.
  • tosca.policies.Scaling - The TOSCA Policy Type definition that is used to govern scaling of TOSCA nodes or groups of nodes.
  • tosca.policies.Update - The TOSCA Policy Type definition that is used to govern update of TOSCA nodes or groups of nodes.
  • tosca.policies.Performance - The TOSCA Policy Type definition that is used to declare performance requirements for TOSCA nodes or groups of nodes.

The precise behavior of these policies is not defined in the spec, so at present the majority of these types are abstract. At present, only tosca.policies.Scaling is implemented, as described below.

Cloudsoft AMP and Cloudsoft AMP Policies

Existing AMP policies (and other AMP components) can be used in TOSCA blueprints by attaching them to a group of nodes using the add_brooklyn_types group.

groups:
- add_brooklyn_types:
    members: [ tomcat-cluster ]
    type: brooklyn.tosca.groups.initializer
    properties:
      brooklyn.policies:
      - type: org.apache.brooklyn.policy.autoscaling.AutoScalerPolicy  # AMP policy type
        brooklyn.config:  # AMP configuration syntax
          metric: cpu.perNode
          metricLowerBound: 0.5
          metricUpperBound: 0.8
          resizeUpStabilizationDelay: 2s
          resizeDownStabilizationDelay: 1m
          minPoolSize: 2
          maxPoolSize: 5

The org.apache.brooklyn.policy.autoscaling.AutoScalerPolicy is an AMP policy type that provides scaling behaviour, similar to what a custom implementation of tosca.policies.Scaling should provide. In the Cloudsoft AMP implementation, this type is wrapped in the tosca.policies.Scaling type and can be used as a TOSCA policy in TOSCA blueprints. The list of AMP configuration items becomes a set of TOSCA property assignments, as depicted in the next YAML sample:

topology_template:
  node_templates:
    a_node:
      type: tosca.entity.DynamicCluster
      properties:
        cluster.initial.size: 2
        dynamiccluster.memberspec:
          $brooklyn:entitySpec:  # we need this so we can inject the requirement in every tomcat-instance
            name: Apache Tomcat Cluster
            type: io.cloudsoft.smart-tomcat9-node
            brooklyn.config:
              tomcat_deployment: { get_input: deployment }

  policies:
  - AutoScaling:
      type: tosca.policies.Scaling
      targets: [ a_node ]
      description: Sample Scaling Policy
      properties:
        metric: webapp.reqs.perSec.windowed.perNode
        metricLowerBound: 0.1
        metricUpperBound: 10
        minPoolSize: 1
        maxPoolSize: 4
        resizeUpStabilizationDelay: 10s
        resizeDownStabilizationDelay: 1m

The scaling behaviour provided by org.apache.brooklyn.policy.autoscaling.AutoScalerPolicy (and its alias tosca.policies.Scaling) is implemented in Java, and both are registered as AMP types. They apply to resizable entities of type tosca.entity.DynamicCluster or types derived from it; this type is an alias for the AMP org.apache.brooklyn.entity.group.DynamicCluster type. The two AMP configuration items cluster.initial.size and dynamiccluster.memberspec are treated as TOSCA properties, but the value assigned to dynamiccluster.memberspec must be an AMP $brooklyn:entitySpec with AMP configuration, even if the type of the entity being scaled is a TOSCA template (as the previous code snippet shows).

Locations

The target location environment for deployments can be specified in several ways:

  • Add a tosca.default.location to the catalog: this is the simplest way, and applies to all TOSCA nodes which do not have a target environment already defined. It is appropriate when all or most deployments go to the same account, and in these cases there is no need to put any target environment (cloud or VMWare) connection details on individual blueprints. Where needed TOSCA capabilities can be used to further extend or customize the location.

  • Where a richer location model is required, such as to use something different to tosca.default.location for some blueprints, or to deploy different nodes / topologies to different sites as part of a single deployment, this can be done in one of a few ways:

    • Use a CAMP blueprint to wrap and refer to the TOSCA types, e.g. { services: [ { type: the-topology-template } ], location: my-other-environment } (or putting the location on different types); this is expanded as YAML after this list

    • Specify a group called add_brooklyn_types of type brooklyn.tosca.groups.initializer and properties including a location containing the Cloudsoft AMP location definition or registered type name; for example, if foo is defined as a location, one could write:

      groups:
      - add_brooklyn_types:
          members: [ templateName, specific_node ]
          type: brooklyn.tosca.groups.initializer
          properties:
            location: foo
    • A final way, if CAMP and TOSCA are being mixed, is that a node (e.g. a Software Component) that is not hosted_on a Compute node will use the locations of its ancestors. Some nodes, such as Software Components, require a MachineLocation with an IP address (distinct in the platform from a MachineProvisioningLocation such as AWS, VMware, or localhost); this can be created by including an ancestor node such as EmptySoftwareProcess, which creates a MachineLocation from a MachineProvisioningLocation.
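
The CAMP wrapper mentioned in the first option above, expanded as YAML (the-topology-template is assumed to be already in the catalog and my-other-environment to be a location known to AMP):

services:
- type: the-topology-template
location: my-other-environment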

Packaging: CSARs, BOMs, and External Artifacts

This section covers how CSARs are handled, how they relate to BOMs, the use of OSGi and dependency resolution, and the use of internal versus external artifacts (file syntax versus classpath URLs versus external URLs, and artifacts resolved by the platform versus resolved on the target box).

The platform distinguishes between adding items to catalog and deploying items in the catalog. In non-TOSCA, the platform supports:

  • deploying a CAMP YAML blueprint (YAML with a services block)
  • adding a BOM YAML file to the catalog (YAML with a brooklyn.catalog block)
  • adding a BOM bundle file to the catalog (ZIP with a catalog.bom, optionally also OSGi metadata)

The TOSCA spec is silent on the questions of how servers should maintain CSARs and how users should be able to trigger deployments. We have chosen currently to support, for TOSCA:

  • adding a CSAR to the catalog:
    • all types are added, and the topology template is also added; as per the CSAR spec, the archive must contain either a TOSCA-Metadata/TOSCA.meta file (see the sketch after this list) or a single *.yaml or *.yml file in the root; as with adding a BOM bundle, it can include OSGi metadata, and the types in the CSAR can be referred to subsequently from blueprints (either TOSCA or CAMP); the topology template, if present, can be referred to using the name of the bundle
  • deploying a TOSCA YAML file which contains only a subset of TOSCA which is deployable without editing the catalog, see Deployable TOSCA
  • deploying a CAMP YAML referring to the topology template or to node types (e.g. services: [ { type: the-topology-template } ])
  • selecting a topology template or node type in the GUI and deploying it
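
A sketch of the TOSCA-Metadata/TOSCA.meta file referred to in the first option above, following the keynames defined in the OASIS CSAR specification (the author and entry file name are illustrative):

TOSCA-Meta-File-Version: 1.0
CSAR-Version: 1.1
Created-By: Cloudsoft
Entry-Definitions: my-service-template.yaml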

Thus, currently for any TOSCA which relies on types or artifacts, it is necessary to follow a two-step process:

  • add the CSAR to the catalog
  • then deploy, either using the GUI or by deploying a simple TOSCA topology template or CAMP YAML as above, referring to the topology or node types from the CSAR

It is under consideration whether in future we support direct deployment of more sophisticated TOSCA resources:

  • allowing deploying a topology template with node types, where the node types are implicitly added to a catalog; this might not be very useful because there is no scope to refer to co-bundled artifacts, as there is no ZIP, but it might be useful if types do not need artifacts, or they are pulled from external URLs or other bundles. (not currently supported)
  • allowing deploying a CSAR, with the idea that the CSAR is implicitly added to a catalog as part of the deployment, and then the topology template within it is deployed; this is possible, but creates a risk that people accidentally deploy items when they intend only to add to catalog (not currently supported)

Deployable TOSCA

TOSCA service templates can be deployed in the Composer UI or using br deploy from the CLI. These service templates must be a subset of TOSCA which does not declare any new types (because the “deploy” action does not currently edit the catalog), i.e. the only substantive definition is the topology_template. In addition, the description (single or multi-line) is expected in one place only, either at the root level or under topology_template. Having a description in both places is treated as a duplication error.
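
A minimal sketch of such a deployable service template (my-node-type is assumed to have been added to the catalog previously, e.g. via a CSAR):

tosca_definitions_version: tosca_simple_yaml_1_3

description: A minimal deployable service template

topology_template:
  node_templates:
    my_node:
      type: my-node-type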

Requirement Matching, Capabilities, and Relationships

When a node ID is referenced, such as the host for a software component, this implementation of TOSCA looks first at all node IDs in the topology template that defines the requesting node. If not found there, it will look within topology templates referenced as node templates (effectively Cloudsoft AMP children at runtime) in that topology, and in the parent topology template if the requesting node is in a topology being used within another topology. It will then continue that pattern, looking at runtime parent/child topology templates until it has looked at all nodes. In this way, the same ID can be safely used in different topology templates with clear, deterministic behaviour at runtime. Note that if multiple nodes matching the same ID are found at the same “distance” (in terms of topology template parent/child traversals) when a unique one is required (such as for a node requirement), an error is raised.
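
For example (a minimal sketch; the node IDs are illustrative), the host requirement below is resolved first against node IDs in the same topology template, and only then against parent or child topology templates as described above:

topology_template:
  node_templates:
    db_host:
      type: tosca.nodes.Compute
    database:
      type: tosca.nodes.SoftwareComponent
      requirements:
      - host: db_host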

TOSCA does not say what happens if a requirement from a supertype is re-declared in a type. We interpret this as intersecting the requirements, that is, all must apply. The initial version does not support abstract requirements (Section 2.9 of the OASIS TOSCA spec).

Note that when a topology template is used as a node type, it is not supported to declare requirements on that node type.

Depends On, Happens-Before Semantics, and Concurrency

The TOSCA spec makes some non-normative observations about the behaviour of DependsOn and HostedOn in sections 7.2.2 and 7.2.3, outside the definitions of those relationships (and in the unrelated section on workflow, using non-normative language i.e. “will” instead of “shall”), specifically claiming that if A depends on B, A will not be created until after B is started, and that if A and B are hosted on the same node C, there will be no concurrency among any of the (on-box) operations.

These comments obstruct many common use cases and encourage inefficient and brittle design: in design patterns such as 12-factor and micro-services, dependents should retry if their dependency is not available rather than rely on the dependency starting first, and in many cases, especially with micro-services, items are inter-dependent, so it is not possible to define a canonical start order. If the observations in section 7.2.2 were implemented, it would be impossible to deploy interdependent microservices.

For these reasons, those observations are not implemented. DependsOn by itself does not imply any “happens-before” relationship, and scripts can run concurrently.

This allows the most efficient deployment, as everything runs in parallel, unless there is a reason and an explicit instruction in the blueprint for that not to be the case.

This implementation of TOSCA also provides very simple, natural ways for parallelism to be blocked, based on promise theory. A call to get_attribute, when it needs to be resolved, will block until that attribute is “ready”: non-null, non-blank, non-zero, and non-false.

Where a “happens-before” relationship is required, it is straightforward to implement and should be done explicitly on the relationship.

In most cases, if B needs to happen before A, it is because B publishes an attribute, call it token, which A will consume. The TOSCA design pattern we recommend is that any operation of A which depends on B should declare as one of its inputs something with the value get_attribute: [ B, token ]. This will cause that operation on A to block until token is published by B.
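
A minimal sketch of this pattern (the node, attribute, and script names are illustrative): node_a's start operation declares an input taken from node_b's token attribute, so it will not run until node_b has published a ready value for token:

topology_template:
  node_templates:
    node_b:
      type: tosca.nodes.SoftwareComponent
      # ... its start implementation publishes the "token" attribute ...
    node_a:
      type: tosca.nodes.SoftwareComponent
      interfaces:
        Standard:
          start:
            inputs:
              # blocks until node_b's "token" attribute is non-null, non-blank,
              # non-zero, and non-false
              b_token: { get_attribute: [ node_b, token ] }
            implementation: classpath://scripts/start_a.sh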

In rare cases where A needs B to start, but does not need any information from B, it can use the above attribute-inputs pattern with the brooklyn.tosca.artifacts.Implementation.NoOp artifact.

Where scripts must not be run in parallel, such as multiple nodes running apt as part of an installation, standard scripting techniques for mutual exclusion should be used. GNU parallel (via its sem alias) is recommended for establishing semaphores; for a more lightweight method on Linux, use flock or an atomic filesystem operation such as mkdir.

Inspector Organization

The Inspector UI module allows switching the tree structure view from the default management parent/child relationship (indicated by nested topology templates or grouping, e.g. with a Collection, or using brooklyn.children) to prefer TOSCA relationships such as host_for/hosted_on.

When in a relationship-oriented view such as host_for/hosted_on, those nodes that have such a relationship (based on a corresponding design-time requirement) are shown with that orientation, e.g. software components will be grouped underneath their compute host, irrespective of their management parent/child hierarchy.

Entities which are not a target of such a relationship – because these need to be placed in the tree – will remain under their default management parent (or nearest such ancestor in those special cases where the parent is already moved to be underneath the entity in question due to the selected relationship orientation).

Miscellaneous Interpretation and Variation from Spec

TOSCA Node Lifecycle to AMP Entity Lifecycle Mapping

AMP Entity Status values are declared in the enum org.apache.brooklyn.core.entity.lifecycle.Lifecycle.

TOSCA Node Status | AMP Entity Status | TOSCA Interface Standard Operation
initial | | create
creating | |
created | CREATED | configure
configuring | |
configured | STOPPED | start
starting | STARTING |
started | RUNNING | stop
stopping | STOPPING |
configured | STOPPED | delete
deleting | |
deleted (abstract) | DESTROYED |
error | ON_FIRE |

TOSCA Attribute, Property, and Parameter Definitions

Keyname | Attribute Def. (ToscaAttributeDefinitionOrAssignment) | Property Def. (ToscaPropertyDefinitionOrAssignment) | Parameter Def. (ToscaParameterDefinitionOrAssignment)
type | x | x | x
description | x | x | x
required | | x | x
default | x | x | x
value | | | x
status | x | x | x
constraints | | x | x
key_schema | x | x | x
entry_schema | x | x | x
metadata | | x |