
Testing

AMP performance, load and longevity testing breaks down into three categories:

  • Testing specific functionality
  • Testing particular application types, as part of the AMP “acceptance tests”
  • Testing in the wild

Testing specific functionality

There is a range of performance tests that exercise specific pieces of functionality, for example (a sample invocation is sketched after the list):

  • SshjToolPerformanceTest and SshMachineLocationPerformanceTest for SSH
  • EntityPersistencePerformanceTest for persistence / HA
  • EntityPerformanceTest for effector invocations, setting sensors, and event subscriptions
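
These test classes live in the AMP/Brooklyn source tree and are typically run through the Maven build. Assuming a standard Surefire setup, a single class can be selected with -Dtest; the module name below, and any test-group flags your build requires, are assumptions that depend on your checkout:

# Sketch: run one performance test class on its own via Maven Surefire.
# The module (-pl core) and any required test-group properties are
# assumptions; adjust them to match your source layout and build config.
mvn test -Dtest=EntityPerformanceTest -pl core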

AMP acceptance tests

Examples of AMP acceptance tests include:

  • brooklyn/qa/load/LoadTest.java, which deploys a large number of applications.
  • brooklyn.qa.longevity.webcluster.WebClusterApp, which deploys a cluster that cycles through a sinusoidal load pattern, causing repeated scaling out and scaling back.

Testing in the wild

The easiest way to script AMP tests with strong assertions is to run AMP in-memory within the test, which gives full access to the AMP ManagementContext. However, making the tests more realistic requires running AMP in the same mode as one would in production (i.e. a stand-alone process with the same persistence / HA features configured).

The REST API makes it simple to drive AMP and to test a range of scenarios. The following curl commands are a useful starting point.

To deploy an application:

curl \
    --insecure \
    --user admin:password \
    -H "Content-Type: application/json" \
    --data-binary @myblueprint.yaml \
    https://$AMP_HOST:8443/v1/applications
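
Once a blueprint has been submitted, the same API can be used to confirm that the application deployed successfully. As a sketch (the exact fields in the JSON response may vary between versions), the application summary, including its status (e.g. STARTING or RUNNING), can be fetched with:

# Check on a previously deployed application; APP_ID is the application id
# returned in the response to the deploy call above.
curl \
    --insecure \
    --user admin:password \
    https://$AMP_HOST:8443/v1/applications/${APP_ID}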

To invoke an effector:

curl \
    --insecure \
    --user admin:password \
    -H "Content-Type: application/json" \
    -d '{ "desiredSize": 3 }' \
    https://$AMP_HOST:8443/v1/applications/${APP_ID}/entities/${ENTITY_ID}/effectors/resize\?timeout=0
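
Assertions can then be made by reading sensor values over the same API. For example, to read an entity's service.isUp sensor (substitute any sensor name that the entity publishes):

# Read a single sensor value from an entity; service.isUp is the standard
# "is the service up" sensor, but any published sensor name can be used.
curl \
    --insecure \
    --user admin:password \
    https://$AMP_HOST:8443/v1/applications/${APP_ID}/entities/${ENTITY_ID}/sensors/service.isUp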

Load testing considerations

The load on AMP is very dependent on the type of application being deployed and managed. Considerations include:

  • The way entities are deployed.
    • Executing bash commands over SSH is expensive: establishing connections is CPU-intensive, a thread is required for each SSH command being executed, and it is network-intensive.
    • Using a tool such as Chef or Puppet can offload much of this work: AMP only needs to connect to the Chef Client or Puppet Master (i.e. far fewer SSH connections are required).
  • The way entities are monitored.
    • AMP polling entities directly does not scale to 100s or 1000s of entities. In particular, ssh polling is very expensive when done for many entities.
    • Using a monitor agent (e.g. collectd, ganglia or nagios), reporting back to a central server, and having AMP query that server directly is more efficient.
  • The sensors being reported.
    • Setting a sensor triggers event matching to notify the required listeners.
    • Enrichers respond to particular events to emit additional sensors (e.g. to publish the average load on a cluster, based on the members of that cluster).
  • The policies being used.
    • Each policy subscribes to changes in a range of sensors, and responds as necessary.
    • A policy that performs no actions will still consume a small amount of CPU; if actions are performed (e.g. auto-scaling a cluster, or replacing a failed entity), then the amount of work can be comparable to deploying a (subset of the) application.
  • Logging.
    • AMP by default logs to the console at INFO, and to the log file at DEBUG. If the amount of logging is too great, then selectively decreasing the log level for particular classes/packages could greatly reduce the load.

Recommended JVM settings include:

  • -verbose:gc, to quickly determine if memory usage is an issue.
  • -XX:MaxPermSize=256m. The default value can result in OutOfMemoryErrors when there are many applications.
  • -Xms and -Xmx set to the same value, with at least 512m. The amount of memory affects the number of entities that can be managed by a single AMP node. If many entities are required, then at least 1024m is recommended.
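
One way to apply these settings is to export them before launching AMP. This is only a sketch: the JAVA_OPTS variable and the launch command below are assumptions that may differ between installs, so check your install's launch script.

# Sketch: pass the recommended JVM options via JAVA_OPTS (assumed to be
# honoured by the launch script) and then start AMP.
export JAVA_OPTS="-Xms1024m -Xmx1024m -XX:MaxPermSize=256m -verbose:gc"
./bin/amp launch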

Testing particular scenarios

The desired configuration is not always available for testing. For example, there may be insufficient resources to run hundreds of JBoss app-servers, or one may be experimenting with possible configurations, such as the use of an external monitoring tool that is not yet available.

In such cases, aspects of the behaviour can be simulated for performance and load testing purposes. The AMP usage/qa project includes entities for this: a simulated JBoss 7 app-server, MySQL and Nginx, plus a three-tier app composed of them. Each entity has configuration options for:

  • simulateEntity:
    • if true, no underlying entity will be started. Instead, a sleep 100000 job will be run and monitored.
  • simulateExternalMonitoring:
    • if true, disables the default monitoring mechanism. Instead, a function will periodically execute to set the entity’s sensors (as though the values had been obtained from the external monitoring tool).
    • if false, then:
      • if simulateEntity is true, comparable work will be done (e.g. a command of the same size will be executed over SSH, or a comparable number of HTTP GET requests will be made).
      • if simulateEntity is false, normal monitoring will be done.
  • skipSshOnStart:
    • if true (and if simulateEntity is true), then no SSH commands will be executed at deploy-time. This is useful for speeding up load testing, to reach the desired number of entities more quickly.

Example YAML to deploy one of these entities:

location: localhost
services:
- type: brooklyn.qa.load.SimulatedJBoss7ServerImpl
  brooklyn.config:
    simulateEntity: true
    simulateExternalMonitoring: true
    skipSshOnStart: false