Cloud Provisioning

The default mode for Taurus is local provisioning, which means all the tools are started on the local machine. This does not scale well, so there is a way to delegate the actual tool execution to the BlazeMeter cloud. Even free accounts can execute cloud tests, according to BlazeMeter's free-tier plan.

This is done by setting the provisioning to cloud:

provisioning: cloud

To access the BlazeMeter cloud, Taurus needs an API key and secret set in the cloud module settings:

modules:
  cloud:
    token: '******:**************'  # API id and API secret divided by :
    timeout: 10s  # BlazeMeter API client timeout
    browser-open: start  # auto-open browser on test start/end/both/none
    check-interval: 5s  # interval which Taurus uses to query test status from BlazeMeter
    public-report: false  # make test report public, disabled by default
    send-report-email: false  # send report email once test is finished, disabled by default
    request-logging-limit: 10240 # use this to dump more of request/response data into logs, for debugging

All folders among your resource files (scripts) are packed automatically before upload and unpacked on the cloud workers by the unpacker service.
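
For example, a minimal sketch (the test-data directory name is just an illustration) where a whole folder of data files is listed as a resource; Taurus packs it before upload and the unpacker service restores it on the workers:

execution:
- executor: jmeter
  scenario: with-data
  files:
  - test-data/  # whole folder, packed automatically

scenarios:
  with-data:
    script: testplan.jmx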

Never put the API key into your main config files!

Never post it to support forums!

It is recommended to place the token setting in your personal per-user config ~/.bzt-rc to prevent it from being logged and collected in artifacts.
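
For example, your ~/.bzt-rc (which is just another Taurus YAML config) might contain only the credentials:

modules:
  cloud:
    token: '******:**************'  # keep the real value out of project configs and artifacts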

Load Settings for Cloud

By default, cloud-provisioned execution reads the concurrency and throughput options as usual. There is also a notation that lets you configure values for local and cloud provisioning at once, so you don't have to edit the load settings when switching provisioning back and forth while debugging a test:

execution:
- scenario: my-scen
  concurrency:
    local: 5
    cloud: 5000
  throughput:
    local: 100
    cloud: 10000

Then you can simply switch provisioning and the load settings are picked up accordingly. For example, running bzt config.yml -o provisioning=cloud is an easy way to enable cloud provisioning. The short form bzt config.yml -cloud is also available, and you can switch back in a similar way: bzt config.yml -local. The concurrency and throughput are always the total values for the execution, no matter how many locations are involved.

Modules Settings

There are some rules for sending the test config to cloud machines. Taurus cleans up the configuration and removes unused modules and user-specific classes (since such classes are not present in the cloud by default). To suppress this behaviour, use the send-to-blazemeter parameter:

execution:
- executor: jmeter
  iterations: 10
  files:
  - my_own.py
  scenario:
    requests:
    - http://blazedemo.com
modules:
  jmeter:
    send-to-blazemeter: true    # keep class value for jmeter module
    class: my_own.JMeterClass
    path: /path/to/local/jmeter.sh
  unused_mod:                   # will be removed
    class: some.class.Name

Specifying Account, Workspace and Project

Accounts, Workspaces and Projects are BlazeMeter features that define access rights and support shared access to tests and other BlazeMeter features. You can learn more about them in the BlazeMeter docs, e.g. the article Workspaces and Projects.

With Taurus, you can specify either names or numeric identifiers for all of these entities.

Example:

execution:
- scenario: my-scenario

scenarios:
  my-scenario:
    requests:
    - http://blazedemo.com/
    
modules:
  cloud:
    account: My Account  # numeric identifier can also be specified
    workspace: Shared Workspace
    project: Taurus tests
    test: Example test

If the test can be resolved (the account, workspace, project and test all exist), it is updated with the provided Taurus configuration and then launched.

If the cloud test doesn't exist, it is created and then launched.

By default, Taurus uses the user's default account and default workspace, so it is not required to specify account, workspace and project every time.

There's also a useful shortcut that lets you specify all parameters at once by using a link to an existing BlazeMeter test:

execution:
- scenario: my-scenario

scenarios:
  my-scenario:
    requests:
    - http://blazedemo.com/
    
modules:
  cloud:
    test: https://a.blazemeter.com/app/#/accounts/99/workspaces/999/projects/9999/tests/99999

Launching Existing Cloud Tests

Taurus provides a way to launch pre-configured cloud tests by their name or id. This is the default behaviour of cloud provisioning when the execution section is empty.

This configuration will launch the cloud test named "Taurus Test" and wait for it to finish:

provisioning: cloud

modules:
  cloud:
    test: Taurus Test
    launch-existing-test: true  # you can omit this field if your `execution` section is empty

Just like in the previous section, you can specify the account, workspace and other fields, and use numeric identifiers instead of names.
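
For instance, a sketch using numeric identifiers (the numbers here are placeholders):

modules:
  cloud:
    account: 99       # numeric account id
    workspace: 999    # numeric workspace id
    project: 9999     # numeric project id
    test: 99999       # numeric test id
    launch-existing-test: true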

It's also possible to use a link to the test to launch it:

provisioning: cloud

modules:
  cloud:
    test: https://a.blazemeter.com/app/#/accounts/99/workspaces/999/projects/9999/tests/99999

It also makes it possible to launch a cloud test with a single command:

$ bzt -cloud -o modules.cloud.test=https://a.blazemeter.com/app/#/accounts/97961/workspaces/89846/projects/132315/tests/5817816

Detach Mode

You can start a cloud test and stop Taurus without waiting for the test results by using the detach attribute:

modules:
  cloud:
    token: '******'    
    detach: true  # launch cloud test and immediately exit    

or use the corresponding alias: bzt config.yml -cloud -detach

Configuring Cloud Locations

Cloud locations are specified per execution. Specifying multiple cloud locations for an execution means that its concurrency and/or throughput will be distributed among the locations. locations is a map of location IDs to their relative weights. The relative weight determines what share of the concurrency and throughput goes to the corresponding location.

execution:
- locations:
    us-west-1: 1
    us-east-1: 2

If no locations are specified for a cloud execution, the default value from modules.cloud.default-location is used with a weight of 1. To get the list of all available locations, run bzt -locations -o modules.cloud.token=<API Key>. The list of available locations is taken from the User API Call and may be specific to the particular user. See the locations block and the id option for each location.
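
For example, a minimal sketch setting a fallback location (the location id is illustrative) used whenever an execution defines no locations of its own:

modules:
  cloud:
    default-location: us-east-1  # used with weight 1 when an execution has no locations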

By default, Taurus calculates the machine count for each location based on the limits obtained from the User API Call. To switch to a manual machine count, set the locations-weighted option to false. In that case, the exact numbers of machines specified for each location are used:

execution:
- locations:
    us-west-1: 2
    us-east-1: 7
  locations-weighted: false

A full example that combines per-provisioning load settings with multiple locations:

execution:
- scenario: dummy 
  concurrency:
    local: 5
    cloud: 1000
  ramp-up: 10s
  hold-for: 5m
  locations: 
    eu-central-1: 1
    eu-west-1: 1
    us-east-1: 1
    us-west-1: 1
    us-west-2: 1
provisioning: cloud

scenarios:
  dummy:
    script: Dummy.jmx    

Reporting Settings

You can set the test name, the report name and the project for the cloud test through the cloud module settings:

modules:
  cloud:
    test: Taurus Test  # test name
    report-name: full report    # name of report
    project: Project Name  # project name or id

Deleting Old Test Files

By default, Taurus deletes all test files from the cloud before uploading any new ones. You can disable this behaviour by setting the delete-test-files module setting to false.

Example:

modules:
  cloud:
    delete-test-files: false

Specifying Additional Resource Files

If you need some additional files as part of your test and Taurus fails to detect them automatically, you can attach them to the execution using the files section:

execution:
- locations:
    us-east-1: 1
  scenario: test_sample    
  files:
  - path/to/file1.csv
  - path/to/file2.csv
  
scenarios:
  test_sample:
    script: testplan.jmx  

Specifying Where to Run for Shellexec Service

In the shellexec service, the run-at parameter sets where commands will be executed. Surprisingly, local means the cloud worker will execute them, while cloud means the controlling CLI will execute them.
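
For example, a minimal sketch (the echo commands are placeholders) that runs one command on the cloud workers and another on the machine that launched the test:

services:
- module: shellexec
  run-at: local   # per the note above: executed on the cloud workers
  prepare:
  - echo "preparing on a cloud worker"
- module: shellexec
  run-at: cloud   # executed by the controlling CLI on your machine
  prepare:
  - echo "preparing on the controlling machine"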

Using Separate Pass/Fail Criteria for Cloud

If you want to use separate pass/fail criteria for cloud execution and local execution, use the run-at parameter to distinguish them. For example:

reporting:
- module: passfail
  run-at: cloud
  criteria:
  - avg-rt>100ms
  
- module: passfail
  run-at: local
  criteria:
  - avg-rt>5s

Installing Python Package Dependencies

If you need to install additional Python modules via pip, you can do it by using the shellexec service and running the pip install <package> command at the prepare stage:

services:
- module: shellexec
  prepare: 
  - pip install cryptography  # 'cryptography' is a library from PyPI

You can even upload your proprietary Python eggs to the workers by specifying them in the files option and then installing them with shellexec:

execution:
- executor: locust
  scenario: locust-scen
  files:
  - my-modules.zip
        
services:
- module: shellexec
  prepare: 
  - unzip my-modules.zip
  - pip install -r requirements.txt

Enabling Dedicated IPs Feature

If your BlazeMeter account allows you to use the "Dedicated IPs" feature, you can enable it in the config file:

modules:
  blazemeter:
    dedicated-ips: true

Worker Number Info

There is a way to obtain the worker index, which can be used to coordinate distributed test data. For example, you can make sure that different workers use different user logins or different parts of a CSV file. To achieve that, Taurus provides environment variables to the shellexec modules and properties to the jmeter module (see the sketch after this list):

  • TAURUS_INDEX_ALL - absolute worker index in test
  • TAURUS_INDEX_EXECUTION - per-execution worker index
  • TAURUS_INDEX_LOCATION - per-location worker index
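
A minimal sketch, assuming per-worker CSV files named users_<index>.csv were uploaded as resource files (the file names are hypothetical); each worker picks its own part using the shellexec environment variables:

services:
- module: shellexec
  prepare:
  - echo "worker $TAURUS_INDEX_ALL (execution $TAURUS_INDEX_EXECUTION, location $TAURUS_INDEX_LOCATION)"
  - cp users_${TAURUS_INDEX_ALL}.csv users.csv  # hypothetical per-worker data file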

Cloud Execution Notes

Please note that with cloud provisioning the actual Taurus execution is done on remote machines, so:

  • the test will not run if your account does not have enough engines allowed
  • if you don't specify any duration for the test with the hold-for and ramp-up options, a default duration limit will be used
  • you should not use the -report command-line option or the blazemeter reporter; all reports are collected automatically by BlazeMeter
  • only the following config sections are passed to the cloud: scenarios, execution, services
  • the shellexec module has artifacts-dir set as its default-cwd
  • cloud workers execute Taurus in an isolated virtualenv