# Pass/Fail Criteria

Every load test you run should have a pass/fail status. Taurus can set this status based on runtime criteria, through the special `passfail` module. Another useful feature of the pass/fail module is its auto-stop functionality, which interrupts failed tests automatically, sparing time and resources.

Pass/fail criteria are specified as an array of criteria, set through the `reporting` item in the config:

```yaml
reporting:
- module: passfail
  criteria:
  - avg-rt of IndexPage>150ms for 10s, stop as failed
  - fail of CheckoutPage>50% for 10s, stop as failed
```

The above example uses the short form for criteria; its general format is the following:

```
subject of [label] condition threshold [logic] [timeframe], action as status
```

Explanation:
Parameters in [square brackets] are optional, and any non-required parameter may be omitted; the minimal form is `subject condition threshold`. Possible subjects are:

- `avg-rt` - average response time
- `avg-lt` - average latency
- `avg-ct` - average connect time
- `stdev-rt` - standard deviation of response time
- `p...` - response time percentile, e.g. `p90` or `p99.9`
- `hits` - number of responses
- `succ` - successful responses; percentage thresholds are supported
- `fail` - failed responses; percentage thresholds are supported
- `rc...` - count of matching response codes, e.g. `rc500` or `rc4xx`
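As an illustrative sketch of combining several subjects in one criteria list (the label `IndexPage` and all thresholds here are made-up values, not defaults):

```yaml
reporting:
- module: passfail
  criteria:
  - p90>250ms                        # 90th percentile response time
  - rc500>10 for 5s, stop as failed  # too many HTTP 500 responses sustained for 5s
  - succ of IndexPage<100%           # any failed IndexPage sample marks the test failed
```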
## Timeframe Logic

If no timeframe logic is specified, the pass/fail rule is processed at the very end of the test, against total aggregate data. To apply checks in the middle of the test, use one of the possible timeframe logic options:

- `for` - the latest values are checked, and the criterion triggers when the condition holds continuously for the whole timeframe
- `within` and `over` - the values are aggregated (sum or average, depending on the subject) across the timeframe interval before the condition is checked
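For example, the two kinds of timeframe logic can be contrasted like this (thresholds are arbitrary, chosen only for illustration):

```yaml
reporting:
- module: passfail
  criteria:
  - avg-rt>250ms for 30s, stop as failed  # latest average response time stays high for a full 30s
  - fail>5% within 1m                     # failure rate aggregated across a 1-minute window
```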
## Custom Messages for Criteria

By default, Taurus uses the criteria string itself when presenting messages. If you want to change the message, you can either use the dictionary form of `criteria`, with the message as the key, or set the message through the full form of the criteria:
```yaml
reporting:
- module: passfail
  criteria:
    My Message: avg-rt of Sample Label>150ms for 10s, stop as failed
    Sample Label fails too much: fail of Sample Label>50% for 10s, stop as failed
```

## Full Form of the Criteria

The full form is derived by Taurus automatically from the short form. Unlike the short form, it can contain colons (such as in URLs) in the label. You can specify it manually like this:

```yaml
reporting:
- module: passfail
  criteria:
  - subject: avg-rt  # required
    label: 'Sample Label'  # optional, default is ''
    condition: '>'  # required
    threshold: 150ms  # required
    logic: for  # optional, logic to aggregate values within timeframe.
                # Default 'for' means take latest,
                # 'within' and 'over' means take sum/avg of all values within interval
    timeframe: 10s  # optional, default is none
    stop: true  # optional, default is true. false for nonstop testing until the end
    fail: true  # optional, default is true
```

## Per-Executor Criteria

Failure criteria added under the `reporting` section are global: when you use multiple `execution` items, aggregate data from all of them is used to trigger the criteria. To attach criteria only to certain execution items or scenarios, use the same criteria syntax there:

```yaml
execution:
- scenario: s1
  criteria:  # this is a per-execution criteria list
  - fail>0%
  - avg-rt>1s
- scenario: s2

scenarios:
  s1:
    script: file1.jmx
  s2:
    script: file2.jmx
    criteria:  # this is a per-scenario criteria list
    - fail>50%
    - p90>250ms

reporting:
- module: passfail  # this is to enable the passfail module
```

## Monitoring-Based Failure Criteria

You can use special failure criteria based on monitoring data from the target servers. Most parameters are the same as in other failure criteria, but you have to use the full form of the metric specification, because you need to specify the metric class `bzt.modules.monitoring.MonitoringCriteria`. For example, to stop the test once the local CPU is exhausted, use:

```yaml
reporting:
- module: passfail
  criteria:
  - class: bzt.modules.monitoring.MonitoringCriteria
    subject: local/cpu
    condition: '>'
    threshold: 90
    timeframe: 5s
```
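A monitoring-based criterion only has data to check when a monitoring service is collecting it. As a hedged sketch (the exact `services` layout below is an assumption based on the Taurus local monitoring module, not something stated on this page), the CPU criterion could be paired with local metric collection like this:

```yaml
services:
- module: monitoring
  local:
  - metrics:
    - cpu  # collect local CPU usage so the local/cpu criterion has data to evaluate

reporting:
- module: passfail
  criteria:
  - class: bzt.modules.monitoring.MonitoringCriteria
    subject: local/cpu
    condition: '>'
    threshold: 90
    timeframe: 5s
```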