Extension chaostoolkit-reliably¶
Version | 0.79.0 |
Repository | https://github.com/chaostoolkit-incubator/chaostoolkit-reliably |
Chaos Toolkit extension for Reliably.
Install¶
To be used from your experiment, this package must be installed in the Python environment where chaostoolkit already lives.
$ pip install chaostoolkit-reliably
Authentication¶
To use this package, you must have registered with Reliably's services.
Then you need to set some environment variables as secrets.
RELIABLY_TOKEN
: the token to authenticate against Reliably's API
RELIABLY_HOST
: the hostname to connect to, defaults to app.reliably.com
{
"secrets": {
"reliably": {
"token": {
"type": "env",
"key": "RELIABLY_TOKEN"
},
"host": {
"type": "env",
"key": "RELIABLY_HOST",
"default": "app.reliably.com"
}
}
}
}
Usage¶
As Steady State Hypothesis or Method¶
This extension offers a variety of probes and tolerances ready to be used in your steady-state blocks.
For instance:
{
"version": "1.0.0",
"title": "SLO error-count-3h / Error budget 10%",
"description": "Monitor the health of our demo service from our users perspective and ensure they have a high-quality experience",
"runtime": {
"hypothesis": {
"strategy": "after-method-only"
}
},
"steady-state-hypothesis": {
"title": "Compute SLO and validate its Error Budget with our target",
"probes": [
{
"type": "probe",
"name": "get-slo",
"tolerance": {
"type": "probe",
"name": "there-should-be-error-budget-left",
"provider": {
"type": "python",
"module": "chaosreliably.activities.slo.tolerances",
"func": "has_error_budget_left",
"arguments": {
"name": "cloudrun-service-availability"
}
}
},
"provider": {
"type": "python",
"module": "chaosreliably.activities.slo.probes",
"func": "compute_slo",
"arguments": {
"slo": {
"apiVersion": "sre.google.com/v2",
"kind": "ServiceLevelObjective",
"metadata": {
"name": "cloudrun-service-availability",
"labels": {
"service_name": "cloudrun",
"feature_name": "service",
"slo_name": "availability"
}
},
"spec": {
"description": "Availability of Cloud Run service",
"backend": "cloud_monitoring_mql",
"method": "good_bad_ratio",
"exporters": [
],
"service_level_indicator": {
"filter_good": "fetch cloud_run_revision | metric 'run.googleapis.com/request_count' | filter resource.project_id == '${CLOUDRUN_PROJECT_ID}' | filter resource.service_name == '${CLOUDRUN_SERVICE_NAME}' | filter metric.response_code_class == '2xx'",
"filter_valid": "fetch cloud_run_revision | metric 'run.googleapis.com/request_count' | filter resource.project_id == '${CLOUDRUN_PROJECT_ID}' | filter resource.service_name == '${CLOUDRUN_SERVICE_NAME}'"
},
"goal": 0.9
}
},
"config": {
"backends": {
"cloud_monitoring_mql": {
"project_id": "${STACKDRIVER_HOST_PROJECT_ID}"
}
},
"error_budget_policies": {
"default": {
"steps": [
{
"name": "3 hours",
"burn_rate_threshold": 9,
"alert": false,
"window": 10800,
"message_alert": "Page the SRE team to defend the SLO",
"message_ok": "Last 3 hours on track"
}
]
}
}
}
}
}
}
]
},
"method": [
{
"name": "inject-traffic-into-endpoint",
"type": "action",
"background": true,
"provider": {
"func": "inject_gradual_traffic_into_endpoint",
"type": "python",
"module": "chaosreliably.activities.load.actions",
"arguments": {
"endpoint": "${ENDPOINT}",
"step_duration": 30,
"test_duration": 300,
"step_additional_vu": 3,
"vu_per_second_rate": 1,
"results_json_filepath": "./load-test-results.json"
}
}
}
]
}
The example above computes the availability SLO of our Cloud Run service over a three-hour window and validates that we have not spent more error budget than our policy allows.
As controls¶
You can use the controls provided by chaostoolkit-reliably
to track your experiments within Reliably. Reliably inserts the appropriate control block automatically when you import the experiment.
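For reference, the inserted block typically mirrors the experiment control module documented below (a sketch; Reliably generates the exact block for you):
{
    "controls": [
        {
            "name": "chaosreliably",
            "provider": {
                "type": "python",
                "module": "chaosreliably.controls.experiment"
            }
        }
    ]
}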
Contribute¶
From a code perspective, if you wish to contribute, you will need a Python 3.6+ environment. Please fork this project, write unit tests to cover the proposed changes, implement the changes, ensure they meet the formatting standards set out by black, ruff, isort, and mypy, add an entry into CHANGELOG.md, and then raise a PR to the repository for review.
Please refer to the formatting section for more information on the formatting standards.
The Chaos Toolkit projects require all contributors to sign a Developer Certificate of Origin (DCO) on each commit they would like to merge into the master branch of the repository. Please make sure you can abide by the rules of the DCO before submitting a PR.
Develop¶
If you wish to develop on this project, make sure to install the development dependencies. First, install pdm globally and create a virtual environment:
$ pdm venv create
$ pdm use
$ $(pdm venv activate)
Then install the dependencies:
$ pdm sync -d
Test¶
To run the tests for the project execute the following:
$ pdm run test
Formatting and Linting¶
We use a combination of black, ruff, isort, mypy, and bandit to both lint and format this repository's code.
Before raising a Pull Request, we recommend you run formatting against your code with:
$ pdm run format
This will automatically format any code that doesn’t adhere to the formatting standards.
As some things are not picked up by the formatting, we also recommend you run:
$ pdm run lint
This ensures that issues such as unused import statements and overly long lines are also picked up. It will also report any errors mypy finds.
Exported Controls¶
autopause¶
This module exports controls covering the following phases of the execution of an experiment:
Level | Before | After |
---|---|---|
Experiment Loading | False | False |
Experiment | False | False |
Steady-state Hypothesis | False | False |
Method | False | False |
Rollback | False | False |
Activities | False | False |
In addition, the controls may define the following:
Level | Enabled |
---|---|
Validate Control | False |
Configure Control | True |
Cleanup Control | False |
To use this control module, please add the following section to your experiment:
{
"controls": [
{
"name": "chaosreliably",
"provider": {
"type": "python",
"module": "chaosreliably.controls.autopause"
}
}
]
}
controls:
- name: chaosreliably
provider:
module: chaosreliably.controls.autopause
type: python
This block may also be enabled at any other level (steady-state hypothesis or activity) to focus only on that level.
When enabled at the experiment level, by default, all sub-levels are also applied unless you set the automatic
properties to false
.
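For instance, here is a sketch scoping the control to a single probe rather than the whole experiment (the probe itself is elided):
{
    "type": "probe",
    "name": "my-probe",
    "controls": [
        {
            "name": "chaosreliably",
            "provider": {
                "type": "python",
                "module": "chaosreliably.controls.autopause"
            }
        }
    ],
    "...": "..."
}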
chatgpt¶
This module exports controls covering the following phases of the execution of an experiment:
Level | Before | After |
---|---|---|
Experiment Loading | False | False |
Experiment | False | False |
Steady-state Hypothesis | False | False |
Method | False | False |
Rollback | False | False |
Activities | False | False |
In addition, the controls may define the following:
Level | Enabled |
---|---|
Validate Control | False |
Configure Control | True |
Cleanup Control | False |
To use this control module, please add the following section to your experiment:
{
"controls": [
{
"name": "chaosreliably",
"provider": {
"type": "python",
"module": "chaosreliably.controls.chatgpt"
}
}
]
}
controls:
- name: chaosreliably
provider:
module: chaosreliably.controls.chatgpt
type: python
This block may also be enabled at any other level (steady-state hypothesis or activity) to focus only on that level.
When enabled at the experiment level, by default, all sub-levels are also applied unless you set the automatic
properties to false
.
experiment¶
This module exports controls covering the following phases of the execution of an experiment:
Level | Before | After |
---|---|---|
Experiment Loading | False | False |
Experiment | False | False |
Steady-state Hypothesis | False | False |
Method | False | False |
Rollback | False | False |
Activities | False | False |
In addition, the controls may define the following:
Level | Enabled |
---|---|
Validate Control | False |
Configure Control | True |
Cleanup Control | False |
To use this control module, please add the following section to your experiment:
{
"controls": [
{
"name": "chaosreliably",
"provider": {
"type": "python",
"module": "chaosreliably.controls.experiment"
}
}
]
}
controls:
- name: chaosreliably
provider:
module: chaosreliably.controls.experiment
type: python
This block may also be enabled at any other level (steady-state hypothesis or activity) to focus only on that level.
When enabled at the experiment level, by default, all sub-levels are also applied unless you set the automatic
properties to false
.
metrics¶
This module exports controls covering the following phases of the execution of an experiment:
Level | Before | After |
---|---|---|
Experiment Loading | False | False |
Experiment | False | False |
Steady-state Hypothesis | False | False |
Method | False | False |
Rollback | False | False |
Activities | False | False |
In addition, the controls may define the following:
Level | Enabled |
---|---|
Validate Control | False |
Configure Control | True |
Cleanup Control | False |
To use this control module, please add the following section to your experiment:
{
"controls": [
{
"name": "chaosreliably",
"provider": {
"type": "python",
"module": "chaosreliably.controls.metrics"
}
}
]
}
controls:
- name: chaosreliably
provider:
module: chaosreliably.controls.metrics
type: python
This block may also be enabled at any other level (steady-state hypothesis or activity) to focus only on that level.
When enabled at the experiment level, by default, all sub-levels are also applied unless you set the automatic
properties to false
.
prechecks¶
This module exports controls covering the following phases of the execution of an experiment:
Level | Before | After |
---|---|---|
Experiment Loading | False | False |
Experiment | False | False |
Steady-state Hypothesis | False | False |
Method | False | False |
Rollback | False | False |
Activities | False | False |
In addition, the controls may define the following:
Level | Enabled |
---|---|
Validate Control | False |
Configure Control | True |
Cleanup Control | False |
To use this control module, please add the following section to your experiment:
{
"controls": [
{
"name": "chaosreliably",
"provider": {
"type": "python",
"module": "chaosreliably.controls.prechecks"
}
}
]
}
controls:
- name: chaosreliably
provider:
module: chaosreliably.controls.prechecks
type: python
This block may also be enabled at any other level (steady-state hypothesis or activity) to focus only on that level.
When enabled at the experiment level, by default, all sub-levels are also applied unless you set the automatic
properties to false
.
safeguard¶
This module exports controls covering the following phases of the execution of an experiment:
Level | Before | After |
---|---|---|
Experiment Loading | False | False |
Experiment | False | False |
Steady-state Hypothesis | False | False |
Method | False | False |
Rollback | False | False |
Activities | False | False |
In addition, the controls may define the following:
Level | Enabled |
---|---|
Validate Control | False |
Configure Control | True |
Cleanup Control | False |
To use this control module, please add the following section to your experiment:
{
"controls": [
{
"name": "chaosreliably",
"provider": {
"type": "python",
"module": "chaosreliably.controls.safeguard"
}
}
]
}
controls:
- name: chaosreliably
provider:
module: chaosreliably.controls.safeguard
type: python
This block may also be enabled at any other level (steady-state hypothesis or activity) to focus only on that level.
When enabled at the experiment level, by default, all sub-levels are also applied unless you set the automatic
properties to false
.
Exported Activities¶
dns¶
dns_response_is_superset
¶
Type | tolerance |
Module | chaosreliably.activities.dns.tolerances |
Name | dns_response_is_superset |
Return | boolean |
Validates the response from the DNS resolve_name
probe is a superset of the given set of values.
Signature:
def dns_response_is_superset(expect: List[str],
value: List[str] = None) -> bool:
pass
Arguments:
Name | Type | Default | Required |
---|---|---|---|
expect | list | | Yes |
value | list | null | No |
Tolerances declare the value
argument which is automatically injected by Chaos Toolkit as the output of the probe they are evaluating.
Usage:
{
"steady-state-hypothesis": {
"title": "...",
"probes": [
{
"type": "probe",
"tolerance": {
"name": "dns-response-is-superset",
"type": "tolerance",
"provider": {
"type": "python",
"module": "chaosreliably.activities.dns.tolerances",
"func": "dns_response_is_superset",
"arguments": {
"expect": []
}
}
},
"...": "..."
}
]
}
}
steady-state-hypothesis:
probes:
- '...': '...'
tolerance:
name: dns-response-is-superset
provider:
arguments:
expect: []
func: dns_response_is_superset
module: chaosreliably.activities.dns.tolerances
type: python
type: tolerance
type: probe
title: '...'
dns_response_must_be_equal
¶
Type | tolerance |
Module | chaosreliably.activities.dns.tolerances |
Name | dns_response_must_be_equal |
Return | boolean |
Validates the response from the DNS resolve_name
probe is exactly equal to the given set.
Signature:
def dns_response_must_be_equal(expect: List[str],
value: List[str] = None) -> bool:
pass
Arguments:
Name | Type | Default | Required |
---|---|---|---|
expect | list | | Yes |
value | list | null | No |
Tolerances declare the value
argument which is automatically injected by Chaos Toolkit as the output of the probe they are evaluating.
Usage:
{
"steady-state-hypothesis": {
"title": "...",
"probes": [
{
"type": "probe",
"tolerance": {
"name": "dns-response-must-be-equal",
"type": "tolerance",
"provider": {
"type": "python",
"module": "chaosreliably.activities.dns.tolerances",
"func": "dns_response_must_be_equal",
"arguments": {
"expect": []
}
}
},
"...": "..."
}
]
}
}
steady-state-hypothesis:
probes:
- '...': '...'
tolerance:
name: dns-response-must-be-equal
provider:
arguments:
expect: []
func: dns_response_must_be_equal
module: chaosreliably.activities.dns.tolerances
type: python
type: tolerance
type: probe
title: '...'
resolve_name
¶
Type | probe |
Module | chaosreliably.activities.dns.probes |
Name | resolve_name |
Return | list |
Resolve a domain for a specific type from the given nameservers.
Signature:
def resolve_name(domain: str,
nameservers: Sequence[str] = ('8.8.8.8', ),
resolve_type: str = 'A') -> List[str]:
pass
Arguments:
Name | Type | Default | Required |
---|---|---|---|
domain | string | | Yes |
nameservers | object | [“8.8.8.8”] | No |
resolve_type | string | “A” | No |
Usage:
{
"name": "resolve-name",
"type": "probe",
"provider": {
"type": "python",
"module": "chaosreliably.activities.dns.probes",
"func": "resolve_name",
"arguments": {
"domain": ""
}
}
}
name: resolve-name
provider:
arguments:
domain: ''
func: resolve_name
module: chaosreliably.activities.dns.probes
type: python
type: probe
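Putting a probe and a tolerance together, here is a sketch of a steady-state probe asserting that a domain resolves to an exact set of addresses (the domain and address are placeholders):
{
    "type": "probe",
    "name": "resolve-name",
    "tolerance": {
        "type": "tolerance",
        "name": "dns-response-must-be-equal",
        "provider": {
            "type": "python",
            "module": "chaosreliably.activities.dns.tolerances",
            "func": "dns_response_must_be_equal",
            "arguments": {
                "expect": ["192.0.2.10"]
            }
        }
    },
    "provider": {
        "type": "python",
        "module": "chaosreliably.activities.dns.probes",
        "func": "resolve_name",
        "arguments": {
            "domain": "example.com"
        }
    }
}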
gh¶
cancel_workflow_run
¶
Type | action |
Module | chaosreliably.activities.gh.actions |
Name | cancel_workflow_run |
Return | mapping |
Cancels a GitHub Workflow run.
The target run is chosen from the list of workflow runs matching the given parameters.
To refine the choice, you can set commit_message_pattern
which is a regex matching the commit message that triggered the event.
If you set at_random
, a run will be picked from the matching list at random; otherwise, the first match will be used.
You may also filter down by workflow_id
to ensure only runs of a specific workflow are considered.
Finally, if you know the workflow_run_id
you may directly target it.
See the parameters' meaning and values at: https://docs.github.com/en/rest/actions/workflow-runs?apiVersion=2022-11-28#list-workflow-runs-for-a-repository
Signature:
def cancel_workflow_run(
repo: str,
at_random: bool = False,
commit_message_pattern: Optional[str] = None,
actor: Optional[str] = None,
branch: str = 'main',
event: str = 'push',
status: str = 'in_progress',
window: str = '5d',
workflow_id: Optional[str] = None,
workflow_run_id: Optional[str] = None,
exclude_pull_requests: bool = False,
configuration: Dict[str, Dict[str, str]] = None,
secrets: Dict[str, Dict[str, str]] = None) -> Dict[str, Any]:
pass
Arguments:
Name | Type | Default | Required |
---|---|---|---|
repo | string | | Yes |
at_random | boolean | false | No |
commit_message_pattern | object | null | No |
actor | object | null | No |
branch | string | “main” | No |
event | string | “push” | No |
status | string | “in_progress” | No |
window | string | “5d” | No |
workflow_id | object | null | No |
workflow_run_id | object | null | No |
exclude_pull_requests | boolean | false | No |
Usage:
{
"name": "cancel-workflow-run",
"type": "action",
"provider": {
"type": "python",
"module": "chaosreliably.activities.gh.actions",
"func": "cancel_workflow_run",
"arguments": {
"repo": ""
}
}
}
name: cancel-workflow-run
provider:
arguments:
repo: ''
func: cancel_workflow_run
module: chaosreliably.activities.gh.actions
type: python
type: action
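A more targeted sketch, filtering on a specific workflow and commit message pattern and cancelling a random match (the repository, workflow file, and pattern are placeholders):
{
    "name": "cancel-workflow-run",
    "type": "action",
    "provider": {
        "type": "python",
        "module": "chaosreliably.activities.gh.actions",
        "func": "cancel_workflow_run",
        "arguments": {
            "repo": "myorg/myrepo",
            "workflow_id": "build.yaml",
            "commit_message_pattern": "^deploy:.*",
            "at_random": true
        }
    }
}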
closed_pr_ratio
¶
Type | probe |
Module | chaosreliably.activities.gh.probes |
Name | closed_pr_ratio |
Return | number |
Computes a ratio of closed PRs during the given window
in a repo
.
By default, this only computes the ratio for PRs that were opened and closed during the given period. When only_opened_and_closed_during_window
is disabled, it computes the ratio of PRs closed during the period against all PRs still open, whether they were opened before the period started or not.
The former measures a team's latency while the latter is closer to its throughput.
The repo
should be given as owner/repo
and the window should be given as a pattern like this: <int>s|m|d|w
(seconds, minutes, days, weeks).
Signature:
def closed_pr_ratio(repo: str,
base: str = 'main',
only_opened_and_closed_during_window: bool = True,
window: str = '5d',
configuration: Dict[str, Dict[str, str]] = None,
secrets: Dict[str, Dict[str, str]] = None) -> float:
pass
Arguments:
Name | Type | Default | Required |
---|---|---|---|
repo | string | | Yes |
base | string | “main” | No |
only_opened_and_closed_during_window | boolean | true | No |
window | string | “5d” | No |
Usage:
{
"name": "closed-pr-ratio",
"type": "probe",
"provider": {
"type": "python",
"module": "chaosreliably.activities.gh.probes",
"func": "closed_pr_ratio",
"arguments": {
"repo": ""
}
}
}
name: closed-pr-ratio
provider:
arguments:
repo: ''
func: closed_pr_ratio
module: chaosreliably.activities.gh.probes
type: python
type: probe
get_workflow_most_recent_run
¶
Type | probe |
Module | chaosreliably.activities.gh.probes |
Name | get_workflow_most_recent_run |
Return | Optional[Dict[str, Any]] |
Get the most recent run of the given GitHub Workflow.
If no runs are returned when there should be, please check whether GitHub has fixed https://github.com/orgs/community/discussions/53266
See the parameters' meaning and values at: https://docs.github.com/en/rest/actions/workflow-runs?apiVersion=2022-11-28#list-workflow-runs-for-a-workflow
Signature:
def get_workflow_most_recent_run(
repo: str,
workflow_id: str,
actor: Optional[str] = None,
branch: str = 'main',
event: str = 'push',
status: str = 'in_progress',
exclude_pull_requests: bool = False,
configuration: Dict[str, Dict[str, str]] = None,
secrets: Dict[str, Dict[str, str]] = None) -> Optional[Dict[str, Any]]:
pass
Arguments:
Name | Type | Default | Required |
---|---|---|---|
repo | string | | Yes |
workflow_id | string | | Yes |
actor | object | null | No |
branch | string | “main” | No |
event | string | “push” | No |
status | string | “in_progress” | No |
exclude_pull_requests | boolean | false | No |
Usage:
{
"name": "get-workflow-most-recent-run",
"type": "probe",
"provider": {
"type": "python",
"module": "chaosreliably.activities.gh.probes",
"func": "get_workflow_most_recent_run",
"arguments": {
"repo": "",
"workflow_id": ""
}
}
}
name: get-workflow-most-recent-run
provider:
arguments:
repo: ''
workflow_id: ''
func: get_workflow_most_recent_run
module: chaosreliably.activities.gh.probes
type: python
type: probe
get_workflow_most_recent_run_billing_usage
¶
Type | probe |
Module | chaosreliably.activities.gh.probes |
Name | get_workflow_most_recent_run_billing_usage |
Return | Optional[Dict[str, Any]] |
Get the billing usage of the most recent run of the given GitHub Workflow.
See the parameters' meaning and values at: https://docs.github.com/en/rest/actions/workflow-runs?apiVersion=2022-11-28#get-workflow-run-usage
Signature:
def get_workflow_most_recent_run_billing_usage(
repo: str,
workflow_id: str,
actor: Optional[str] = None,
branch: str = 'main',
event: str = 'push',
status: str = 'in_progress',
exclude_pull_requests: bool = False,
configuration: Dict[str, Dict[str, str]] = None,
secrets: Dict[str, Dict[str, str]] = None) -> Optional[Dict[str, Any]]:
pass
Arguments:
Name | Type | Default | Required |
---|---|---|---|
repo | string | | Yes |
workflow_id | string | | Yes |
actor | object | null | No |
branch | string | “main” | No |
event | string | “push” | No |
status | string | “in_progress” | No |
exclude_pull_requests | boolean | false | No |
Usage:
{
"name": "get-workflow-most-recent-run-billing-usage",
"type": "probe",
"provider": {
"type": "python",
"module": "chaosreliably.activities.gh.probes",
"func": "get_workflow_most_recent_run_billing_usage",
"arguments": {
"repo": "",
"workflow_id": ""
}
}
}
name: get-workflow-most-recent-run-billing-usage
provider:
arguments:
repo: ''
workflow_id: ''
func: get_workflow_most_recent_run_billing_usage
module: chaosreliably.activities.gh.probes
type: python
type: probe
list_workflow_runs
¶
Type | probe |
Module | chaosreliably.activities.gh.probes |
Name | list_workflow_runs |
Return | mapping |
List GitHub Workflow runs.
If no runs are returned when there should be, please check whether GitHub has fixed https://github.com/orgs/community/discussions/53266
See the parameters' meaning and values at: https://docs.github.com/en/rest/actions/workflow-runs?apiVersion=2022-11-28#list-workflow-runs-for-a-repository
Signature:
def list_workflow_runs(
repo: str,
actor: Optional[str] = None,
branch: str = 'main',
event: str = 'push',
status: str = 'in_progress',
window: str = '5d',
exclude_pull_requests: bool = False,
configuration: Dict[str, Dict[str, str]] = None,
secrets: Dict[str, Dict[str, str]] = None) -> Dict[str, Any]:
pass
Arguments:
Name | Type | Default | Required |
---|---|---|---|
repo | string | | Yes |
actor | object | null | No |
branch | string | “main” | No |
event | string | “push” | No |
status | string | “in_progress” | No |
window | string | “5d” | No |
exclude_pull_requests | boolean | false | No |
Usage:
{
"name": "list-workflow-runs",
"type": "probe",
"provider": {
"type": "python",
"module": "chaosreliably.activities.gh.probes",
"func": "list_workflow_runs",
"arguments": {
"repo": ""
}
}
}
name: list-workflow-runs
provider:
arguments:
repo: ''
func: list_workflow_runs
module: chaosreliably.activities.gh.probes
type: python
type: probe
percentile_under
¶
Type | tolerance |
Module | chaosreliably.activities.gh.tolerances |
Name | percentile_under |
Return | boolean |
Checks that the values under the given percentile
are below the given duration.
For instance, for PR durations, this is helpful to verify that 99% of them were closed in less than the given duration.
v = pr_duration("chaostoolkit/chaostoolkit", "master", window=None)
p = percentile_under(0.99, duration="1d", value=v)
Signature:
def percentile_under(percentile: float,
duration: str = '1d',
value: Optional[List[Union[int, float]]] = None) -> bool:
pass
Arguments:
Name | Type | Default | Required |
---|---|---|---|
percentile | number | | Yes |
duration | string | “1d” | No |
value | object | null | No |
Tolerances declare the value
argument which is automatically injected by Chaos Toolkit as the output of the probe they are evaluating.
Usage:
{
"steady-state-hypothesis": {
"title": "...",
"probes": [
{
"type": "probe",
"tolerance": {
"name": "percentile-under",
"type": "tolerance",
"provider": {
"type": "python",
"module": "chaosreliably.activities.gh.tolerances",
"func": "percentile_under",
"arguments": {
"percentile": null
}
}
},
"...": "..."
}
]
}
}
steady-state-hypothesis:
probes:
- '...': '...'
tolerance:
name: percentile-under
provider:
arguments:
percentile: null
func: percentile_under
module: chaosreliably.activities.gh.tolerances
type: python
type: tolerance
type: probe
title: '...'
pr_duration
¶
Type | probe |
Module | chaosreliably.activities.gh.probes |
Name | pr_duration |
Return | list |
Get a list of durations of opened pull requests.
If you don’t set a window (by setting window
to None
), then it returns the durations of all PRs ever opened in this repository. Otherwise, it only returns the durations of PRs that were opened or closed within that window.
The repo
should be given as owner/repo
and the window should be given as a pattern like this: <int>s|m|d|w
(seconds, minutes, days, weeks).
Signature:
def pr_duration(repo: str,
base: str = 'main',
window: Optional[str] = '5d',
configuration: Dict[str, Dict[str, str]] = None,
secrets: Dict[str, Dict[str, str]] = None) -> List[float]:
pass
Arguments:
Name | Type | Default | Required |
---|---|---|---|
repo | string | | Yes |
base | string | “main” | No |
window | object | “5d” | No |
Usage:
{
"name": "pr-duration",
"type": "probe",
"provider": {
"type": "python",
"module": "chaosreliably.activities.gh.probes",
"func": "pr_duration",
"arguments": {
"repo": ""
}
}
}
name: pr-duration
provider:
arguments:
repo: ''
func: pr_duration
module: chaosreliably.activities.gh.probes
type: python
type: probe
ratio_above
¶
Type | tolerance |
Module | chaosreliably.activities.gh.tolerances |
Name | ratio_above |
Return | boolean |
Validates the ratio returned by a probe is strictly greater than the target
.
Signature:
def ratio_above(target: float, value: float = 0.0) -> bool:
pass
Arguments:
Name | Type | Default | Required |
---|---|---|---|
target | number | | Yes |
value | number | 0.0 | No |
Tolerances declare the value
argument which is automatically injected by Chaos Toolkit as the output of the probe they are evaluating.
Usage:
{
"steady-state-hypothesis": {
"title": "...",
"probes": [
{
"type": "probe",
"tolerance": {
"name": "ratio-above",
"type": "tolerance",
"provider": {
"type": "python",
"module": "chaosreliably.activities.gh.tolerances",
"func": "ratio_above",
"arguments": {
"target": null
}
}
},
"...": "..."
}
]
}
}
steady-state-hypothesis:
probes:
- '...': '...'
tolerance:
name: ratio-above
provider:
arguments:
target: null
func: ratio_above
module: chaosreliably.activities.gh.tolerances
type: python
type: tolerance
type: probe
title: '...'
ratio_above_or_equal
¶
Type | tolerance |
Module | chaosreliably.activities.gh.tolerances |
Name | ratio_above_or_equal |
Return | boolean |
Validates the ratio returned by a probe is greater than or equal to the target
.
Signature:
def ratio_above_or_equal(target: float, value: float = 0.0) -> bool:
pass
Arguments:
Name | Type | Default | Required |
---|---|---|---|
target | number | | Yes |
value | number | 0.0 | No |
Tolerances declare the value
argument which is automatically injected by Chaos Toolkit as the output of the probe they are evaluating.
Usage:
{
"steady-state-hypothesis": {
"title": "...",
"probes": [
{
"type": "probe",
"tolerance": {
"name": "ratio-above-or-equal",
"type": "tolerance",
"provider": {
"type": "python",
"module": "chaosreliably.activities.gh.tolerances",
"func": "ratio_above_or_equal",
"arguments": {
"target": null
}
}
},
"...": "..."
}
]
}
}
steady-state-hypothesis:
probes:
- '...': '...'
tolerance:
name: ratio-above-or-equal
provider:
arguments:
target: null
func: ratio_above_or_equal
module: chaosreliably.activities.gh.tolerances
type: python
type: tolerance
type: probe
title: '...'
ratio_under
¶
Type | tolerance |
Module | chaosreliably.activities.gh.tolerances |
Name | ratio_under |
Return | boolean |
Validates the ratio returned by a probe is strictly below the target
.
Signature:
def ratio_under(target: float, value: float = 0.0) -> bool:
pass
Arguments:
Name | Type | Default | Required |
---|---|---|---|
target | number | | Yes |
value | number | 0.0 | No |
Tolerances declare the value
argument which is automatically injected by Chaos Toolkit as the output of the probe they are evaluating.
Usage:
{
"steady-state-hypothesis": {
"title": "...",
"probes": [
{
"type": "probe",
"tolerance": {
"name": "ratio-under",
"type": "tolerance",
"provider": {
"type": "python",
"module": "chaosreliably.activities.gh.tolerances",
"func": "ratio_under",
"arguments": {
"target": null
}
}
},
"...": "..."
}
]
}
}
steady-state-hypothesis:
probes:
- '...': '...'
tolerance:
name: ratio-under
provider:
arguments:
target: null
func: ratio_under
module: chaosreliably.activities.gh.tolerances
type: python
type: tolerance
type: probe
title: '...'
ratio_under_or_equal
¶
Type | tolerance |
Module | chaosreliably.activities.gh.tolerances |
Name | ratio_under_or_equal |
Return | boolean |
Validates the ratio returned by a probe is below or equal to the target
.
Signature:
def ratio_under_or_equal(target: float, value: float = 0.0) -> bool:
pass
Arguments:
Name | Type | Default | Required |
---|---|---|---|
target | number | | Yes |
value | number | 0.0 | No |
Tolerances declare the value
argument which is automatically injected by Chaos Toolkit as the output of the probe they are evaluating.
Usage:
{
"steady-state-hypothesis": {
"title": "...",
"probes": [
{
"type": "probe",
"tolerance": {
"name": "ratio-under-or-equal",
"type": "tolerance",
"provider": {
"type": "python",
"module": "chaosreliably.activities.gh.tolerances",
"func": "ratio_under_or_equal",
"arguments": {
"target": null
}
}
},
"...": "..."
}
]
}
}
steady-state-hypothesis:
probes:
- '...': '...'
tolerance:
name: ratio-under-or-equal
provider:
arguments:
target: null
func: ratio_under_or_equal
module: chaosreliably.activities.gh.tolerances
type: python
type: tolerance
type: probe
title: '...'
http¶
measure_response_time
¶
Type | probe |
Module | chaosreliably.activities.http.probes |
Name | measure_response_time |
Return | number |
Measure the response time of the GET request to the given URL.
Signature:
def measure_response_time(url: str) -> float:
pass
Arguments:
Name | Type | Default | Required |
---|---|---|---|
url | string | | Yes |
Usage:
{
"name": "measure-response-time",
"type": "probe",
"provider": {
"type": "python",
"module": "chaosreliably.activities.http.probes",
"func": "measure_response_time",
"arguments": {
"url": ""
}
}
}
name: measure-response-time
provider:
arguments:
url: ''
func: measure_response_time
module: chaosreliably.activities.http.probes
type: python
type: probe
response_time_must_be_under
¶
Type | tolerance |
Module | chaosreliably.activities.http.tolerances |
Name | response_time_must_be_under |
Return | boolean |
Validates the response time is under the given latency.
Use this as the tolerance of the chaosreliably.activities.http.probes.measure_response_time
probe.
Signature:
def response_time_must_be_under(latency: float, value: float = 0.0) -> bool:
pass
Arguments:
Name | Type | Default | Required |
---|---|---|---|
latency | number | | Yes |
value | number | 0.0 | No |
Tolerances declare the value
argument which is automatically injected by Chaos Toolkit as the output of the probe they are evaluating.
Usage:
{
"steady-state-hypothesis": {
"title": "...",
"probes": [
{
"type": "probe",
"tolerance": {
"name": "response-time-must-be-under",
"type": "tolerance",
"provider": {
"type": "python",
"module": "chaosreliably.activities.http.tolerances",
"func": "response_time_must_be_under",
"arguments": {
"latency": null
}
}
},
"...": "..."
}
]
}
}
steady-state-hypothesis:
probes:
- '...': '...'
tolerance:
name: response-time-must-be-under
provider:
arguments:
latency: null
func: response_time_must_be_under
module: chaosreliably.activities.http.tolerances
type: python
type: tolerance
type: probe
title: '...'
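Pairing the two, here is a sketch asserting that a GET against a placeholder URL responds in under half a second (assuming the measured response time is expressed in seconds):
{
    "type": "probe",
    "name": "measure-response-time",
    "tolerance": {
        "type": "tolerance",
        "name": "response-time-must-be-under",
        "provider": {
            "type": "python",
            "module": "chaosreliably.activities.http.tolerances",
            "func": "response_time_must_be_under",
            "arguments": {
                "latency": 0.5
            }
        }
    },
    "provider": {
        "type": "python",
        "module": "chaosreliably.activities.http.probes",
        "func": "measure_response_time",
        "arguments": {
            "url": "https://example.com"
        }
    }
}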
load¶
inject_gradual_traffic_into_endpoint
¶
Type | action |
Module | chaosreliably.activities.load.actions |
Name | inject_gradual_traffic_into_endpoint |
Return | mapping |
Load traffic into the given endpoint
. Uses an approach that creates an incremental load into the endpoint rather than swarming it. The point of this action is to ensure your endpoint is active while you perform another action. This means you likely want to run this action in the background
.
You may set a bearer token if your application uses one to authenticate. Pass test_bearer_token
as a secret key in the secrets
payload.
This action returns a dictionary payload of the load test results.
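For example, here is a sketch of declaring such a secret from an environment variable (the load scope name and MY_BEARER_TOKEN variable are placeholders of your choosing):
{
    "secrets": {
        "load": {
            "test_bearer_token": {
                "type": "env",
                "key": "MY_BEARER_TOKEN"
            }
        }
    }
}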
Signature:
def inject_gradual_traffic_into_endpoint(
endpoint: str,
step_duration: int = 5,
step_additional_vu: int = 1,
vu_per_second_rate: int = 1,
test_duration: int = 30,
results_json_filepath: Optional[str] = None,
enable_opentracing: bool = False,
configuration: Dict[str, Dict[str, str]] = None,
secrets: Dict[str, Dict[str, str]] = None) -> Dict[str, Any]:
pass
Arguments:
Name | Type | Default | Required |
---|---|---|---|
endpoint | string | | Yes |
step_duration | integer | 5 | No |
step_additional_vu | integer | 1 | No |
vu_per_second_rate | integer | 1 | No |
test_duration | integer | 30 | No |
results_json_filepath | object | null | No |
enable_opentracing | boolean | false | No |
Usage:
{
"name": "inject-gradual-traffic-into-endpoint",
"type": "action",
"provider": {
"type": "python",
"module": "chaosreliably.activities.load.actions",
"func": "inject_gradual_traffic_into_endpoint",
"arguments": {
"endpoint": ""
}
}
}
name: inject-gradual-traffic-into-endpoint
provider:
arguments:
endpoint: ''
func: inject_gradual_traffic_into_endpoint
module: chaosreliably.activities.load.actions
type: python
type: action
load_test_result_field_should_be
¶
Type | probe |
Module | chaosreliably.activities.load.probes |
Name | load_test_result_field_should_be |
Return | boolean |
Reads a load test result and compares the field's value to the given expected value.
If the load test runs against many endpoints, specify which one must be validated by setting the result_item_name
to match the name
field.
Can only be used with the result from inject_gradual_traffic_into_endpoint.
Signature:
def load_test_result_field_should_be(
result_filepath: str,
field: str,
expect: int,
result_item_name: Optional[str] = None,
pass_if_file_is_missing: bool = True) -> bool:
pass
Arguments:
Name | Type | Default | Required |
---|---|---|---|
result_filepath | string | | Yes |
field | string | | Yes |
expect | integer | | Yes |
result_item_name | object | null | No |
pass_if_file_is_missing | boolean | true | No |
Usage:
{
"name": "load-test-result-field-should-be",
"type": "probe",
"provider": {
"type": "python",
"module": "chaosreliably.activities.load.probes",
"func": "load_test_result_field_should_be",
"arguments": {
"result_filepath": "",
"field": "",
"expect": 0
}
}
}
name: load-test-result-field-should-be
provider:
arguments:
expect: 0
field: ''
result_filepath: ''
func: load_test_result_field_should_be
module: chaosreliably.activities.load.probes
type: python
type: probe
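For instance, here is a sketch validating the results file written by the inject_gradual_traffic_into_endpoint example earlier on this page (the num_failures field name is illustrative; use a field that actually exists in your results file):
{
    "name": "load-test-should-have-no-failures",
    "type": "probe",
    "provider": {
        "type": "python",
        "module": "chaosreliably.activities.load.probes",
        "func": "load_test_result_field_should_be",
        "arguments": {
            "result_filepath": "./load-test-results.json",
            "field": "num_failures",
            "expect": 0
        }
    }
}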
load_test_result_field_should_be_greater_than
¶
Type | probe |
Module | chaosreliably.activities.load.probes |
Name | load_test_result_field_should_be_greater_than |
Return | boolean |
Reads a load test result and checks that the field's value is greater than the given expected value.
If the load test runs against many endpoints, specify which one must be validated by setting the result_item_name
to match the name
field.
Can only be used with the result from inject_gradual_traffic_into_endpoint.
Signature:
def load_test_result_field_should_be_greater_than(
result_filepath: str,
field: str,
expect: int,
result_item_name: Optional[str] = None,
pass_if_file_is_missing: bool = True) -> bool:
pass
Arguments:
Name | Type | Default | Required |
---|---|---|---|
result_filepath | string | | Yes |
field | string | | Yes |
expect | integer | | Yes |
result_item_name | object | null | No |
pass_if_file_is_missing | boolean | true | No |
Usage:
{
"name": "load-test-result-field-should-be-greater-than",
"type": "probe",
"provider": {
"type": "python",
"module": "chaosreliably.activities.load.probes",
"func": "load_test_result_field_should_be_greater_than",
"arguments": {
"result_filepath": "",
"field": "",
"expect": 0
}
}
}
name: load-test-result-field-should-be-greater-than
provider:
arguments:
expect: 0
field: ''
result_filepath: ''
func: load_test_result_field_should_be_greater_than
module: chaosreliably.activities.load.probes
type: python
type: probe
load_test_result_field_should_be_less_than
¶
Type | probe |
Module | chaosreliably.activities.load.probes |
Name | load_test_result_field_should_be_less_than |
Return | boolean |
Reads a load test result and checks that the field's value is less than the given expected value.
If the load test runs against many endpoints, specify which one must be validated by setting the result_item_name
to match the name
field.
Can only be used with the result from inject_gradual_traffic_into_endpoint.
Signature:
def load_test_result_field_should_be_less_than(
result_filepath: str,
field: str,
expect: int,
result_item_name: Optional[str] = None,
pass_if_file_is_missing: bool = True) -> bool:
pass
Arguments:
Name | Type | Default | Required |
---|---|---|---|
result_filepath | string | | Yes |
field | string | | Yes |
expect | integer | | Yes |
result_item_name | object | null | No |
pass_if_file_is_missing | boolean | true | No |
Usage:
{
"name": "load-test-result-field-should-be-less-than",
"type": "probe",
"provider": {
"type": "python",
"module": "chaosreliably.activities.load.probes",
"func": "load_test_result_field_should_be_less_than",
"arguments": {
"result_filepath": "",
"field": "",
"expect": 0
}
}
}
name: load-test-result-field-should-be-less-than
provider:
arguments:
expect: 0
field: ''
result_filepath: ''
func: load_test_result_field_should_be_less_than
module: chaosreliably.activities.load.probes
type: python
type: probe
run_load_test
¶
Type | action |
Module | chaosreliably.activities.load.actions |
Name | run_load_test |
Return | mapping |
Run a load test against the given URL.
This action uses oha rather than Locust and produces a different set of results. Please make sure to have the oha binary installed in your PATH.
Set the test_name
so you can use one of the probe against this action to retrieve its results.
Use the following parameters to adjust the defaults:
- connect_to: a comma-separated list of host:port addresses to connect to instead of the DNS values for the domain
- insecure: set to True to communicate with a TLS server whose certificate cannot be verified
- host: set a different HOST header
- method: the HTTP method to use
- headers: a comma-separated list of headers, such as "foo: bar,other: thing"
- body: the content of the request to send, if any
- content_type: the content-type of the request to send, if any
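As a sketch, here is a POST load test combining several of these parameters (the URL, headers, and body are placeholders):
{
    "name": "run-load-test",
    "type": "action",
    "provider": {
        "type": "python",
        "module": "chaosreliably.activities.load.actions",
        "func": "run_load_test",
        "arguments": {
            "url": "https://example.com/api",
            "method": "POST",
            "headers": "Accept: application/json",
            "content_type": "application/json",
            "body": "{\"hello\": \"world\"}",
            "duration": 60,
            "test_name": "post-load-test"
        }
    }
}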
Signature:
def run_load_test(url: str,
duration: int = 30,
qps: int = 5,
connect_to: str = '',
insecure: bool = False,
host: str = 'None',
method: str = 'GET',
headers: str = '',
body: str = '',
content_type: str = '',
test_name: str = 'load test') -> Dict[str, Any]:
pass
Arguments:
Name | Type | Default | Required |
---|---|---|---|
url | string | | Yes |
duration | integer | 30 | No |
qps | integer | 5 | No |
connect_to | string | "" | No |
insecure | boolean | false | No |
host | string | “None” | No |
method | string | “GET” | No |
headers | string | "" | No |
body | string | "" | No |
content_type | string | "" | No |
test_name | string | “load test” | No |
Usage:
{
"name": "run-load-test",
"type": "action",
"provider": {
"type": "python",
"module": "chaosreliably.activities.load.actions",
"func": "run_load_test",
"arguments": {
"url": ""
}
}
}
name: run-load-test
provider:
arguments:
url: ''
func: run_load_test
module: chaosreliably.activities.load.actions
type: python
type: action
verify_latency_percentile_from_load_test
¶
Type | probe |
Module | chaosreliably.activities.load.probes |
Name | verify_latency_percentile_from_load_test |
Return | boolean |
Verify that the percentile of a load test result, generated with the run_load_test
action, is lower than the expected value.
Signature:
def verify_latency_percentile_from_load_test(
lower_than: float,
percentile: str = 'p99',
test_name: str = 'load test') -> bool:
pass
Arguments:
Name | Type | Default | Required |
---|---|---|---|
lower_than | number | | Yes |
percentile | string | “p99” | No |
test_name | string | “load test” | No |
Usage:
{
"name": "verify-latency-percentile-from-load-test",
"type": "probe",
"provider": {
"type": "python",
"module": "chaosreliably.activities.load.probes",
"func": "verify_latency_percentile_from_load_test",
"arguments": {
"lower_than": null
}
}
}
name: verify-latency-percentile-from-load-test
provider:
arguments:
lower_than: null
func: verify_latency_percentile_from_load_test
module: chaosreliably.activities.load.probes
type: python
type: probe
pauses¶
pause_execution
¶
Type | |
Module | chaosreliably.activities.pauses |
Name | pause_execution |
Return | null |
Pause the execution of the experiment until the resume state has been received.
Signature:
def pause_execution(duration: int = 0,
username: str = '',
user_id: str = '') -> None:
pass
Arguments:
Name | Type | Default | Required |
---|---|---|---|
duration | integer | 0 | No |
username | string | "" | No |
user_id | string | "" | No |
Usage:
{
"name": "pause-execution",
"type": "",
"provider": {
"type": "python",
"module": "chaosreliably.activities.pauses",
"func": "pause_execution"
}
}
name: pause-execution
provider:
func: pause_execution
module: chaosreliably.activities.pauses
type: python
type: ''
safeguard¶
call_endpoint
¶
Type | probe |
Module | chaosreliably.activities.safeguard.probes |
Name | call_endpoint |
Return | boolean |
Signature:
def call_endpoint(url: str,
auth: Optional[str] = None,
configuration: Dict[str, Dict[str, str]] = None,
secrets: Dict[str, Dict[str, str]] = None) -> bool:
pass
Arguments:
Name | Type | Default | Required |
---|---|---|---|
url | string | | Yes |
auth | object | null | No |
Usage:
{
"name": "call-endpoint",
"type": "probe",
"provider": {
"type": "python",
"module": "chaosreliably.activities.safeguard.probes",
"func": "call_endpoint",
"arguments": {
"url": ""
}
}
}
name: call-endpoint
provider:
arguments:
url: ''
func: call_endpoint
module: chaosreliably.activities.safeguard.probes
type: python
type: probe
slo¶
compute_slo
¶
Type | probe |
Module | chaosreliably.activities.slo.probes |
Name | compute_slo |
Return | list |
Computes the given SLO and returns a list of outcomes for each error budget policy in the config
.
This is a wrapper around https://github.com/google/slo-generator so all of its documentation applies for the definition of the slo
and config
objects. The former contains the SLO description while the latter describes where to source SLIs from and the error budget policies.
The most notable difference is that we disable any exporters so there is no need to define them in your objects.
Signature:
def compute_slo(
slo: Dict[str, Any],
config: Dict[str, Any],
configuration: Dict[str, Dict[str, str]] = None,
secrets: Dict[str, Dict[str, str]] = None) -> List[Dict[str, Any]]:
pass
Arguments:
Name | Type | Default | Required |
---|---|---|---|
slo | mapping | | Yes |
config | mapping | | Yes |
Usage:
{
"name": "compute-slo",
"type": "probe",
"provider": {
"type": "python",
"module": "chaosreliably.activities.slo.probes",
"func": "compute_slo",
"arguments": {
"slo": {},
"config": {}
}
}
}
name: compute-slo
provider:
arguments:
config: {}
slo: {}
func: compute_slo
module: chaosreliably.activities.slo.probes
type: python
type: probe
has_error_budget_left
¶
Type | tolerance |
Module | chaosreliably.activities.slo.tolerances |
Name | has_error_budget_left |
Return | boolean |
Validates there is error budget left in the value returned by compute_slo.
Signature:
def has_error_budget_left(name: str,
value: Optional[List[Dict[str,
Any]]] = None) -> bool:
pass
Arguments:
Name | Type | Default | Required |
---|---|---|---|
name | string | | Yes |
value | object | null | No |
Tolerances declare the value
argument which is automatically injected by Chaos Toolkit as the output of the probe they are evaluating.
Usage:
{
"steady-state-hypothesis": {
"title": "...",
"probes": [
{
"type": "probe",
"tolerance": {
"name": "has-error-budget-left",
"type": "tolerance",
"provider": {
"type": "python",
"module": "chaosreliably.activities.slo.tolerances",
"func": "has_error_budget_left",
"arguments": {
"name": ""
}
}
},
"...": "..."
}
]
}
}
steady-state-hypothesis:
probes:
- '...': '...'
tolerance:
name: has-error-budget-left
provider:
arguments:
name: ''
func: has_error_budget_left
module: chaosreliably.activities.slo.tolerances
type: python
type: tolerance
type: probe
title: '...'
tls¶
expire_in_more_than
¶
Type | tolerance |
Module | chaosreliably.activities.tls.tolerances |
Name | expire_in_more_than |
Return | boolean |
Verifies that the certificate expires in more than the given duration.
The duration is expressed as follows: "s", "m", "d" or "w". For example, more than a week can be expressed as "7d" or "1w".
Signature:
def expire_in_more_than(duration: str = '7d',
value: Optional[Dict[str, Any]] = None) -> bool:
pass
Arguments:
Name | Type | Default | Required |
---|---|---|---|
duration | string | “7d” | No |
value | object | null | No |
Tolerances declare the value
argument which is automatically injected by Chaos Toolkit as the output of the probe they are evaluating.
Usage:
{
"steady-state-hypothesis": {
"title": "...",
"probes": [
{
"type": "probe",
"tolerance": {
"name": "expire-in-more-than",
"type": "tolerance",
"provider": {
"type": "python",
"module": "chaosreliably.activities.tls.tolerances",
"func": "expire_in_more_than"
}
},
"...": "..."
}
]
}
}
steady-state-hypothesis:
probes:
- '...': '...'
tolerance:
name: expire-in-more-than
provider:
func: expire_in_more_than
module: chaosreliably.activities.tls.tolerances
type: python
type: tolerance
type: probe
title: '...'
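Combined with the get_certificate_info probe documented below, here is a sketch asserting a certificate remains valid for at least another week (the host is a placeholder):
{
    "type": "probe",
    "name": "get-certificate-info",
    "tolerance": {
        "type": "tolerance",
        "name": "expire-in-more-than",
        "provider": {
            "type": "python",
            "module": "chaosreliably.activities.tls.tolerances",
            "func": "expire_in_more_than",
            "arguments": {
                "duration": "1w"
            }
        }
    },
    "provider": {
        "type": "python",
        "module": "chaosreliably.activities.tls.probes",
        "func": "get_certificate_info",
        "arguments": {
            "host": "example.com"
        }
    }
}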
get_certificate_info
¶
Type | probe |
Module | chaosreliably.activities.tls.probes |
Name | get_certificate_info |
Return | mapping |
Extract certificate information from the remote connection.
Signature:
def get_certificate_info(host: str, port: int = 443) -> Dict[str, Any]:
pass
Arguments:
Name | Type | Default | Required |
---|---|---|---|
host | string | | Yes |
port | integer | 443 | No |
Usage:
{
"name": "get-certificate-info",
"type": "probe",
"provider": {
"type": "python",
"module": "chaosreliably.activities.tls.probes",
"func": "get_certificate_info",
"arguments": {
"host": ""
}
}
}
name: get-certificate-info
provider:
arguments:
host: ''
func: get_certificate_info
module: chaosreliably.activities.tls.probes
type: python
type: probe
has_fingerprint
¶
Type | tolerance |
Module | chaosreliably.activities.tls.tolerances |
Name | has_fingerprint |
Return | boolean |
Validates the fingerprint of the certificate. The hash is one of "md5", "sha1" or "sha256".
Signature:
def has_fingerprint(fingerprint: str,
hash: str = 'sha256',
value: Optional[Dict[str, Any]] = None) -> bool:
pass
Arguments:
Name | Type | Default | Required |
---|---|---|---|
fingerprint | string | | Yes |
hash | string | “sha256” | No |
value | object | null | No |
Tolerances declare the value
argument which is automatically injected by Chaos Toolkit as the output of the probe they are evaluating.
Usage:
{
"steady-state-hypothesis": {
"title": "...",
"probes": [
{
"type": "probe",
"tolerance": {
"name": "has-fingerprint",
"type": "tolerance",
"provider": {
"type": "python",
"module": "chaosreliably.activities.tls.tolerances",
"func": "has_fingerprint",
"arguments": {
"fingerprint": ""
}
}
},
"...": "..."
}
]
}
}
steady-state-hypothesis:
probes:
- '...': '...'
tolerance:
name: has-fingerprint
provider:
arguments:
fingerprint: ''
func: has_fingerprint
module: chaosreliably.activities.tls.tolerances
type: python
type: tolerance
type: probe
title: '...'
has_subject_alt_names
¶
Type | tolerance |
Module | chaosreliably.activities.tls.tolerances |
Name | has_subject_alt_names |
Return | boolean |
Validates the certificate covers at least the given list of alternative names. If strict
is set, then the list of exported names must be exactly the provided ones.
Signature:
def has_subject_alt_names(alt_names: List[str],
strict: bool = True,
value: Optional[Dict[str, Any]] = None) -> bool:
pass
Arguments:
Name | Type | Default | Required |
---|---|---|---|
alt_names | list | | Yes |
strict | boolean | true | No |
value | object | null | No |
Tolerances declare the value
argument which is automatically injected by Chaos Toolkit as the output of the probe they are evaluating.
Usage:
{
"steady-state-hypothesis": {
"title": "...",
"probes": [
{
"type": "probe",
"tolerance": {
"name": "has-subject-alt-names",
"type": "tolerance",
"provider": {
"type": "python",
"module": "chaosreliably.activities.tls.tolerances",
"func": "has_subject_alt_names",
"arguments": {
"alt_names": []
}
}
},
"...": "..."
}
]
}
}
steady-state-hypothesis:
probes:
- '...': '...'
tolerance:
name: has-subject-alt-names
provider:
arguments:
alt_names: []
func: has_subject_alt_names
module: chaosreliably.activities.tls.tolerances
type: python
type: tolerance
type: probe
title: '...'
is_issued_by
¶
Type | tolerance |
Module | chaosreliably.activities.tls.tolerances |
Name | is_issued_by |
Return | boolean |
Validates the issuer of the certificate.
Signature:
def is_issued_by(issuer: str, value: Optional[Dict[str, Any]] = None) -> bool:
pass
Arguments:
Name | Type | Default | Required |
---|---|---|---|
issuer | string | | Yes |
value | object | null | No |
Tolerances declare the value
argument which is automatically injected by Chaos Toolkit as the output of the probe they are evaluating.
Usage:
{
"steady-state-hypothesis": {
"title": "...",
"probes": [
{
"type": "probe",
"tolerance": {
"name": "is-issued-by",
"type": "tolerance",
"provider": {
"type": "python",
"module": "chaosreliably.activities.tls.tolerances",
"func": "is_issued_by",
"arguments": {
"issuer": ""
}
}
},
"...": "..."
}
]
}
}
steady-state-hypothesis:
probes:
- '...': '...'
tolerance:
name: is-issued-by
provider:
arguments:
issuer: ''
func: is_issued_by
module: chaosreliably.activities.tls.tolerances
type: python
type: tolerance
type: probe
title: '...'
verify_certificate
¶
Type | probe |
Module | chaosreliably.activities.tls.probes |
Name | verify_certificate |
Return | boolean |
Performs a range of checks on the certificate of the remote endpoint:
- that we are at least a certain duration away from the certificate expiry date
- that the certificate exports the right alternative names
If any of these values is not set (the default), the corresponding check is not performed. This doesn't apply to the expiration date, which is always checked.
Signature:
def verify_certificate(host: str,
port: int = 443,
expire_after: str = '7d',
alt_names: Optional[List[str]] = None) -> bool:
pass
Arguments:
Name | Type | Default | Required |
---|---|---|---|
host | string | | Yes |
port | integer | 443 | No |
expire_after | string | “7d” | No |
alt_names | object | null | No |
Usage:
{
"name": "verify-certificate",
"type": "probe",
"provider": {
"type": "python",
"module": "chaosreliably.activities.tls.probes",
"func": "verify_certificate",
"arguments": {
"host": ""
}
}
}
name: verify-certificate
provider:
arguments:
host: ''
func: verify_certificate
module: chaosreliably.activities.tls.probes
type: python
type: probe
verify_tls_cert
¶
Type | tolerance |
Module | chaosreliably.activities.tls.tolerances |
Name | verify_tls_cert |
Return | boolean |
Performs a range of checks on the certificate of the remote endpoint:
- that we are at least a certain duration away from the certificate expiry date
- that the certificate exports the right alternative names
- the fingerprint of the certificate
- the certificate was issued by the right issuer
If any of these values is not set (the default), the corresponding check is not performed. This doesn't apply to the expiration date, which is always checked.
Signature:
def verify_tls_cert(expire_after: str = '7d',
alt_names: Optional[List[str]] = None,
fingerprint_sha256: Optional[str] = None,
issuer: Optional[str] = None,
value: Optional[Dict[str, Any]] = None) -> bool:
pass
Arguments:
Name | Type | Default | Required |
---|---|---|---|
expire_after | string | “7d” | No |
alt_names | object | null | No |
fingerprint_sha256 | object | null | No |
issuer | object | null | No |
value | object | null | No |
Tolerances declare the value
argument which is automatically injected by Chaos Toolkit as the output of the probe they are evaluating.
Usage:
{
"steady-state-hypothesis": {
"title": "...",
"probes": [
{
"type": "probe",
"tolerance": {
"name": "verify-tls-cert",
"type": "tolerance",
"provider": {
"type": "python",
"module": "chaosreliably.activities.tls.tolerances",
"func": "verify_tls_cert"
}
},
"...": "..."
}
]
}
}
steady-state-hypothesis:
probes:
- '...': '...'
tolerance:
name: verify-tls-cert
provider:
func: verify_tls_cert
module: chaosreliably.activities.tls.tolerances
type: python
type: tolerance
type: probe
title: '...'
controls¶
capture¶
start_capturing
¶
Type | |
Module | chaosreliably.controls.capture.slack |
Name | start_capturing |
Return | null |
Signature:
def start_capturing(experiment: Dict[str, Any],
configuration: Dict[str, Dict[str, str]],
secrets: Dict[str, Dict[str, str]]) -> None:
pass
Arguments:
Name | Type | Default | Required |
---|---|---|---|
experiment | mapping | | Yes |
Usage:
{
"name": "start-capturing",
"type": "",
"provider": {
"type": "python",
"module": "chaosreliably.controls.capture.slack",
"func": "start_capturing",
"arguments": {
"experiment": {}
}
}
}
name: start-capturing
provider:
arguments:
experiment: {}
func: start_capturing
module: chaosreliably.controls.capture.slack
type: python
type: ''
stop_capturing
¶
Type | |
Module | chaosreliably.controls.capture.slack |
Name | stop_capturing |
Return | Optional[Dict[str, Any]] |
Signature:
def stop_capturing(
start: datetime.datetime, end: datetime.datetime,
experiment: Dict[str, Any], configuration: Dict[str, Dict[str, str]],
secrets: Dict[str, Dict[str, str]]) -> Optional[Dict[str, Any]]:
pass
Arguments:
Name | Type | Default | Required |
---|---|---|---|
start | object | | Yes |
end | object | | Yes |
experiment | mapping | | Yes |
Usage:
{
"name": "stop-capturing",
"type": "",
"provider": {
"type": "python",
"module": "chaosreliably.controls.capture.slack",
"func": "stop_capturing",
"arguments": {
"start": null,
"end": null,
"experiment": {}
}
}
}
name: stop-capturing
provider:
arguments:
end: null
experiment: {}
start: null
func: stop_capturing
module: chaosreliably.controls.capture.slack
type: python
type: ''