Merge branch 'main' into rachel.yang/baggage-tests-fix
Showing 339 changed files with 16,580 additions and 262,008 deletions.
@@ -1,12 +1,58 @@
## System tests
## What is system-tests?

Workbench designed to run advanced tests (integration, smoke, functional, fuzzing and performance)
A workbench designed to run advanced tests (integration, smoke, functional, fuzzing and performance) against our suite of dd-trace libraries.

## Requirements

`bash`, `docker` and `python3.12`. More info in the [documentation](https://github.com/DataDog/system-tests/blob/main/docs/execute/requirements.md)
`bash`, `docker` and `python3.12`.
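
A quick way to confirm these prerequisites are available on your machine (output will vary; the `python3.12` command name assumes a standard install):

```bash
# sanity-check the prerequisites
bash --version
docker --version
python3.12 --version
```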

## How to use

We recommend installing Python 3.12 via [pyenv](https://github.com/pyenv/pyenv#getting-pyenv). Pyenv is a tool for managing multiple Python versions and keeping system-tests dependencies isolated in their own virtual environment; a minimal pyenv setup is sketched below.
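
For example (a sketch only: the bare `3.12` version prefix and the use of `pyenv local` are assumptions, see the pyenv docs for the authoritative steps):

```bash
# install a pyenv-managed Python 3.12 (recent pyenv resolves the prefix to the latest 3.12.x)
pyenv install 3.12
# from your system-tests checkout, pin that version for this repository
pyenv local 3.12
```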

If you don't wish to install pyenv, instructions for downloading Python 3.12 on your machine can be found below:

#### Ubuntu

```
sudo add-apt-repository ppa:deadsnakes/ppa   # the deadsnakes PPA provides Python 3.12 packages
sudo apt update
sudo apt install python3.12 python3.12-distutils python3.12-venv python3.12-dev
curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py   # install pip for Python 3.12
python3.12 get-pip.py
./build.sh -i runner   # build the system-tests runner
```

#### Windows

TODO

#### Mac

For Homebrew users:

```
brew install python@3.12
pip3.12 install virtualenv
```

## Getting started

### Run a test

Run a test according to the [run documentation](docs/execute/run.md); note that if you're running an [end-to-end test](docs/scenarios/README.md#end-to-end-scenarios), you will need to build the test infrastructure according to the [build documentation](docs/execute/build.md) before you can run the test.
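
As a concrete sketch, an end-to-end run usually boils down to the two scripts shown below (`python` stands for whichever library you are testing, and the test path is only an example):

```bash
# build the test images for the library under test
./build.sh python
# run the tests (re-run this as you iterate; no rebuild needed unless the lib/agent changes)
./run.sh
# or target a single test to keep the logs small
./run.sh tests/test_some_feature.py::Test_Feature::test_feature_detail
```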

Tests will only run if they are not disabled; see how tests are disabled in [skip-tests.md](docs/edit/skip-tests.md) and how tests are enabled in [enable-test.md](docs/edit/enable-test.md). Alternatively, you can force a disabled test to execute according to the [force-execute documentation](docs/execute/force-execute.md).

![Output on success](./utils/assets/output.png?raw=true)

### Edit a test

Refer to the [edit docs](docs/edit/README.md).

### Understand the tests

**[Complete documentation](https://github.com/DataDog/system-tests/blob/main/docs)**

System-tests supports various scenarios for running tests; read more about the different kinds of tests that this repo covers in [scenarios/README.md](scenarios/README.md).
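
Assuming `run.sh` takes a scenario name as its first argument (an assumption here; check [scenarios/README.md](scenarios/README.md) for the actual names and invocation), selecting a scenario looks like this:

```bash
# run the default scenario
./run.sh
# run a named scenario (the name below is illustrative)
./run.sh APPSEC_BLOCKING
```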

Understand the test architecture at the [architectural overview](https://github.com/DataDog/system-tests/blob/main/docs/architecture/overview.md).

```mermaid
flowchart TD
@@ -24,12 +70,3 @@ flowchart TD
OUTPUT[Test output in bash]
LOGS[Logs directory per scenario]
```

Understand the parts of the tests at the [architectural overview](https://github.com/DataDog/system-tests/blob/main/docs/architecture/overview.md).

More details in [build documentation](https://github.com/DataDog/system-tests/blob/main/docs/execute/build.md) and [run documentation](https://github.com/DataDog/system-tests/blob/main/docs/execute/run.md).

![Output on success](./utils/assets/output.png?raw=true)

**[Complete documentation](https://github.com/DataDog/system-tests/blob/main/docs)**
@@ -1,62 +1,27 @@
## Run the test locally
System tests allow developers to define scenarios and ensure Datadog libraries produce consistent telemetry (that is, traces, metrics, profiles, etc.). This "edit" section addresses the following use-cases:

Please have a look at the [weblog](../execute/)
1. Adding a new test (maybe to support a new or existing feature)
2. Modifying an existing test, whether that's modifying the test client (test*.py files) or the weblog and/or parametric apps that serve the test client requests
3. Enabling/disabling tests for libraries under various conditions

```bash
./build.sh python  # or any other library; this only needs to be run once, as long as you do not need to modify the lib/agent
./run.sh
```
**Note: Anytime you make changes and open a PR, re-run the linter**: [format.md](docs/edit/format.md)

That's it. If you're using VS Code with the Python extension, your terminal will automatically switch to the virtual env, and you will be able to use lint/format tools.
To make changes, you must be able to run tests locally. Instructions for running **end-to-end** tests can be found [here](https://github.com/DataDog/system-tests/blob/main/docs/execute/README.md#run-tests) and for **parametric**, [here](https://github.com/DataDog/system-tests/blob/main/docs/scenarios/parametric.md#running-the-tests).

## Propose a modification
**Callout**

The workflow is very simple: add your test case, commit it to a branch and create a PR. We'll review it ASAP.
You'll commonly need to run unmerged changes to your library against system tests (e.g. to ensure the feature is up to spec). Instructions for testing against unmerged changes can be found in [enable-test.md](./enable-test.md).

Depending on how far your test is from existing tests, it may require some effort. The very first step is to add it and execute it. For instance, in a new file `tests/test_some_feature.py`:

```python
class Test_Feature():
    def test_feature_detail(self):
        assert 1 + 1 == 2
```

Please note that you don't have to rebuild images at each iteration; simply re-run `run.sh`. You can also specify the test you want to run, so you are not flooded by logs:

```
./run.sh tests/test_some_feature.py::Test_Feature::test_feature_detail
```

You now want to send something to the [weblog](../edit/weblog.md) and check it. You need to use an interface validator:

```python
from utils import weblog, interfaces


class Test_Feature():
    def setup_feature_detail(self):
        self.r = weblog.get("/url")

    def test_feature_detail(self):
        """ tests an awesome feature """
        interfaces.library.validate_spans(self.r, lambda span: span["meta"]["http.method"] == "GET")
```

Sometimes [skipping a test](./features.md) is needed:

```python
from utils import weblog, interfaces, context, bug


class Test_Feature():

    def setup_feature_detail(self):
        self.r = weblog.get("/url")

    @bug(library="ruby", reason="APPSEC-123")
    def test_feature_detail(self):
        """ tests an awesome feature """
        interfaces.library.validate_spans(self.r, lambda span: span["meta"]["http.method"] == "GET")
```

You now have the basics. It probably won't be as easy, and you may need to dive into internals, so please do not hesitate to ask for help on Slack at [#apm-shared-testing](https://dd.slack.com/archives/C025TJ4RZ8X).
## Index
1. [lifecycle.md](./lifecycle.md): Understand how system tests work
2. [add-new-test.md](./add-new-test.md): Add a new test
3. [scenarios.md](./scenarios.md): Add a new scenario
4. [format.md](./format.md): Use the linter
5. [features.md](./features.md): Mark tests for the feature parity dashboard
6. [enable-test.md](./enable-test.md): Enable a test
7. [skip-tests.md](./skip-tests.md): Disable tests
8. [manifest.md](./manifest.md): How tests are marked as enabled or disabled for libraries
9. [features.md](./features.md): Mark tests for the feature parity dashboard
10. [format.md](./format.md): Use the linter
11. [troubleshooting.md](./troubleshooting.md): Tips for debugging
12. [iast-validations.md](./iast-validations.md): Mark tests with vulnerabilities