
Commit

Merge branch 'main' into rachel.yang/baggage-tests-fix
rachelyangdog authored Nov 20, 2024
2 parents 0f1fe0d + d1e67f6 commit 464d1c0
Showing 339 changed files with 16,580 additions and 262,008 deletions.
2 changes: 1 addition & 1 deletion .github/CODEOWNERS
@@ -2,7 +2,7 @@

/utils/build/docker/cpp/ @DataDog/dd-trace-cpp @DataDog/system-tests-core
/utils/build/docker/dotnet*/ @DataDog/apm-dotnet @DataDog/asm-dotnet @DataDog/system-tests-core
/utils/build/docker/golang*/ @DataDog/apm-go @DataDog/system-tests-core
/utils/build/docker/golang*/ @DataDog/dd-trace-go-guild @DataDog/system-tests-core
/utils/build/docker/java*/ @DataDog/apm-java @DataDog/asm-java @DataDog/system-tests-core
/utils/build/docker/java_otel/ @DataDog/opentelemetry @DataDog/system-tests-core
/utils/build/docker/nodejs*/ @DataDog/apm-js @DataDog/asm-js @DataDog/system-tests-core
8 changes: 0 additions & 8 deletions .github/workflows/compute-workflow-parameters.yml
@@ -47,12 +47,6 @@ on:
parametric_scenarios:
description: ""
value: ${{ jobs.main.outputs.parametric_scenarios }}
dockerssi_scenarios:
description: ""
value: ${{ jobs.main.outputs.dockerssi_scenarios }}
dockerssi_weblogs:
description: ""
value: ${{ jobs.main.outputs.dockerssi_weblogs }}
_experimental_parametric_job_matrix:
description: ""
value: ${{ jobs.main.outputs._experimental_parametric_job_matrix }}
@@ -70,8 +64,6 @@ jobs:
opentelemetry_scenarios: ${{ steps.main.outputs.opentelemetry_scenarios }}
opentelemetry_weblogs: ${{ steps.main.outputs.opentelemetry_weblogs }}
parametric_scenarios: ${{ steps.main.outputs.parametric_scenarios }}
dockerssi_scenarios: ${{ steps.main.outputs.dockerssi_scenarios }}
dockerssi_weblogs: ${{ steps.main.outputs.dockerssi_weblogs }}
_experimental_parametric_job_matrix: ${{ steps.main.outputs._experimental_parametric_job_matrix }}
steps:
- name: Checkout
66 changes: 0 additions & 66 deletions .github/workflows/run-docker-ssi.yml

This file was deleted.

10 changes: 0 additions & 10 deletions .github/workflows/system-tests.yml
@@ -134,16 +134,6 @@ jobs:
weblogs: ${{ needs.compute_parameters.outputs.opentelemetry_weblogs }}
build_proxy_image: ${{ inputs.build_proxy_image }}

docker-ssi:
needs:
- compute_parameters
if: ${{ needs.compute_parameters.outputs.dockerssi_scenarios != '[]' && inputs.binaries_artifact == '' }} # Execute only for latest releases of the SSI
uses: ./.github/workflows/run-docker-ssi.yml
secrets: inherit
with:
library: ${{ inputs.library }}
weblogs: ${{ needs.compute_parameters.outputs.dockerssi_weblogs }}

external-processing:
needs:
- compute_parameters
49 changes: 40 additions & 9 deletions .gitlab-ci.yml
@@ -2,6 +2,7 @@ include:
- remote: https://gitlab-templates.ddbuild.io/libdatadog/include/single-step-instrumentation-tests.yml

stages:
- child_pipelines
- ruby_tracer
- nodejs_tracer
- java_tracer
@@ -45,7 +46,7 @@ onboarding_nodejs:
extends: .base_job_onboarding_system_tests
stage: nodejs_tracer
allow_failure: true
dependencies: []
needs: []
rules:
- if: $CI_PIPELINE_SOURCE == "schedule" && ($ONLY_TEST_LIBRARY == "" || $ONLY_TEST_LIBRARY == "nodejs")
when: always
@@ -84,7 +85,7 @@ onboarding_java:
extends: .base_job_onboarding_system_tests
stage: java_tracer
allow_failure: true
dependencies: []
needs: []
rules:
- if: $CI_PIPELINE_SOURCE == "schedule" && ($ONLY_TEST_LIBRARY == "" || $ONLY_TEST_LIBRARY == "java")
when: always
@@ -122,7 +123,7 @@ onboarding_python:
extends: .base_job_onboarding_system_tests
stage: python_tracer
allow_failure: true
dependencies: []
needs: []
rules:
- if: $CI_PIPELINE_SOURCE == "schedule" && ($ONLY_TEST_LIBRARY == "" || $ONLY_TEST_LIBRARY == "python")
when: always
@@ -150,6 +151,9 @@ onboarding_python:
- ONBOARDING_FILTER_WEBLOG: [test-app-python-multicontainer,test-app-python-multialpine]
SCENARIO: [SIMPLE_INSTALLER_AUTO_INJECTION]
DEFAULT_VMS: ["True", "False"]
- ONBOARDING_FILTER_WEBLOG: [test-app-python-unsupported-defaults,test-app-python-27]
SCENARIO: [INSTALLER_NOT_SUPPORTED_AUTO_INJECTION]
DEFAULT_VMS: ["True", "False"]
script:
- ./build.sh -i runner
- timeout 2700s ./run.sh $SCENARIO --vm-weblog ${ONBOARDING_FILTER_WEBLOG} --vm-env ${ONBOARDING_FILTER_ENV} --vm-library ${TEST_LIBRARY} --vm-provider aws --report-run-url ${CI_PIPELINE_URL} --report-environment ${ONBOARDING_FILTER_ENV} --vm-default-vms ${DEFAULT_VMS}
@@ -158,7 +162,7 @@ onboarding_dotnet:
extends: .base_job_onboarding_system_tests
stage: dotnet_tracer
allow_failure: true
dependencies: []
needs: []
rules:
- if: $CI_PIPELINE_SOURCE == "schedule" && ($ONLY_TEST_LIBRARY == "" || $ONLY_TEST_LIBRARY == "dotnet")
when: always
@@ -194,7 +198,7 @@ onboarding_ruby:
extends: .base_job_onboarding_system_tests
stage: ruby_tracer
allow_failure: true
dependencies: []
needs: []
rules:
- if: $CI_PIPELINE_SOURCE == "schedule" && ($ONLY_TEST_LIBRARY == "" || $ONLY_TEST_LIBRARY == "ruby")
when: always
@@ -215,9 +219,9 @@ onboarding_ruby:
- ONBOARDING_FILTER_WEBLOG: [test-app-ruby]
SCENARIO: [CHAOS_INSTALLER_AUTO_INJECTION]
DEFAULT_VMS: ["True", "False"]
#- ONBOARDING_FILTER_WEBLOG: [test-app-ruby-multicontainer,test-app-ruby-multialpine]
# SCENARIO: [SIMPLE_INSTALLER_AUTO_INJECTION]
# DEFAULT_VMS: ["True", "False"]
- ONBOARDING_FILTER_WEBLOG: [test-app-ruby-multicontainer]
SCENARIO: [SIMPLE_INSTALLER_AUTO_INJECTION]
DEFAULT_VMS: ["True", "False"]
script:
- ./build.sh -i runner
- timeout 2700s ./run.sh $SCENARIO --vm-weblog ${ONBOARDING_FILTER_WEBLOG} --vm-env ${ONBOARDING_FILTER_ENV} --vm-library ${TEST_LIBRARY} --vm-provider aws --report-run-url ${CI_PIPELINE_URL} --report-environment ${ONBOARDING_FILTER_ENV} --vm-default-vms ${DEFAULT_VMS}
@@ -226,7 +230,7 @@ onboarding_php:
extends: .base_job_onboarding_system_tests
stage: php_tracer
allow_failure: true
dependencies: []
needs: []
rules:
- if: $CI_PIPELINE_SOURCE == "schedule" && ($ONLY_TEST_LIBRARY == "" || $ONLY_TEST_LIBRARY == "php")
when: always
@@ -342,3 +346,30 @@ generate_system_tests_images:
- ./utils/build/build_python_base_images.sh --push
- ./lib-injection/build/build_lib_injection_images.sh
when: manual

generate_docker_ssi_pipeline:
image: 486234852809.dkr.ecr.us-east-1.amazonaws.com/ci/test-infra-definitions/runner:a58cc31c
stage: child_pipelines
tags: ["arch:amd64"]
needs: []
script:
- python utils/docker_ssi/docker_ssi_matrix_builder.py --format yaml --output-file ssi_pipeline.yml
artifacts:
paths:
- ssi_pipeline.yml

docker_ssi_pipeline:
stage: child_pipelines
needs: ["generate_docker_ssi_pipeline"]
rules:
- if: $CI_PIPELINE_SOURCE == "schedule"
when: always
- when: manual
allow_failure: true
variables:
PARENT_PIPELINE_SOURCE: $CI_PIPELINE_SOURCE
trigger:
include:
- artifact: ssi_pipeline.yml
job: generate_docker_ssi_pipeline
strategy: depend
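
To sanity-check the generated child pipeline outside CI, the generator job can be reproduced locally; a small sketch (the first command is copied verbatim from `generate_docker_ssi_pipeline` above, and the `head` call is only for inspection):

```bash
# Regenerate the child pipeline exactly as the generate_docker_ssi_pipeline job does,
# then inspect the YAML that docker_ssi_pipeline will trigger as a child pipeline.
python utils/docker_ssi/docker_ssi_matrix_builder.py --format yaml --output-file ssi_pipeline.yml
head -n 40 ssi_pipeline.yml
```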
1 change: 1 addition & 0 deletions .shellcheck
@@ -17,6 +17,7 @@ TODO=(
utils/build/docker/golang/install_ddtrace.sh
utils/build/docker/java/app.sh
utils/build/docker/java/install_ddtrace.sh
utils/build/docker/java/install_drop_in.sh
utils/build/docker/java/spring-boot/app.sh
utils/build/docker/java_otel/spring-boot/app.sh
utils/build/docker/php/apache-mod/build.sh
63 changes: 50 additions & 13 deletions README.md
@@ -1,12 +1,58 @@
## System tests
## What is system-tests?

Workbench designed to run advanced tests (integration, smoke, functional, fuzzing and performance)
A workbench designed to run advanced tests (integration, smoke, functional, fuzzing and performance) against our suite of dd-trace libraries.

## Requirements

`bash`, `docker` and `python3.12`. More info in the [documentation](https://github.com/DataDog/system-tests/blob/main/docs/execute/requirements.md)
`bash`, `docker` and `python3.12`.

## How to use
We recommend installing python3.12 via [pyenv](https://github.com/pyenv/pyenv#getting-pyenv). Pyenv is a tool for managing multiple python versions, and it keeps system-tests dependencies isolated in their own virtual environment. If you don't wish to install pyenv, instructions for downloading python3.12 on your machine can be found below:
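
If you go the pyenv route, a minimal flow might look like this (a sketch; the exact patch version and your shell's pyenv integration may differ):

```bash
# Install the latest 3.12.x and pin it for this checkout
# (assumes pyenv is installed and its shims are initialized in your shell).
pyenv install 3.12
pyenv local 3.12
python --version   # should report Python 3.12.x
```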

#### Ubuntu

```
sudo add-apt-repository ppa:deadsnakes/ppa
sudo apt update
sudo apt install python3.12 python3.12-venv python3.12-dev
curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
python3.12 get-pip.py
./build.sh -i runner
```

#### Windows

TODO

#### Mac

For Homebrew users:

```
brew install python@3.12
pip3.12 install virtualenv
```

## Getting started

### Run a test

Run a test according to the [run documentation](docs/execute/run.md); note that if you're running an [end-to-end test](docs/scenarios/README.md#end-to-end-scenarios), you will need to build the test infrastructure according to the [build documentation](docs/execute/build.md) before you can run the test.

Tests will only run if they are not disabled; see how tests are disabled in [skip-tests.md](docs/edit/skip-tests.md) and how tests are enabled in [enable-test.md](docs/edit/enable-test.md). Alternatively, you can force a disabled test to execute according to the [force-execute documentation](docs/execute/force-execute.md).
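
For example, a typical end-to-end loop looks like this (the library name is illustrative; see the build and run documentation above for the full option set):

```bash
# Build the weblog/agent/runner images once for a given library (python here).
./build.sh python

# Run the default scenario...
./run.sh

# ...or target a single test to keep the logs readable.
./run.sh tests/test_some_feature.py::Test_Feature::test_feature_detail
```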

![Output on success](./utils/assets/output.png?raw=true)

### Edit a test

Refer to the [edit docs](docs/edit/README.md).

### Understand the tests

**[Complete documentation](https://github.com/DataDog/system-tests/blob/main/docs)**

System-tests supports various scenarios for running tests; read more about the different kinds of tests that this repo covers in [scenarios/README.md](docs/scenarios/README.md).

Understand the test architecture at the [architectural overview](https://github.com/DataDog/system-tests/blob/main/docs/architecture/overview.md).

```mermaid
flowchart TD
@@ -24,12 +24,3 @@ flowchart TD
OUTPUT[Test output in bash]
LOGS[Logs directory per scenario]
```

Understand the parts of the tests at the [architectural overview](https://github.com/DataDog/system-tests/blob/main/docs/architecture/overview.md).

More details in [build documentation](https://github.com/DataDog/system-tests/blob/main/docs/execute/build.md) and [run documentation](https://github.com/DataDog/system-tests/blob/main/docs/execute/run.md).

![Output on success](./utils/assets/output.png?raw=true)

**[Complete documentation](https://github.com/DataDog/system-tests/blob/main/docs)**

77 changes: 21 additions & 56 deletions docs/edit/README.md
@@ -1,62 +1,27 @@
## Run the test locally
System tests allow developers to define scenarios and ensure Datadog libraries produce consistent telemetry (that is, traces, metrics, profiles, etc.). This "edit" section addresses the following use-cases:

Please have a look at the [weblog](../execute/)
1. Adding a new test (maybe to support a new or existing feature)
2. Modifying an existing test, whether that's modifying the test client (test*.py files) or the weblog and/or parametric apps that serve the test client's requests
3. Enabling/disabling tests for libraries under various conditions

```bash
./build.sh python # or any other library. This step only needs to be run once, as long as you do not need to modify the lib/agent
./run.sh
```
**Note: Anytime you make changes and open a PR, re-run the linter**: [format.md](./format.md)

That's it. If you're using VS Code with the Python extension, your terminal will automatically switch to the virtual env, and you will be able to use the lint/format tools.
To make changes, you must be able to run tests locally. Instructions for running **end-to-end** tests can be found [here](https://github.com/DataDog/system-tests/blob/main/docs/execute/README.md#run-tests) and for **parametric**, [here](https://github.com/DataDog/system-tests/blob/main/docs/scenarios/parametric.md#running-the-tests).

## Propose a modification
**Callout**

The workflow is very simple: add your test case, commit into a branch and create a PR. We'll review it ASAP.
You'll commonly need to run unmerged changes to your library against system tests (e.g. to ensure the feature is up to spec). Instructions for testing against unmerged changes can be found in [enable-test.md](./enable-test.md).

Depending on how far your test is from an existing one, it'll take some effort. The very first step is to add it and execute it. For instance, in a new file `tests/test_some_feature.py`:

```python
class Test_Feature():
    def test_feature_detail(self):
        assert 1 + 1 == 2
```

Please note that you don't have to rebuild images at each iteration; simply re-run `run.sh`. You can also specify the test you want to run, so you aren't flooded by logs:

```
./run.sh tests/test_some_feature.py::Test_Feature::test_feature_detail
```

You now want to send a request to the [weblog](../edit/weblog.md) and check the result. You need to use an interface validator:

```python
from utils import weblog, interfaces


class Test_Feature():
    def setup_feature_detail(self):
        self.r = weblog.get("/url")

    def test_feature_detail(self):
        """ tests an awesome feature """
        interfaces.library.validate_spans(self.r, lambda span: span["meta"]["http.method"] == "GET")
```

Sometimes you need to [skip a test](./features.md):

```python
from utils import weblog, interfaces, context, bug


class Test_Feature():
    def setup_feature_detail(self):
        self.r = weblog.get("/url")

    @bug(library="ruby", reason="APPSEC-123")
    def test_feature_detail(self):
        """ tests an awesome feature """
        interfaces.library.validate_spans(self.r, lambda span: span["meta"]["http.method"] == "GET")
```

You now have the basics. It probably won't be as easy in practice, and you may need to dive into internals, so please do not hesitate to ask for help on Slack at [#apm-shared-testing](https://dd.slack.com/archives/C025TJ4RZ8X).
## Index
1. [lifecycle.md](./lifecycle.md): Understand how system tests work
2. [add-new-test.md](./add-new-test.md): Add a new test
3. [scenarios.md](./scenarios.md): Add a new scenario
4. [format.md](./format.md): Use the linter
5. [features.md](./features.md): Mark tests for the feature parity dashboard
6. [enable-test.md](./enable-test.md): Enable a test
7. [skip-tests.md](./skip-tests.md): Disable tests
8. [manifest.md](./manifest.md): How tests are marked as enabled or disabled for libraries
9. [troubleshooting.md](./troubleshooting.md): Tips for debugging
10. [iast-validations.md](./iast-validations.md): Mark tests with vulnerabilities
