
[WIP] timeout tests #8

Status: Draft. Wants to merge 32 commits into base: main.

Commits (32):
ebd32cc
[uss_qualifier/netrid/dss_interoperability] Add check for DSS0210 req…
mickmis Aug 15, 2023
983ecc5
Merge remote-tracking branch 'interuss/main' into dss0210
mickmis Aug 15, 2023
448c1d2
Merge remote-tracking branch 'interuss/main' into dss0210
mickmis Aug 17, 2023
3fcaaaa
debug attempt for CI
mickmis Aug 17, 2023
2746872
another debug attempt CI
mickmis Aug 17, 2023
b59240f
more data loggued
mickmis Aug 17, 2023
05382bc
fix
mickmis Aug 17, 2023
7f33a22
fix
mickmis Aug 17, 2023
a39c520
fix
mickmis Aug 17, 2023
5f09a29
add data
mickmis Aug 17, 2023
eb6fe9f
try prev. version
mickmis Aug 17, 2023
7aa074a
update a limit
mickmis Aug 17, 2023
1be5741
attempt ulimits
mickmis Aug 17, 2023
7b27ee7
other
mickmis Aug 17, 2023
8129f5e
limit
mickmis Aug 17, 2023
ed2bc95
value
mickmis Aug 17, 2023
cf3b516
add docker logs
mickmis Aug 17, 2023
7c901f6
otherlimit
mickmis Aug 17, 2023
b740109
fix
mickmis Aug 17, 2023
0cf3b63
logs
mickmis Aug 17, 2023
6d5e3b6
Merge remote-tracking branch 'interuss/main' into timeouttests
mickmis Oct 10, 2023
623ba8c
format
mickmis Oct 10, 2023
a65ad9e
try installing modules
mickmis Oct 13, 2023
ab05552
disable ipv6
mickmis Oct 13, 2023
aca5af7
merge run configs
mickmis Oct 13, 2023
a03f2a0
timeout++
mickmis Oct 13, 2023
0cda9c8
increase gunicorn log level
mickmis Oct 13, 2023
f9b1b20
format
mickmis Oct 13, 2023
72cb679
increase threads
mickmis Oct 13, 2023
96b45a6
better gunicorn?
mickmis Oct 13, 2023
94a4f94
try log multiprocessing
mickmis Oct 16, 2023
cfff348
try out gevent
mickmis Oct 16, 2023
27 changes: 27 additions & 0 deletions .github/workflows/ci.yml
@@ -54,6 +54,33 @@ jobs:
with:
name: uss_qualifier
script: |
sudo apt install linux-modules-extra-$(uname -r)
sudo cat /proc/sys/net/netfilter/nf_conntrack_max
sudo cat /proc/sys/net/core/somaxconn
sudo cat /proc/sys/net/ipv4/tcp_max_syn_backlog

sudo sysctl net.ipv6.conf.all.disable_ipv6=1
sudo sysctl net.ipv6.conf.default.disable_ipv6=1
sudo sysctl net.ipv6.conf.lo.disable_ipv6=1

sudo sysctl net.ipv4.ip_local_port_range
sudo sysctl net.ipv4.tcp_fin_timeout
sudo sysctl net.ipv4.tcp_max_syn_backlog=16384
sudo cat /proc/sys/net/ipv4/tcp_max_syn_backlog
sudo sysctl -w net.ipv4.neigh.default.gc_thresh3=4096
sudo sysctl fs.inotify.max_user_instances=1048576
sudo prlimit --pid $$ --nofile=1048576:1048576
sudo sysctl fs.inotify.max_user_instances=1280
sudo sysctl fs.inotify.max_user_watches=655360
sudo sysctl net.core.netdev_max_backlog=65536

sudo sysctl -p

sudo tcpdump -nn -i any -w /tmp/sntp.cap &
sudo sh -c 'while true; do ss -s ; ss -ti ; sysctl fs.file-nr ; netstat -s ; date ; sleep 10; done' &
sudo sh -c 'while true; do lsof -n | wc -l ; date ; sleep 30; done' &
sleep 1

export CONFIG_NAME="" \
USS_QUALIFIER_STOP_FAST=true

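The script above probes several kernel tunables (conntrack table size, listen backlog, SYN backlog) before raising them, since exhausting any of them produces the connection timeouts this PR is chasing. A minimal sketch (not part of the PR) that collects the same values programmatically via `/proc`, assuming a Linux host:

```python
# Sketch: read the kernel limits the CI script inspects via /proc.
# Paths mirror the sysctl keys used in the workflow above; values
# come back as None on hosts without a /proc filesystem.
from pathlib import Path
from typing import Dict, Optional

SYSCTL_PATHS = {
    "net.netfilter.nf_conntrack_max": "/proc/sys/net/netfilter/nf_conntrack_max",
    "net.core.somaxconn": "/proc/sys/net/core/somaxconn",
    "net.ipv4.tcp_max_syn_backlog": "/proc/sys/net/ipv4/tcp_max_syn_backlog",
}


def read_limits() -> Dict[str, Optional[int]]:
    limits: Dict[str, Optional[int]] = {}
    for key, path in SYSCTL_PATHS.items():
        p = Path(path)
        limits[key] = int(p.read_text()) if p.exists() else None
    return limits


if __name__ == "__main__":
    for key, value in read_limits().items():
        print(f"{key} = {value}")
```

This only reads values; raising them still requires `sysctl -w` with root, as in the workflow.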
20 changes: 20 additions & 0 deletions .github/workflows/monitoring-test.yml
@@ -14,6 +14,7 @@ on:

jobs:
monitoring-test:
# runs-on: ubuntu-20.04
runs-on: ubuntu-latest
name: ${{ inputs.name }} test
steps:
@@ -47,3 +48,22 @@ jobs:
path: |
monitoring/uss_qualifier/output
monitoring/prober/output
# todo: remove me
- name: Prepare capture
if: always()
run: |
if [ -f /tmp/sntp.cap ]; then
sleep 1
sudo kill -2 $(pgrep tcpdump)
journalctl -x > /tmp/alllogs
sleep 1
fi
# todo: docker logs
- name: Upload capture
if: always()
uses: actions/upload-artifact@v3
with:
name: capture-${{ inputs.name }}
path: |
/tmp/sntp.cap
/tmp/alllogs
8 changes: 8 additions & 0 deletions build/dev/docker-compose.yaml
@@ -17,6 +17,10 @@ services:
restart: always
networks:
- dss_internal_network
ulimits:
nofile:
soft: 10000
hard: 10000

rid_bootstrapper:
image: interuss/dss:v0.8.0-rc2
@@ -61,6 +65,10 @@
interop_ecosystem_network:
aliases:
- dss.uss2.localutm
ulimits:
nofile:
soft: 10000
hard: 10000

oauth:
hostname: oauth.authority.localutm
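The `ulimits.nofile` stanzas raise the per-container open-file-descriptor limit to 10000, which matters here because every in-flight TCP connection consumes a descriptor. A process can verify its effective limit with the stdlib `resource` module; a small sketch (not from the PR, Unix-only):

```python
# Sketch: inspect the effective open-file (nofile) limits of the
# current process, e.g. inside a container started from this
# compose file to confirm the ulimits stanza took effect.
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"nofile soft={soft} hard={hard}")
```

Running this inside one of the patched containers should report `soft=10000 hard=10000`.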
5 changes: 5 additions & 0 deletions monitoring/mock_uss/__init__.py
@@ -1,4 +1,6 @@
import inspect
import logging
import multiprocessing
import os
from typing import Any, Optional, Callable
from loguru import logger
@@ -17,6 +19,9 @@
webapp = MockUSS(__name__)
enabled_services = set()

mp_logger = multiprocessing.log_to_stderr()
mp_logger.setLevel(logging.DEBUG)


def import_environment_variable(
var_name: str,
17 changes: 17 additions & 0 deletions monitoring/mock_uss/gunicorn.conf.py
@@ -8,6 +8,23 @@

from monitoring.mock_uss import webapp

loglevel = "debug"


workers = 2


threads = 4


worker_tmp_dir = "/dev/shm"


worker_class = "gevent"


preload_app = True


def on_starting(server: Arbiter):
"""gunicorn server hook called just before master process is initialized."""
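These settings move mock_uss to 2 preloaded gevent workers with debug logging and `/dev/shm` as the worker heartbeat directory. For comparison, a common gunicorn worker-sizing heuristic derives the count from available CPUs; a sketch (an assumption for illustration, not the PR's fixed `workers = 2`):

```python
# Sketch: the widely used (2 * CPUs + 1) gunicorn worker heuristic,
# shown for comparison with the fixed `workers = 2` above.
import multiprocessing


def suggested_workers() -> int:
    return multiprocessing.cpu_count() * 2 + 1
```

Note that gunicorn's `threads` setting only applies to the gthread worker class; with `worker_class = "gevent"`, per-worker concurrency comes from greenlets instead, so the `threads = 4` line above is likely inert.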
1 change: 1 addition & 0 deletions monitoring/mock_uss/run_locally_test_geoawareness.sh
@@ -23,6 +23,7 @@ docker container rm -f ${container_name} || echo "${container_name} container wa

# shellcheck disable=SC2086
docker run ${docker_args} --rm --name ${container_name} \
--ulimit nofile=10000 \
-e MOCK_USS_PUBLIC_KEY="${PUBLIC_KEY}" \
-e MOCK_USS_TOKEN_AUDIENCE="${AUD}" \
-e MOCK_USS_SERVICES="geoawareness" \
2 changes: 1 addition & 1 deletion monitoring/monitorlib/fetch/__init__.py
@@ -18,7 +18,7 @@
from monitoring.monitorlib import infrastructure
from monitoring.monitorlib.rid import RIDVersion

-TIMEOUTS = (5, 5)  # Timeouts of `connect` and `read` in seconds
+TIMEOUTS = (5, 25)  # Timeouts of `connect` and `read` in seconds
ATTEMPTS = (
2 # Number of attempts to query when experiencing a retryable error like a timeout
)
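The change keeps the connect timeout at 5 s but raises the read timeout to 25 s, following the `(connect, read)` tuple convention used by `requests`. The two phases govern different failure modes; a raw-socket sketch of the distinction (host and port are hypothetical, not from the PR):

```python
# Sketch: the two phases a (connect, read) timeout pair governs,
# expressed with raw sockets. The connect deadline bounds TCP
# handshake time; the read deadline bounds slow responses.
import socket

CONNECT_TIMEOUT, READ_TIMEOUT = 5, 25


def fetch_raw(host: str, port: int, payload: bytes) -> bytes:
    sock = socket.create_connection((host, port), timeout=CONNECT_TIMEOUT)
    try:
        # After the handshake succeeds, relax the deadline so a slow
        # but live server has time to answer.
        sock.settimeout(READ_TIMEOUT)
        sock.sendall(payload)
        return sock.recv(4096)
    finally:
        sock.close()
```

Raising only the read timeout is consistent with the PR's diagnosis: connections are established quickly, but responses are delayed under load.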
78 changes: 61 additions & 17 deletions monitoring/uss_qualifier/main.py
@@ -4,6 +4,7 @@
import json
import os
import sys
from typing import Optional

from implicitdict import ImplicitDict
from loguru import logger
@@ -110,12 +111,16 @@ def execute_test_run(
)


def main() -> int:
args = parseArgs()

config_src = load_dict_with_references(args.config)
def run_config(
config_name: str,
config_output: str,
report_path: str,
skip_validation: bool,
exit_before_execution: bool,
):
config_src = load_dict_with_references(config_name)

if not args.skip_validation:
if not skip_validation:
logger.info("Validating configuration...")
validation_errors = validate_config(config_src)
if validation_errors:
@@ -127,33 +132,33 @@ def main() -> int:

whole_config = ImplicitDict.parse(config_src, USSQualifierConfiguration)

if args.config_output:
logger.info("Writing flattened configuration to {}", args.config_output)
if args.config_output.lower().endswith(".json"):
with open(args.config_output, "w") as f:
if config_output:
logger.info("Writing flattened configuration to {}", config_output)
if config_output.lower().endswith(".json"):
with open(config_output, "w") as f:
json.dump(whole_config, f, indent=2, sort_keys=True)
elif args.config_output.lower().endswith(".yaml"):
with open(args.config_output, "w") as f:
elif config_output.lower().endswith(".yaml"):
with open(config_output, "w") as f:
yaml.dump(json.loads(json.dumps(whole_config)), f, sort_keys=True)
else:
raise ValueError(
"Unsupported extension for --config-output; only .json or .yaml file paths may be specified"
)

if args.exit_before_execution:
if exit_before_execution:
logger.info("Exiting because --exit-before-execution specified.")
return os.EX_OK
return

config = whole_config.v1
if args.report:
if report_path:
if not config.artifacts:
config.artifacts = ArtifactsConfiguration(
ReportConfiguration(report_path=args.report)
ReportConfiguration(report_path=report_path)
)
elif not config.artifacts.report:
config.artifacts.report = ReportConfiguration(report_path=args.report)
config.artifacts.report = ReportConfiguration(report_path=report_path)
else:
config.artifacts.report.report_path = args.report
config.artifacts.report.report_path = report_path

do_not_save_report = False
if config.test_run:
@@ -204,6 +209,45 @@
logger.info(f"Writing tested requirements view to {path}")
generate_tested_requirements(report, config.artifacts.tested_requirements)


def main() -> int:
args = parseArgs()

config_names = str(args.config).split(",")

if args.config_output:
config_outputs = str(args.config_output).split(",")
if len(config_outputs) != len(config_names):
raise ValueError(
f"Need matching number of config_output, expected {len(config_names)}, got {len(config_outputs)}"
)
else:
config_outputs = ["" for _ in config_names]

if args.report:
report_paths = str(args.report).split(",")
if len(report_paths) != len(config_names):
raise ValueError(
f"Need matching number of report, expected {len(config_names)}, got {len(report_paths)}"
)
else:
report_paths = ["" for _ in config_names]

for idx, config_name in enumerate(config_names):
logger.info(
f"========== Running uss_qualifier for configuration {config_name} =========="
)
run_config(
config_name,
config_outputs[idx],
report_paths[idx],
args.skip_validation,
args.exit_before_execution,
)
logger.info(
f"========== Completed uss_qualifier for configuration {config_name} =========="
)

return os.EX_OK


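The refactored `main()` splits comma-separated `--config`, `--config-output`, and `--report` values and requires the optional lists to match the number of configurations. That validation pattern can be factored into one helper; a sketch mirroring the PR's logic (the function name is illustrative, not in the diff):

```python
# Sketch of the PR's comma-separated CLI validation: an optional
# value must either be absent (yielding empty placeholders) or
# provide exactly one entry per configuration.
from typing import List


def split_matching(value: str, n_expected: int, flag_name: str) -> List[str]:
    if not value:
        return ["" for _ in range(n_expected)]
    parts = value.split(",")
    if len(parts) != n_expected:
        raise ValueError(
            f"Need matching number of {flag_name}, "
            f"expected {n_expected}, got {len(parts)}"
        )
    return parts
```

Used three times, this would replace the duplicated `if args.…` blocks above with one call per flag.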
116 changes: 56 additions & 60 deletions monitoring/uss_qualifier/run_locally.sh
@@ -24,71 +24,67 @@ CONFIG_NAME="${1:-ALL}"
OTHER_ARGS=${@:2}

if [ "$CONFIG_NAME" == "ALL" ]; then
declare -a all_configurations=( \
"configurations.dev.noop" \
"configurations.dev.dss_probing" \
"configurations.dev.geoawareness_cis" \
"configurations.dev.generate_rid_test_data" \
"configurations.dev.geospatial_comprehension" \
"configurations.dev.general_flight_auth" \
"configurations.dev.f3548" \
"configurations.dev.f3548_self_contained" \
"configurations.dev.netrid_v22a" \
"configurations.dev.uspace" \
)
# TODO: Add configurations.dev.netrid_v19
echo "Running configurations: ${all_configurations[*]}"
for configuration_name in "${all_configurations[@]}"; do
monitoring/uss_qualifier/run_locally.sh "$configuration_name"
done
else
CONFIG_FLAG="--config ${CONFIG_NAME}"
CONFIG_NAME="\
configurations.dev.noop,\
configurations.dev.dss_probing,\
configurations.dev.geoawareness_cis,\
configurations.dev.generate_rid_test_data,\
configurations.dev.geospatial_comprehension,\
configurations.dev.general_flight_auth,\
configurations.dev.f3548,\
configurations.dev.f3548_self_contained,\
configurations.dev.netrid_v22a,\
configurations.dev.uspace"
fi
# TODO: Add configurations.dev.netrid_v19

AUTH_SPEC='DummyOAuth(http://oauth.authority.localutm:8085/token,uss_qualifier)'
echo "Running configuration(s): ${CONFIG_NAME}"

QUALIFIER_OPTIONS="$CONFIG_FLAG $OTHER_ARGS"
CONFIG_FLAG="--config ${CONFIG_NAME}"

OUTPUT_DIR="monitoring/uss_qualifier/output"
mkdir -p "$OUTPUT_DIR"
AUTH_SPEC='DummyOAuth(http://oauth.authority.localutm:8085/token,uss_qualifier)'

CACHE_DIR="monitoring/uss_qualifier/.templates_cache"
mkdir -p "$CACHE_DIR"
QUALIFIER_OPTIONS="$CONFIG_FLAG $OTHER_ARGS"

if [ "$CI" == "true" ]; then
docker_args="--add-host host.docker.internal:host-gateway" # Required to reach other containers in Ubuntu (used for Github Actions)
else
docker_args="-it"
fi
OUTPUT_DIR="monitoring/uss_qualifier/output"
mkdir -p "$OUTPUT_DIR"

CACHE_DIR="monitoring/uss_qualifier/.templates_cache"
mkdir -p "$CACHE_DIR"

start_time=$(date +%Y-%m-%dT%H:%M:%S)
echo "========== Running uss_qualifier for configuration ${CONFIG_NAME} =========="
# shellcheck disable=SC2086
docker run ${docker_args} --name uss_qualifier \
--rm \
--network interop_ecosystem_network \
-u "$(id -u):$(id -g)" \
-e PYTHONBUFFERED=1 \
-e AUTH_SPEC=${AUTH_SPEC} \
-e USS_QUALIFIER_STOP_FAST=${USS_QUALIFIER_STOP_FAST:-} \
-e MONITORING_GITHUB_ROOT=${MONITORING_GITHUB_ROOT:-} \
-v "$(pwd)/$OUTPUT_DIR:/app/$OUTPUT_DIR" \
-v "$(pwd)/$CACHE_DIR:/app/$CACHE_DIR" \
-w /app/monitoring/uss_qualifier \
interuss/monitoring \
python main.py $QUALIFIER_OPTIONS
echo "========== Completed uss_qualifier for configuration ${CONFIG_NAME} =========="

# Set return code according to whether the test run was fully successful
reports_generated=$(find ./monitoring/uss_qualifier/output/report*.json -newermt "$start_time")
# shellcheck disable=SC2068
for REPORT in ${reports_generated[@]}; do
successful=$(python build/dev/extract_json_field.py report.*.successful "$REPORT")
if echo "${successful}" | grep -iqF true; then
echo "Full success indicated by $REPORT"
else
echo "Could not establish that all uss_qualifier tests passed in $REPORT"
exit 1
fi
done
if [ "$CI" == "true" ]; then
docker_args="--add-host host.docker.internal:host-gateway" # Required to reach other containers in Ubuntu (used for Github Actions)
else
docker_args="-it"
fi

start_time=$(date +%Y-%m-%dT%H:%M:%S)
# shellcheck disable=SC2086
docker run ${docker_args} --name uss_qualifier \
--ulimit nofile=10000 \
--rm \
--network interop_ecosystem_network \
-u "$(id -u):$(id -g)" \
-e PYTHONBUFFERED=1 \
-e AUTH_SPEC=${AUTH_SPEC} \
-e USS_QUALIFIER_STOP_FAST=${USS_QUALIFIER_STOP_FAST:-} \
-e MONITORING_GITHUB_ROOT=${MONITORING_GITHUB_ROOT:-} \
-v "$(pwd)/$OUTPUT_DIR:/app/$OUTPUT_DIR" \
-v "$(pwd)/$CACHE_DIR:/app/$CACHE_DIR" \
-w /app/monitoring/uss_qualifier \
interuss/monitoring \
python main.py $QUALIFIER_OPTIONS

# Set return code according to whether the test run was fully successful
reports_generated=$(find ./monitoring/uss_qualifier/output/report*.json -newermt "$start_time")
# shellcheck disable=SC2068
for REPORT in ${reports_generated[@]}; do
successful=$(python build/dev/extract_json_field.py report.*.successful "$REPORT")
if echo "${successful}" | grep -iqF true; then
echo "Full success indicated by $REPORT"
else
echo "Could not establish that all uss_qualifier tests passed in $REPORT"
exit 1
fi
done

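The final loop greps each freshly generated report for a truthy `successful` field via `build/dev/extract_json_field.py report.*.successful`. A pure-Python equivalent of that check, assuming the schema implied by the field path (one or more actions under a top-level `report` key, each carrying a boolean flag; this schema detail is an assumption, not stated in the diff):

```python
# Sketch: pure-Python version of the shell success check, assuming
# the report schema implied by `report.*.successful`.
import json


def report_fully_successful(path: str) -> bool:
    with open(path) as f:
        data = json.load(f)
    actions = data.get("report", {})
    # Every action must exist and report success.
    return bool(actions) and all(
        bool(action.get("successful")) for action in actions.values()
    )
```

Like the shell loop, this treats a missing or false flag in any action as overall failure.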