test windows #18878

Closed. Wants to merge 29 commits.

Commits (29):
- a5c995f commit (nenadnoveljic, Oct 17, 2024)
- 28110e0 linter (nenadnoveljic, Oct 17, 2024)
- da4cbfc get compose logs (nenadnoveljic, Oct 17, 2024)
- fa0ff4a linter (nenadnoveljic, Oct 17, 2024)
- 3a7249c linter (nenadnoveljic, Oct 17, 2024)
- f597c7a import SubprocessError (nenadnoveljic, Oct 17, 2024)
- 185d89a fix SubprocessError (nenadnoveljic, Oct 17, 2024)
- 06236da remove -f (nenadnoveljic, Oct 17, 2024)
- ced15ff capture output (nenadnoveljic, Oct 17, 2024)
- 7bb988b linter (nenadnoveljic, Oct 17, 2024)
- c6bfffc docker compose output (nenadnoveljic, Oct 17, 2024)
- 4ebee2c remove import subprocesserror (nenadnoveljic, Oct 17, 2024)
- 00ecfb4 removed capture (nenadnoveljic, Oct 17, 2024)
- 92f634f force ci (nenadnoveljic, Oct 17, 2024)
- 9c06709 capture (nenadnoveljic, Oct 18, 2024)
- 51d9dac propagate capture (nenadnoveljic, Oct 19, 2024)
- 948111f propagate capture (nenadnoveljic, Oct 19, 2024)
- ff23ae1 self.capture (nenadnoveljic, Oct 19, 2024)
- 6ab3b42 linter (nenadnoveljic, Oct 19, 2024)
- 4a3c529 test case for docker_run capture=True (nenadnoveljic, Oct 19, 2024)
- d5ef46a restored README.md (nenadnoveljic, Oct 20, 2024)
- 4ce344e [mongo] add mongo recommended cluster monitors (#18858) (lu-zhengda, Oct 17, 2024)
- 32cf609 Fix test results for .NET Core. (#18802) (nubtron, Oct 18, 2024)
- 3567d3a [Release] Bumped postgres version to 22.0.2 and tibco_ems version to … (HadhemiDD, Oct 18, 2024)
- b31c289 [NDM] [Cisco ACI] Utilize raw ID for interface metadata (#18842) (zoedt, Oct 18, 2024)
- f05753e [postgres] Fix UnboundLocalError caused by referencing local variable… (lu-zhengda, Oct 18, 2024)
- f929f0a [CONTP-382] Add Validation Webhook telemetry metrics (#18867) (gabedos, Oct 18, 2024)
- 53a751a test windows (lu-zhengda, Oct 20, 2024)
- f8b0ec6 trigger test (lu-zhengda, Oct 20, 2024)
15 changes: 11 additions & 4 deletions .github/workflows/test-target.yml
@@ -102,18 +102,25 @@ jobs:
DDEV_E2E_AGENT_PY2: "${{ inputs.platform == 'windows' && (inputs.agent-image-windows-py2 || 'datadog/agent-dev:master-py2-win-servercore') || inputs.agent-image-py2 }}"
# Test results for later processing
TEST_RESULTS_BASE_DIR: "test-results"
TEST_RESULTS_DIR: "test-results/${{ inputs.job-name }}"
# Tracing to monitor our test suite
DD_ENV: "ci"
DD_SERVICE: "ddev-integrations-${{ inputs.repo }}"
DD_TAGS: "team:agent-integrations,platform:${{ inputs.platform }},integration:${{ inputs.target }}"
DD_TRACE_ANALYTICS_ENABLED: "true"
# Capture traces for a separate job to do the submission
TRACE_CAPTURE_BASE_DIR: ".trace-captures"
TRACE_CAPTURE_FILE: ".trace-captures/${{ inputs.job-name }}"
TRACE_CAPTURE_LOG: ".trace-captures/output.log"
TRACE_CAPTURE_BASE_DIR: "trace-captures"
TRACE_CAPTURE_LOG: "trace-captures/output.log"

steps:

- name: Set environment variables with sanitized paths
run: |
# We want to replace leading dots as they will make directories hidden, which will cause them to be ignored by upload-artifact and EnricoMi/publish-unit-test-result-action
JOB_NAME=$(echo "${{ inputs.job-name }}" | sed 's/^\./Dot/')

echo "TEST_RESULTS_DIR=$TEST_RESULTS_BASE_DIR/$JOB_NAME" >> $GITHUB_ENV
echo "TRACE_CAPTURE_FILE=$TRACE_CAPTURE_BASE_DIR/$JOB_NAME" >> $GITHUB_ENV

- name: Set up Windows
if: runner.os == 'Windows'
run: |-
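The leading-dot sanitization above can be sketched in Python (a hypothetical helper mirroring the `sed 's/^\./Dot/'` expression; the workflow itself runs the shell version):

```python
import re

def sanitize_job_name(job_name: str) -> str:
    """Replace a leading dot so the results directory is not hidden.

    Hidden (dot-prefixed) directories are ignored by upload-artifact and
    EnricoMi/publish-unit-test-result-action, which is why the workflow
    rewrites names such as '.NET' before building paths from them.
    """
    return re.sub(r'^\.', 'Dot', job_name)

print(sanitize_job_name('.NET Core'))  # DotNET Core
print(sanitize_job_name('py3.12'))     # py3.12 (inner dots are untouched)
```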
1 change: 1 addition & 0 deletions cisco_aci/changelog.d/18842.added
@@ -0,0 +1 @@
[NDM] [Cisco ACI] Utilize raw ID for interface metadata
3 changes: 3 additions & 0 deletions cisco_aci/datadog_checks/cisco_aci/models.py
@@ -135,8 +135,11 @@ class Status(StrEnum):
class InterfaceMetadata(BaseModel):
device_id: Optional[str] = Field(default=None)
id_tags: list = Field(default_factory=list)
raw_id: Optional[str] = Field(default=None)
raw_id_type: Optional[str] = Field(default='cisco_aci')
index: Optional[int] = Field(default=None)
name: Optional[str] = Field(default=None)
alias: Optional[str] = Field(default=None)
description: Optional[str] = Field(default=None)
mac_address: Optional[str] = Field(default=None)
admin_status: Optional[AdminStatus] = Field(default=None)
2 changes: 2 additions & 0 deletions cisco_aci/datadog_checks/cisco_aci/ndm.py
@@ -48,9 +48,11 @@ def create_interface_metadata(phys_if, address, namespace):
eth = PhysIf(**phys_if.get('l1PhysIf', {}))
interface = InterfaceMetadata(
device_id='{}:{}'.format(namespace, address),
raw_id=eth.attributes.id,
id_tags=['interface:{}'.format(eth.attributes.name)],
index=eth.attributes.id,
name=eth.attributes.name,
alias=eth.attributes.id,
description=eth.attributes.desc,
mac_address=eth.attributes.router_mac,
admin_status=eth.attributes.admin_st,
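A minimal sketch of how the new `raw_id`/`raw_id_type` fields flow into the interface metadata (stdlib dataclasses stand in for the pydantic model the integration actually uses; the field names and defaults are taken from the diff):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class InterfaceMetadata:
    device_id: Optional[str] = None
    raw_id: Optional[str] = None
    raw_id_type: str = 'cisco_aci'  # defaulted, as in the pydantic model
    name: Optional[str] = None

# Mirrors create_interface_metadata: raw_id carries the device-reported
# interface id (e.g. 'eth1/1') alongside the namespaced device_id.
interface = InterfaceMetadata(
    device_id='default:10.0.200.0',
    raw_id='eth1/1',
    name='eth1/1',
)
print(interface.raw_id, interface.raw_id_type)  # eth1/1 cisco_aci
```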
18 changes: 18 additions & 0 deletions cisco_aci/tests/fixtures/metadata.py
@@ -137,6 +137,8 @@
INTERFACE_METADATA = [
{
'admin_status': 1,
'alias': 'eth1/1',
'raw_id': 'eth1/1',
'device_id': 'default:10.0.200.0',
'id_tags': [
'interface:eth1/1',
@@ -150,6 +152,8 @@
},
{
'admin_status': 1,
'alias': 'eth1/2',
'raw_id': 'eth1/2',
'device_id': 'default:10.0.200.0',
'id_tags': [
'interface:eth1/2',
@@ -163,6 +167,8 @@
},
{
'admin_status': 1,
'alias': 'eth1/3',
'raw_id': 'eth1/3',
'device_id': 'default:10.0.200.0',
'id_tags': [
'interface:eth1/3',
@@ -176,6 +182,8 @@
},
{
'admin_status': 1,
'alias': 'eth1/1',
'raw_id': 'eth1/1',
'device_id': 'default:10.0.200.1',
'id_tags': [
'interface:eth1/1',
@@ -189,6 +197,8 @@
},
{
'admin_status': 1,
'alias': 'eth1/2',
'raw_id': 'eth1/2',
'device_id': 'default:10.0.200.1',
'id_tags': [
'interface:eth1/2',
@@ -202,6 +212,8 @@
},
{
'admin_status': 1,
'alias': 'eth1/3',
'raw_id': 'eth1/3',
'device_id': 'default:10.0.200.1',
'id_tags': [
'interface:eth1/3',
@@ -215,6 +227,8 @@
},
{
'admin_status': 1,
'alias': 'eth5/1',
'raw_id': 'eth5/1',
'device_id': 'default:10.0.200.5',
'id_tags': [
'interface:eth5/1',
@@ -228,6 +242,8 @@
},
{
'admin_status': 1,
'alias': 'eth5/2',
'raw_id': 'eth5/2',
'device_id': 'default:10.0.200.5',
'id_tags': [
'interface:eth5/2',
@@ -241,6 +257,8 @@
},
{
'admin_status': 1,
'alias': 'eth7/1',
'raw_id': 'eth7/1',
'device_id': 'default:10.0.200.5',
'id_tags': [
'interface:eth7/1',
14 changes: 11 additions & 3 deletions datadog_checks_dev/datadog_checks/dev/docker.py
@@ -121,6 +121,7 @@ def docker_run(
wrappers=None,
attempts=None,
attempts_wait=1,
capture=None,
):
"""
A convenient context manager for safely setting up and tearing down Docker environments.
@@ -169,7 +170,10 @@ def docker_run(
if not isinstance(compose_file, str):
raise TypeError('The path to the compose file is not a string: {}'.format(repr(compose_file)))

set_up = ComposeFileUp(compose_file, build=build, service_name=service_name)
compose_file_args = {'compose_file': compose_file, 'build': build, 'service_name': service_name}
if capture is not None:
    compose_file_args['capture'] = capture
set_up = ComposeFileUp(**compose_file_args)
if down is not None:
tear_down = down
else:
@@ -229,10 +233,11 @@ def docker_run(


class ComposeFileUp(LazyFunction):
def __init__(self, compose_file, build=False, service_name=None):
def __init__(self, compose_file, build=False, service_name=None, capture=None):
self.compose_file = compose_file
self.build = build
self.service_name = service_name
self.capture = capture
self.command = ['docker', 'compose', '-f', self.compose_file, 'up', '-d', '--force-recreate']

if self.build:
@@ -242,7 +247,10 @@ def __init__(self, compose_file, build=False, service_name=None):
self.command.append(self.service_name)

def __call__(self):
return run_command(self.command, check=True)
args = {'check': True}
if self.capture is not None:
args['capture'] = self.capture
return run_command(self.command, **args)


class ComposeFileLogs(LazyFunction):
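The capture plumbing follows a forward-only-when-set pattern, so callers that never pass `capture` see exactly the old behavior of `run_command`. A reduced sketch (with a stub standing in for the real `run_command`):

```python
def run_command(command, check=False, capture=False):
    # Stub standing in for datadog_checks.dev's run_command; returns its
    # effective arguments so the forwarding behavior is observable.
    return {'command': command, 'check': check, 'capture': capture}

class ComposeFileUp:
    def __init__(self, compose_file, build=False, service_name=None, capture=None):
        self.capture = capture
        self.command = ['docker', 'compose', '-f', compose_file, 'up', '-d']

    def __call__(self):
        args = {'check': True}
        if self.capture is not None:  # only forward when explicitly set
            args['capture'] = self.capture
        return run_command(self.command, **args)

print(ComposeFileUp('compose.yaml')())                 # capture stays at the stub's default
print(ComposeFileUp('compose.yaml', capture=True)())   # capture forwarded
```

Keeping the default at `None` (rather than `False`) is what lets the wrapper distinguish "not specified" from "explicitly disabled".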
14 changes: 12 additions & 2 deletions datadog_checks_dev/tests/test_docker.py
@@ -36,11 +36,21 @@ def test_up(self):


class TestDockerRun:
def test_compose_file(self):
@pytest.mark.parametrize(
"capture",
[
None,
True,
],
)
def test_compose_file(self, capture):
compose_file = os.path.join(DOCKER_DIR, 'test_default.yaml')

try:
with docker_run(compose_file):
args = {}
if capture is not None:
args['capture'] = capture
with docker_run(compose_file, **args):
assert compose_file_active(compose_file) is True
assert compose_file_active(compose_file) is False
finally:
1 change: 1 addition & 0 deletions datadog_cluster_agent/changelog.d/18867.added
@@ -0,0 +1 @@
Add telemetry scraping for Validation AdmissionController
@@ -11,6 +11,7 @@
'admission_webhooks_library_injection_attempts': 'admission_webhooks.library_injection_attempts',
'admission_webhooks_library_injection_errors': 'admission_webhooks.library_injection_errors',
'admission_webhooks_mutation_attempts': 'admission_webhooks.mutation_attempts',
'admission_webhooks_validation_attempts': 'admission_webhooks.validation_attempts',
'admission_webhooks_patcher_attempts': 'admission_webhooks.patcher.attempts',
'admission_webhooks_patcher_completed': 'admission_webhooks.patcher.completed',
'admission_webhooks_patcher_errors': 'admission_webhooks.patcher.errors',
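The added entry extends the check's raw-name-to-namespaced-name metric mapping; a sketch of the lookup (the dict literal is illustrative, with only the two entries shown in the diff, and the real check resolves names through its OpenMetrics machinery):

```python
# Raw Prometheus metric name -> suffix under the datadog.cluster_agent namespace.
METRIC_MAP = {
    'admission_webhooks_mutation_attempts': 'admission_webhooks.mutation_attempts',
    'admission_webhooks_validation_attempts': 'admission_webhooks.validation_attempts',
}

def namespaced(raw_name: str) -> str:
    return 'datadog.cluster_agent.' + METRIC_MAP[raw_name]

print(namespaced('admission_webhooks_validation_attempts'))
# datadog.cluster_agent.admission_webhooks.validation_attempts
```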
3 changes: 2 additions & 1 deletion datadog_cluster_agent/metadata.csv
@@ -17,7 +17,8 @@ datadog.cluster_agent.admission_webhooks.reconcile_errors,gauge,,,,Number of rec
datadog.cluster_agent.admission_webhooks.reconcile_success,gauge,,success,,Number of reconcile successes per controller,0,datadog_cluster_agent,admission webhooks reconcile success,
datadog.cluster_agent.admission_webhooks.response_duration.count,count,,,,Webhook response duration count,0,datadog_cluster_agent,webhook response duration count,
datadog.cluster_agent.admission_webhooks.response_duration.sum,count,,second,,Webhook response duration sum,0,datadog_cluster_agent,webhook response duration sum,
datadog.cluster_agent.admission_webhooks.webhooks_received,gauge,,,,Number of mutation webhook requests received,0,datadog_cluster_agent,admission webhooks received,
datadog.cluster_agent.admission_webhooks.validation_attempts,gauge,,,,Number of pod validation attempts by validation type,0,datadog_cluster_agent,admission webhooks validation attempts,
datadog.cluster_agent.admission_webhooks.webhooks_received,gauge,,,,Number of webhook requests received,0,datadog_cluster_agent,admission webhooks received,
datadog.cluster_agent.aggregator.flush,count,,,,"Number of metrics/service checks/events flushed by (data_type, state)",0,datadog_cluster_agent,aggregator flush,
datadog.cluster_agent.aggregator.processed,count,,,,Amount of metrics/services_checks/events processed by the aggregator by data_type,0,datadog_cluster_agent,aggregator processed,
datadog.cluster_agent.api_requests,count,,request,,"Requests made to the cluster agent API by (handler, status)",0,datadog_cluster_agent,api requests,
5 changes: 4 additions & 1 deletion datadog_cluster_agent/tests/fixtures/metrics.txt
@@ -11,6 +11,9 @@ admission_webhooks_mutation_attempts{error="",injected="true",mutation_type="age
admission_webhooks_mutation_attempts{error="",injected="true",mutation_type="agent_sidecar",status="success"} 1
admission_webhooks_mutation_attempts{error="",injected="true",mutation_type="cws_pod_instrumentation",status="success"} 2
admission_webhooks_mutation_attempts{error="",injected="true",mutation_type="lib_injection",status="success"} 1
# HELP admission_webhooks_validation_attempts Number of pod validation attempts by validation type
# TYPE admission_webhooks_validation_attempts gauge
admission_webhooks_validation_attempts{error="",validated="true",webhook_name="kubernetes_audit",status="success"} 1
# HELP admission_webhooks_reconcile_errors Number of reconcile errors per controller.
# TYPE admission_webhooks_reconcile_errors gauge
admission_webhooks_reconcile_errors{controller="secrets"} 5
@@ -34,7 +37,7 @@ admission_webhooks_response_duration_bucket{le="10"} 108
admission_webhooks_response_duration_bucket{le="+Inf"} 108
admission_webhooks_response_duration_sum 0.4897835529999999
admission_webhooks_response_duration_count 108
# HELP admission_webhooks_webhooks_received Number of mutation webhook requests received.
# HELP admission_webhooks_webhooks_received Number of webhook requests received.
# TYPE admission_webhooks_webhooks_received gauge
admission_webhooks_webhooks_received 300
# HELP aggregator__dogstatsd_contexts Count the number of dogstatsd contexts in the aggregator
@@ -21,6 +21,7 @@
'admission_webhooks.library_injection_attempts',
'admission_webhooks.library_injection_errors',
'admission_webhooks.mutation_attempts',
'admission_webhooks.validation_attempts',
'admission_webhooks.patcher.attempts',
'admission_webhooks.patcher.completed',
'admission_webhooks.patcher.errors',
6 changes: 3 additions & 3 deletions mongo/assets/monitors/high_connections.json
@@ -1,14 +1,14 @@
{
"version": 2,
"created_at": "2020-08-05",
"last_updated_at": "2021-01-11",
"last_updated_at": "2024-10-16",
"title": "Connection pool is reaching saturation",
"tags": [
"integration:mongodb"
],
"description": "A connection pool helps reduce application latency and the number of times new connections are created. This monitor tracks the number of incoming connections to alert when the connection pool is near the saturation point.",
"definition": {
"message": "The number of incoming connections is reaching the maximum. {{value}} % of the available connections have been used on {{replset_name.name}}",
"message": "The number of incoming connections is reaching the maximum. {{value}} % of the available connections have been used on MongoDB Cluster {{clustername.name}} Replica Set {{replset_name.name}}",
"name": "[MongoDB] High incoming connections",
"options": {
"escalation_message": "",
@@ -26,7 +26,7 @@
},
"timeout_h": 0
},
"query": "avg(last_5m):100 * sum:mongodb.connections.current{*} by {replset_name} / ( sum:mongodb.connections.current{*} by {replset_name} + sum:mongodb.connections.available{*} by {replset_name} ) > 90",
"query": "avg(last_5m):100 * sum:mongodb.connections.current{*} by {clustername,replset_name} / ( sum:mongodb.connections.current{*} by {clustername,replset_name} + sum:mongodb.connections.available{*} by {clustername,replset_name} ) > 90",
"tags": [
"integration:mongodb"
],
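The updated monitor query computes pool saturation as current connections over the total (current plus available), now grouped by both cluster and replica set. The arithmetic, as a sketch:

```python
def connection_saturation_pct(current: int, available: int) -> float:
    # Mirrors the monitor query: 100 * current / (current + available),
    # evaluated per {clustername, replset_name} group; the monitor
    # alerts when the result exceeds 90.
    return 100.0 * current / (current + available)

print(connection_saturation_pct(95, 5))   # 95.0 -> would trigger (> 90)
print(connection_saturation_pct(50, 50))  # 50.0 -> healthy
```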
35 changes: 35 additions & 0 deletions mongo/assets/monitors/high_fsstorage_usage.json
@@ -0,0 +1,35 @@
{
"version": 2,
"created_at": "2024-10-16",
"last_updated_at": "2024-10-16",
"title": "Used file system storage is reaching capacity",
"tags": [
"integration:mongodb"
],
"description": "This monitor tracks the used file system storage on a MongoDB server to alert when it is reaching capacity.",
"definition": {
"message": "The used file system storage is reaching capacity for database host {{database_instance.name}} on MongoDB Cluster {{clustername.name}}. {{value}} % of the total storage has been used.",
"name": "[MongoDB] High file system storage usage",
"options": {
"escalation_message": "",
"include_tags": true,
"locked": false,
"new_host_delay": 300,
"no_data_timeframe": null,
"notify_audit": false,
"notify_no_data": false,
"renotify_interval": "0",
"require_full_window": true,
"thresholds": {
"critical": 80,
"warning": 70
},
"timeout_h": 0
},
"query": "avg(last_60m):100 * avg:mongodb.stats.fsusedsize{*} by {clustername,database_instance} / avg:mongodb.stats.fstotalsize{*} by {clustername,database_instance} > 80",
"tags": [
"integration:mongodb"
],
"type": "query alert"
}
}
35 changes: 35 additions & 0 deletions mongo/assets/monitors/high_replication_lag.json
@@ -0,0 +1,35 @@
{
"version": 2,
"created_at": "2024-10-16",
"last_updated_at": "2024-10-16",
"title": "High replication lag",
"tags": [
"integration:mongodb"
],
"description": "This monitor tracks the replication lag on a MongoDB replica set to alert when it is high.",
"definition": {
"message": "MongoDB Cluster {{clustername.name}} member {{member.name}} replication lag is high. The replication lag is {{value}} seconds.",
"name": "[MongoDB] High replication lag",
"options": {
"escalation_message": "",
"include_tags": true,
"locked": false,
"new_host_delay": 300,
"no_data_timeframe": null,
"notify_audit": false,
"notify_no_data": false,
"renotify_interval": "0",
"require_full_window": true,
"thresholds": {
"critical": 120,
"warning": 60
},
"timeout_h": 0
},
"query": "avg(last_5m):avg:mongodb.replset.optime_lag{*} by {clustername,member} > 120",
"tags": [
"integration:mongodb"
],
"type": "query alert"
}
}
35 changes: 35 additions & 0 deletions mongo/assets/monitors/low_oplog_window.json
@@ -0,0 +1,35 @@
{
"version": 2,
"created_at": "2024-10-16",
"last_updated_at": "2024-10-16",
"title": "Low oplog window",
"tags": [
"integration:mongodb"
],
"description": "This monitor tracks the oplog window on a MongoDB replica set to alert when it is insufficient.",
"definition": {
"message": "Oplog window for database host {{database_instance.name}} on MongoDB Cluster {{clustername.name}} is below the threshold. The oplog window is {{value}} seconds.",
"name": "[MongoDB] Low oplog window",
"options": {
"escalation_message": "",
"include_tags": true,
"locked": false,
"new_host_delay": 300,
"no_data_timeframe": null,
"notify_audit": false,
"notify_no_data": false,
"renotify_interval": "0",
"require_full_window": true,
"thresholds": {
"critical": 3600,
"warning": 7200
},
"timeout_h": 0
},
"query": "avg(last_60m):avg:mongodb.oplog.timediff{*} by {clustername,database_instance} < 3600",
"tags": [
"integration:mongodb"
],
"type": "query alert"
}
}