When running the checker, I get the following jq error:
-------------------------------------------------------------
Prerequisites for check-ecs-exec.sh v0.7
-------------------------------------------------------------
jq | OK (/opt/homebrew/bin/jq)
AWS CLI | OK (/opt/homebrew/bin/aws)
-------------------------------------------------------------
Prerequisites for the AWS CLI to use ECS Exec
-------------------------------------------------------------
AWS CLI Version | OK (aws-cli/2.15.17 Python/3.11.7 Darwin/23.0.0 source/arm64 prompt/off)
Session Manager Plugin | OK (1.2.553.0)
-------------------------------------------------------------
Checks on ECS task and other resources
-------------------------------------------------------------
Region : eu-central-1
Cluster: REDACTED
Task : REDACTED
-------------------------------------------------------------
Cluster Configuration | Audit Logging Not Configured
Can I ExecuteCommand? | arn:aws:iam::xxxxxxxxxxxxx:user/[email protected]
ecs:ExecuteCommand: allowed
ssm:StartSession denied?: allowed
Task Status | RUNNING
Launch Type | Fargate
Platform Version | 1.4.0
Exec Enabled for Task | OK
Container-Level Checks |
----------
Managed Agent Status
----------
jq: error (at <stdin>:173): Cannot iterate over null (null)
I found out that not all containers have a managedAgents property. I was able to fix it by changing line 422 to

agentsStatus=$(echo "${describedTaskJson}" | jq -r ".tasks[0].containers[] | (.managedAgents // [])[].lastStatus // \"FallbackValue\"")

This is of course only a quick fix. The underlying issue is that we have AWS GuardDuty enabled. GuardDuty injects a container into each task, but those GuardDuty containers do not have managedAgents.

This is how the container comes back after describing it:
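A minimal sketch of why the `// []` fallback avoids the error, using hypothetical describe-tasks-style JSON (container names and statuses are made up for illustration; the sidecar simply lacks the managedAgents key, as a GuardDuty-injected container would):

```shell
# Hypothetical task JSON: one app container with a managed agent,
# one sidecar without the managedAgents key at all.
json='{"tasks":[{"containers":[{"name":"app","managedAgents":[{"lastStatus":"RUNNING"}]},{"name":"aws-guardduty-agent"}]}]}'

# The original expression errors out: .managedAgents is null for the
# sidecar, and jq cannot iterate over null.
echo "$json" | jq -r '.tasks[0].containers[].managedAgents[].lastStatus' \
  >/dev/null 2>&1 || echo "jq failed on the container without managedAgents"

# With the `// []` alternative operator, a missing managedAgents becomes an
# empty array, so the sidecar contributes nothing instead of aborting jq.
echo "$json" | jq -r '.tasks[0].containers[] | (.managedAgents // [])[].lastStatus'
# prints: RUNNING
```

The `//` operator substitutes the right-hand value whenever the left-hand side is null or false, which is exactly the case for containers that omit the key.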