Recent updates possibly broke CLI execute-command
#435
Comments
Example output from
Exact output for everyone with this problem as far as I can tell ☝🏼
This looks related: aws-containers/amazon-ecs-exec-checker#49. Do you also have
If my parsing of the Terraform config can be trusted, we are not setting that. I'll try removing this now and see if that makes a difference.
Holy moly, that worked! That said, we actually actively use those credentials in our task, so we'll need a workaround for exposing them. Still, it seems like setting these env vars shouldn't have this effect, right?
Glad that worked! I'm waiting on more info regarding this and will post an update here.
Renaming
Can we revert to a previous version of the AWS CLI to fix this? Changing the environment variables will break other things in our tasks.
Facing this issue as well. As @nathando mentioned, it would be great if this reverted to the previous behaviour so that we don't have to change the environment variables.
Encountered this error out of nowhere 4 days ago: "An error occurred (TargetNotConnectedException) when calling the ExecuteCommand operation: The execute command failed due to an internal error. Try again later." In my case I also had those credentials set. There is no need to change the environment variables, though; all you need to do is give the user behind AWS_ACCESS_KEY_ID the permissions that allow the ECS exec command.
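To make that fix concrete: the identity behind the injected AWS_ACCESS_KEY_ID needs the SSM message-channel permissions that ECS Exec relies on (these four `ssmmessages` actions are the documented prerequisites; the wide `Resource: "*"` below is just for the sketch and should be scoped down as needed):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ssmmessages:CreateControlChannel",
        "ssmmessages:CreateDataChannel",
        "ssmmessages:OpenControlChannel",
        "ssmmessages:OpenDataChannel"
      ],
      "Resource": "*"
    }
  ]
}
```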
Thanks @nicolasbuch, those requirements are also documented here: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-exec.html#ecs-exec-prerequisites, as well as in the troubleshooting article for this error. Those requirements aren't new, so I'm not sure why recent updates would be a factor here. Has anyone tried rolling back to a previous SSM Agent version to see if they still see this issue? It would help the team to have agent logs from a container that is experiencing the issue. You could provide those here or contact AWS Support.
The agent version in ECS Exec is controlled by ECS during the AMI build, and they say they haven't changed the version recently. Can anyone here who encountered the issue and has removed the environment variables share the agent version from their logs?
Also, are you seeing this issue on ECS on EC2 or on Fargate?
@Thor-Bjorgvinsson After making the change and removing the env vars, I can access the containers and see the following versions according to the log output on Fargate tasks:
@Thor-Bjorgvinsson Seeing the issue on Fargate.
We have also experienced the same issue since last Friday (01 April 2022). We didn't change anything and the command execution stopped working. We also have AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY in the env. Funnily enough, on one of our environments it still works, but on 2 others it stopped.
We've confirmed that this is an SSM Agent issue in a recent Fargate deployment where the agent version was updated. Any new tasks started in Fargate will use an SSM Agent build with this issue. We are working with the Fargate team to deploy a fix for this. Mitigation: as mentioned above, remove AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY from the task definition environment variables.
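For anyone applying that mitigation, a quick way to confirm whether a task definition still carries static credentials is to grep its JSON. In this sketch a small inline `taskdef.json` stands in for the output of `aws ecs describe-task-definition`; the container name and values are made up:

```shell
# taskdef.json stands in for `aws ecs describe-task-definition` output;
# the container name and values below are hypothetical.
cat > taskdef.json <<'EOF'
{"containerDefinitions": [{"name": "app",
  "environment": [
    {"name": "AWS_ACCESS_KEY_ID", "value": "AKIA..."},
    {"name": "APP_ENV", "value": "production"}]}]}
EOF

# Flag any static credentials the SSM Agent could pick up ahead of the
# task role; a match means the task is exposed to this bug.
grep -oE '"AWS_(ACCESS_KEY_ID|SECRET_ACCESS_KEY)"' taskdef.json
```

A match here means the task definition needs a new revision without those variables before ECS Exec will work reliably.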
This worked for me.
@akhiljalagam I can confirm this can be used as a mitigation today, but it's not recommended; it will no longer be possible in the near future, sometime after the fix has been released, because the agent will only be able to connect using ECS task metadata service credentials. The recommended mitigation is to unset the credential environment variables.
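One way to follow that recommendation without losing the credentials the application needs is an entrypoint wrapper that re-exports them under private names and then unsets the standard ones. This is only a sketch: the APP_-prefixed names and the dummy values are illustrative, not anything the agent or ECS mandates.

```shell
# Dummy values stand in for whatever the task definition injects.
AWS_ACCESS_KEY_ID="AKIAEXAMPLE"
AWS_SECRET_ACCESS_KEY="wJalrEXAMPLEKEY"

# Hand the credentials to the app under private (hypothetical) names...
export APP_AWS_ACCESS_KEY_ID="$AWS_ACCESS_KEY_ID"
export APP_AWS_SECRET_ACCESS_KEY="$AWS_SECRET_ACCESS_KEY"

# ...then unset the standard names so the SSM Agent falls back to the
# task role credentials from the task metadata endpoint.
unset AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY
[ -z "${AWS_ACCESS_KEY_ID:-}" ] && echo "standard credential vars cleared"
```

In a real image this would be the ENTRYPOINT script, ending with `exec "$@"` so the container command still runs as PID 1.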
@Thor-Bjorgvinsson, how can we follow the status of this? I don't want to be too pushy, but we're really blocked 😬. Is there some kind of prioritisation, as this is a regression? Anyway, thanks for the work 💪.
We've pushed out a fix in agent release 3.1.1260.0 for this issue. We're currently working with related AWS services to integrate this fix; we'll add further updates as those integrations are completed. |
For other people who come across this issue: this error also happens for us when we have AWS_SHARED_CREDENTIALS_FILE set as an environment variable. When it is removed, execute-command works again.
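Since AWS_SHARED_CREDENTIALS_FILE triggers it too, it may be worth checking a shell inside the container for every variable that can steer the credential chain away from the task role. A small sketch; the exact set of variables the agent honours is an assumption here, based on the standard AWS SDK credential chain:

```shell
# List any environment variables that can override the task role in the
# default AWS credential chain (the list below is illustrative, not
# necessarily the exact set the SSM Agent inspects).
env | grep -E '^AWS_(ACCESS_KEY_ID|SECRET_ACCESS_KEY|SESSION_TOKEN|SHARED_CREDENTIALS_FILE|PROFILE)=' \
  || echo "no overriding credential variables set"
```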
Hopefully this doesn't put a spanner in the works, but I've been having this issue across all of my services. Only one of the services actually had AWS env vars in it; after renaming those, that service was fine. The others, however, still respond with the same "internal server error", with no AWS env vars to note on the tasks.
I'm seeing this again since the 3.1.1260.0 release. Is it possible other env variable names are now disallowed? In particular, I changed my variable names, and I'm wondering if the fix in 3.1.1260.0 was to switch away from using environment credentials.
Maybe it's partially matching the variable name?
No error for me with
Is there any update on when the fix will be rolled out? |
Renaming
ECS released a new AMI with the updated SSM Agent (ECS-optimized AMI version 20220421); the Fargate release is still pending.
Any news concerning the Fargate release?
Without changing anything regarding the env variables, I redeployed my ECS Fargate instances, and with the latest awscli this works fine now.
Fargate has completed the release of the new agent.
Hi guys, I have used the ECS checker and this is the result:
All of the configuration seems to be okay. The AWS CLI version is 2.11.9.
But the AWS_ACCESS_KEY and AWS_SECRET_ACCESS_KEY variables are defined in the .env file inside the container. I hope that's not the issue.
I'm experiencing this exact same issue. The
Did anyone else stumble onto this problem again? We started getting this issue again. There are no AWS_ACCESS_KEY / SECRET variables defined, and check-ecs-exec.sh shows everything OK (green and yellow).
I'm experiencing issues as well. I'm using Fargate and can start two tasks in the same subnet; one will work and the other will not.
There are a number of GitHub issues floating around on related repos that might be tied to recent SSM Agent updates, though this is incredibly difficult to verify from our end; if someone could do a little investigating, that would be great.
The general issue that manifests is an inability to run `execute-command` via the CLI, with a `TargetNotConnectedException` thrown. Existing troubleshooting guides have thus far not yielded success.
Related tickets:
aws/aws-cli#6834
aws/aws-cli#6562
aws-containers/amazon-ecs-exec-checker#47
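For reproduction, the failing call is the standard `aws ecs execute-command` invocation. The sketch below only builds and prints the command (cluster, task ID, and container name are placeholders) so it can be copied and run against a real cluster:

```shell
# Placeholders; substitute your own cluster, task ID, and container name.
CLUSTER=my-cluster
TASK=0123456789abcdef0
CONTAINER=app

# On affected tasks this call fails with TargetNotConnectedException.
# Printed rather than executed so the sketch works without an AWS account.
CMD="aws ecs execute-command --cluster $CLUSTER --task $TASK --container $CONTAINER --interactive --command /bin/sh"
echo "$CMD"
```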