The above config works in v1.1.0: we downgraded, kept all other config the same, and everything worked. So I suspect this is caused by a behavior change between v1.1.0 and v1.10.0.
I'm not sure whether this is related, but the EC2 instance hosting the Kubernetes node on which the upbound provider runs uses IMDSv2.
This is potentially related to #1252, but we are not using EKS IRSA credentials; we are using kube2iam-provided credentials.
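For context, kube2iam-provided credentials are normally consumed through a ProviderConfig whose credentials source is InjectedIdentity rather than a Kubernetes Secret; a minimal sketch along those lines (this assumes InjectedIdentity is the source in use, and the name is a placeholder):

```yaml
# Sketch only: assumes the InjectedIdentity credentials source; the name is a placeholder.
apiVersion: aws.upbound.io/v1beta1
kind: ProviderConfig
metadata:
  name: default
spec:
  credentials:
    # The provider resolves credentials from its Pod environment / instance metadata,
    # which is what kube2iam intercepts and serves.
    source: InjectedIdentity
```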
This provider repo does not have enough maintainers to address every issue. Since there has been no activity in the last 90 days, it is now marked as stale. It will be closed in 14 days if no further activity occurs. Leaving a comment starting with /fresh will mark this issue as not stale.
This issue is being closed since there has been no activity for 14 days since marking it as stale. If you still need help, feel free to comment or reopen the issue!
Is there an existing issue for this?
Affected Resource(s)
Resource MRs required to reproduce the bug
Steps to Reproduce
upbound/provider-aws-ec2 provider (link)
What happened?
Both MRs fail, and their statuses report the error message below.
Relevant Error Output Snippet
Crossplane Version
v1.15.0
Provider Version
v1.10.0
Kubernetes Version
v1.27.14
Kubernetes Distribution
Home Rolled (kubeadm)
Additional Info
We run kube2iam and attach the IAM role to the upbound/provider-aws-ec2 Pod through the iam.amazonaws.com/role annotation (docs here), set via the following ControllerConfig + Provider objects:
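A minimal sketch of that pairing (the metadata names, the package tag, and the role ARN below are placeholders rather than our exact values):

```yaml
# Illustrative sketch: metadata names, the package tag, and the role ARN are placeholders.
apiVersion: pkg.crossplane.io/v1alpha1
kind: ControllerConfig
metadata:
  name: aws-kube2iam
spec:
  metadata:
    annotations:
      # Propagated onto the provider Pod so kube2iam can assume the role on its behalf.
      iam.amazonaws.com/role: arn:aws:iam::111111111111:role/crossplane-provider-aws
---
apiVersion: pkg.crossplane.io/v1
kind: Provider
metadata:
  name: upbound-provider-aws-ec2
spec:
  package: xpkg.upbound.io/upbound/provider-aws-ec2:v1.10.0
  controllerConfigRef:
    name: aws-kube2iam
```

With this setup, kube2iam intercepts the provider Pod's calls to the EC2 instance metadata endpoint and returns temporary credentials for the annotated role.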