feat(pod-identity): add option for pod-identity #1459
Conversation
Signed-off-by: Christopher Haar <[email protected]>
To use this you need some prerequisites, which are implemented here: https://github.com/haarchri/configuration-aws-eks-uxp-podidentity. You need to deploy an EKS add-on like:
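A minimal sketch of that add-on, assuming it is managed with the provider's own EKS Addon resource (cluster name, region, and exact field names are assumptions; the add-on could equally be enabled via the AWS console or CLI):

```yaml
apiVersion: eks.aws.upbound.io/v1beta1
kind: Addon
metadata:
  name: eks-pod-identity-agent
spec:
  forProvider:
    addonName: eks-pod-identity-agent   # installs the EKS Pod Identity Agent on the cluster
    clusterName: my-eks-cluster         # placeholder: your EKS cluster name
    region: eu-central-1                # placeholder: your cluster's region
```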
Prepare the IAM Role + Pod Identity association:
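A hedged sketch, assuming an upjet IAM Role plus an EKS PodIdentityAssociation resource (names, namespace, service account, cluster, region, and the role ARN are placeholders; the trust policy follows the AWS Pod Identity documentation):

```yaml
# IAM role that the EKS Pod Identity service principal is allowed to assume.
apiVersion: iam.aws.upbound.io/v1beta1
kind: Role
metadata:
  name: provider-aws-pod-identity
spec:
  forProvider:
    assumeRolePolicy: |
      {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Effect": "Allow",
            "Principal": { "Service": "pods.eks.amazonaws.com" },
            "Action": ["sts:AssumeRole", "sts:TagSession"]
          }
        ]
      }
---
# Associates the role with the service account used by the provider pods.
apiVersion: eks.aws.upbound.io/v1beta1
kind: PodIdentityAssociation
metadata:
  name: provider-aws-pod-identity
spec:
  forProvider:
    region: eu-central-1                 # placeholder
    clusterName: my-eks-cluster          # placeholder
    namespace: upbound-system            # namespace the provider runs in
    serviceAccount: provider-aws         # must match the DeploymentRuntimeConfig below
    roleArn: arn:aws:iam::123456789012:role/provider-aws-pod-identity  # placeholder
```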
So that the provider can use the PodIdentity feature, you need a DeploymentRuntimeConfig like:
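For example (the service account name is an assumption; it is what the PodIdentityAssociation above must reference):

```yaml
apiVersion: pkg.crossplane.io/v1beta1
kind: DeploymentRuntimeConfig
metadata:
  name: pod-identity
spec:
  serviceAccountTemplate:
    metadata:
      name: provider-aws   # pins the SA name so the PodIdentityAssociation can target it
```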
When you install the providers, you need to reference the DeploymentRuntimeConfig:
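For example (the package, its version, and the runtime config name are placeholders):

```yaml
apiVersion: pkg.crossplane.io/v1
kind: Provider
metadata:
  name: provider-aws-s3
spec:
  package: xpkg.upbound.io/upbound/provider-aws-s3:v1.6.0   # placeholder version
  runtimeConfigRef:
    name: pod-identity   # the DeploymentRuntimeConfig from the previous step
```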
And then simply create a ProviderConfig:
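A minimal example using the credentials source added by this PR:

```yaml
apiVersion: aws.upbound.io/v1beta1
kind: ProviderConfig
metadata:
  name: default
spec:
  credentials:
    source: PodIdentity
```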
Thanks @haarchri for taking this. LGTM in general.
Do you think we need a caching mechanism for PodIdentity like with IRSA? As you might already know, previously IRSA suffered from excessive calls to STS before the cache implementation.
Also, as a side note, we were on the verge of merging #1320, which has e2e tests for different provider configs. We should adapt your test setup to the e2e tests as a follow-up.
@erhancagirici compared to the official documentation, I can see: "The EKS Pod Identity Agent (which is running in the EKS cluster as an add-on) will do SigV4 signing and make a call to the new EKS Auth API AssumeRoleForPodIdentity to exchange the projected token for temporary IAM credentials, which are then made available to the pod." And on the pods we will see:
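Roughly the following, per the AWS documentation (exact values are illustrative):

```yaml
# Environment injected into the provider pod by EKS Pod Identity (illustrative values)
env:
  - name: AWS_CONTAINER_CREDENTIALS_FULL_URI
    value: http://169.254.170.23/v1/credentials
  - name: AWS_CONTAINER_AUTHORIZATION_TOKEN_FILE
    value: /var/run/secrets/pods.eks.amazonaws.com/serviceaccount/eks-pod-identity-token
```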
So I don't see a need for a cache at the moment - wdyt?
If 1, I think we are good. If 2, that seems like the IRSA case, where AWS injects a similar token for exchanging and we do the exchange at each reconcile. In that case we need to "cache the credentials cache" so that we do not lose the credentials between reconciles. If you have some setup already, it would be nice to check the contents of the projected volume and possibly the CloudTrail logs.

Also, I realized some UX edge cases: IRSA and PodIdentity should be mutually exclusive. The code loads the default AWS client config for both IRSA and PodIdentity, which leaves the config resolution to the AWS code.
Per https://docs.aws.amazon.com/eks/latest/userguide/pod-id-how-it-works.html, what gets injected into the container is the set of environment variables shown above. The documentation describes the flow as follows:

"Kubernetes selects which node to run the pod on. Then, the Amazon EKS Pod Identity Agent on the node uses the AssumeRoleForPodIdentity action to retrieve temporary credentials from the EKS Auth API. The EKS Pod Identity Agent makes these credentials available for the AWS SDKs that you run inside your containers."

This means we call the EKS Pod Identity Agent in every reconcile loop, and credential caching happens in that agent - nothing we need to take care of, from my point of view. The agent that runs inside the EKS cluster lives here: https://github.com/aws/eks-pod-identity-agent/
Thanks for the detailed explanation! I approve the current state with follow-ups.
IRSA and PodIdentity should be mutually exclusive. The code loads the default AWS client config for both IRSA and PodIdentity, which leaves the config resolution to the AWS code.
We should follow up with documentation stating that IRSA and PodIdentity configuration cannot co-exist in a provider.
Also, #1320 should be updated with the PodIdentity option.
Description of your changes
Adds the option PodIdentity for spec.credentials.source.

Fixes #1249 #1254 #1308 #1252
I have:
- Run make reviewable to ensure this PR is ready for review.
- Added backport release-x.y labels to auto-backport this PR if necessary.

How has this code been tested
Spin up a Network + EKS Cluster.
Connect to your EKS Cluster (e.g. with aws eks update-kubeconfig).
Apply the following resources:
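In addition to the DeploymentRuntimeConfig, Provider, and ProviderConfig shown above, a simple managed resource can confirm that reconciliation works with the PodIdentity source; a sketch with placeholder name and region:

```yaml
apiVersion: s3.aws.upbound.io/v1beta1
kind: Bucket
metadata:
  name: pod-identity-test-bucket   # placeholder; bucket names must be globally unique
spec:
  forProvider:
    region: eu-central-1           # placeholder
  providerConfigRef:
    name: default                  # the ProviderConfig with source: PodIdentity
```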