feat: support aws and other OIDC authentication methods in kubeconfig (#521) #564
Conversation
@CirillaQL You work fast ⚡️. Once development is complete, we will start reviewing and testing the code. cc @ruquanzhao
Thanks, I updated the Dockerfile and now it works for me. Users just need to add their AK/SK to the env section of their kubeconfig. I think the user guide should be updated to reflect this.
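For illustration, a minimal sketch of what such a kubeconfig user entry could look like (the cluster name, region, and credential values are placeholders, not taken from this PR):

```yaml
apiVersion: v1
kind: Config
users:
  - name: my-eks-user # placeholder name
    user:
      exec:
        apiVersion: client.authentication.k8s.io/v1beta1
        command: aws
        args: ["eks", "get-token", "--cluster-name", "my-cluster"]
        env:
          # The aws CLI reads these when the exec plugin runs, instead of
          # relying on the container's environment, so each cluster entry
          # can carry its own credentials in one kubeconfig file.
          - name: AWS_ACCESS_KEY_ID
            value: "<your-access-key>"
          - name: AWS_SECRET_ACCESS_KEY
            value: "<your-secret-key>"
          - name: AWS_REGION
            value: "<your-region>"
```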
@CirillaQL Okay, we will start verifying this feature tomorrow, and then update the documentation to explain it.
Hi @CirillaQL, welcome and thanks for your effort! I spent some time on my AWS cluster, so this is a pretty late reply 😹
This is the most important thing to discuss. Let me provide some ideas.
cc @elliotxx
@CirillaQL BTW, if you're not familiar with front-end development, feel free to let us know, and we can handle it.
Hi @ruquanzhao, thanks for your feedback.
Sure, different clusters may be under different IAMs, and that's why I mentioned:
The AWS CLI will read the AK/SK from the kubeconfig env, rather than from the container env. (At first, I didn't recognize this either, so I tried putting the AK/SK in the Deployment container's environment; I realized the problem when I tried to add another cluster.) Putting the AK/SK in the kubeconfig env lets one kubeconfig file carry all cluster information, eliminating the need for an additional front-end page to handle AK/SK or other credentials. The potential issues I can identify are:
So, I think this is a design problem. I’d like to know which approach you prefer: putting the AK/SK in the kubeconfig or adding a front page to let users input them?
Sure, we can use the aws-iam-authenticator package to generate AWS STS and session tokens to connect to the cluster with an AK/SK. However, the biggest problem is that the token expires after 15 minutes (kubernetes-sigs/aws-iam-authenticator#523). This means we would need to refresh it, and I'm not sure how that would affect the informer and other controllers. Additionally, we would need to do a lot of work to support each provider's authentication method.

Considering that we already use kubeconfig to connect to clusters, it might be a bit odd to use an auth package and AK/SK to connect to other clusters rather than `execConfig`. The advantage of using a token is that we wouldn't need to install the AWS or GCP CLI tools in the container. So again, let me know which approach you prefer.

These are some AWS/GCP client-go authentication problems/methods/packages I have researched: kubernetes-sigs/aws-iam-authenticator#523 (comment). Please let me know which methods you think are better for Karpor.
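For context, a rough sketch of the token-based alternative mentioned above, using the aws-iam-authenticator token package (the cluster name and endpoint are placeholders; the API shape follows the package's published client-go example and may differ between versions):

```go
package main

import (
	"log"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"sigs.k8s.io/aws-iam-authenticator/pkg/token"
)

func main() {
	// Generate a pre-signed STS token for an EKS cluster. Tokens produced
	// this way expire after ~15 minutes (kubernetes-sigs/aws-iam-authenticator#523),
	// so a long-running informer would need a mechanism to refresh them.
	gen, err := token.NewGenerator(true, false)
	if err != nil {
		log.Fatal(err)
	}
	tok, err := gen.GetWithOptions(&token.GetTokenOptions{
		ClusterID: "my-cluster", // placeholder cluster name
	})
	if err != nil {
		log.Fatal(err)
	}

	// Build a client-go config that authenticates with the bearer token
	// instead of an exec plugin, so no aws CLI is needed in the image.
	cfg := &rest.Config{
		Host:        "https://example.eks.amazonaws.com", // placeholder endpoint
		BearerToken: tok.Token,
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	_ = clientset // use with informers, controllers, etc.
}
```

Compared with the exec-based kubeconfig approach, this avoids shipping the aws CLI in the image, at the cost of handling the ~15-minute token refresh ourselves.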
LGTM, and I have verified this. cc @elliotxx, please do a final check.
lgtm
@CirillaQL Your feedback is very valuable! I discussed it with @ruquanzhao this morning. We don't need to overthink the problem at this early stage: if users choose to upload a kubeconfig, they should trust Karpor, and users worried about security can create an AK/SK with minimal permissions. So, we have merged this PR. That said, I think importing clusters directly from the cloud platform would be more secure, and it could be supported later. Next, we will create a beta version on master for complete validation. If there are no problems, this feature will appear in v0.4.8. Thank you for your professional contribution!
@CirillaQL Welcome aboard as a Karpor contributor; we look forward to more of your participation! 🎉 #585
What type of PR is this?
/kind feature
What this PR does / why we need it:
This PR allows users to upload a kubeconfig for an AWS EKS cluster (GCP GKE is also supported). Now, we can see the AWS cluster on the Cluster Management Page.
However, AWS requires `aws-cli`, `ACCESS_KEY`, and `SECRET_ACCESS_KEY` to verify user information. This means that the Dockerfile should include `RUN apk add --no-cache aws-cli` and configure AWS settings. I am not sure how to handle this problem: should we change the Helm chart, or let the user input `ACCESS_KEY` and `SECRET_ACCESS_KEY`? Therefore, I did not change the Dockerfile and other files.
Which issue(s) this PR fixes:
Fixes #521