Enable refreshing of auth_provider bearer_token #606

Open · wants to merge 1 commit into master
Conversation

hmcguire-shopify

Ref: #393

Previously, once a Kubeclient::Context was created, auth_provider bearer_tokens would be set and never refreshed.

This commit enables automatic refreshing of auth_provider bearer_tokens by returning callables instead of static tokens. When used with the faraday client, the callable is automatically called for each request, and for the non-faraday client the callable is called before setting the header manually.

Tests have been updated to cover this functionality for both client types, and an additional test was added for refreshing bearer_token_file because it did not appear to be covered before.
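
The mechanism described above can be sketched roughly like this (a minimal illustration, not Kubeclient's actual internals; `resolve_token` and the counter are invented for the example):

```ruby
# Minimal sketch of the idea, not Kubeclient's actual code: the config
# layer hands back a callable instead of a string, and the client
# resolves it on every request, picking up refreshed tokens.
def resolve_token(token)
  # Accept either the old static-string form or the new callable form.
  token.respond_to?(:call) ? token.call : token
end

fetches = 0
refreshing_token = -> { fetches += 1; "token-#{fetches}" }

resolve_token(refreshing_token) # fresh value each time: "token-1"
resolve_token(refreshing_token) # then "token-2"
resolve_token('static-token')   # legacy static tokens still work
```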


Co-authored-by: Jonathan Gnagy <[email protected]>
Co-authored-by: Hartley McGuire <[email protected]>
@hmcguire-shopify
Author

Test failures appear to be because SSL certs need to be regenerated

@cben
Collaborator

cben commented Mar 20, 2023

certs renewed, close-cycling to trigger CI.

Sorry for my silence. It's great that you're attacking this! 👏

disclaimer: I know very little about all this auth!

Performance / rate-limiting questions:

Is it OK to call GCPAuthProvider.token or OIDCAuthProvider.token on every request??
Do providers implement internal caching to make this efficient?

  • GCPCommandCredentials starts a gcloud subprocess. It does its own caching ✔️. But is shelling out on every request performant enough?

  • For Google::Auth, do we need some kind of "token store"?

  • Does #576 (use openid_connect gem's jwks caching feature) change something related to OIDC caching? I don't know enough 🤷‍♂️

  • #409 (Refreshed OIDC tokens should be persisted to configuration file) says it's necessary to retain OIDC refresh tokens for the next call. Ideally they'd be persisted back into the kubeconfig(!), but at least within our process...
    AFAICT currently every call to OIDCAuthProvider.token creates a new OpenIDConnect::Client instance, and the previous instance's .refresh_token is lost :-( Do we need to merge #408 (Persist refreshed tokens) or something similar first?

  • compatibility: Technically, if someone were using Config on its own, it may now return lambdas instead of concrete values. But that's not a blocker; in fact I think we should depart further from the current API!

I see you're aiming to make refreshing work automagically for people who use the currently documented recipe:

Kubeclient::Client.new(..., ssl_options: context.ssl_options, auth_options: context.auth_options)
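
With this change, the values coming out of that recipe may themselves be callables. A hedged sketch of what consuming code might then have to do (illustrative only; the literal hash below stands in for `context.auth_options`):

```ruby
# Illustrative only: auth_options[:bearer_token] may now be a lambda
# rather than a string, so code using Config directly has to resolve it.
auth_options = { bearer_token: -> { 'freshly-fetched-token' } }

token = auth_options[:bearer_token]
token = token.call if token.respond_to?(:call)
# token is now a plain string, ready for an Authorization header
```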

API questions:

This PR only deals with token, so passing a single lambda from Config to Client can work.
But I worry that won't cover all #393 use cases.

  • exec process might return data going into auth_options[:bearer_token], OR (ssl_options[:client_cert] and ssl_options[:client_key]). And the only way to know which is to run it.

    But if we extend the pattern here to passing several separate lambdas, would Client have to call them all, and how would that translate to a single re-execution?

  • Some of the caching concerns above, e.g. some OIDC servers treating the refresh token as single-use, make me wary of APIs with separate lambdas. Now, OIDC actually fits this implementation's "single :bearer_token lambda", so this is a strawman, more of an aesthetic feeling... But it possibly gets worse if you connect a single Config to multiple Client objects (very common due to API groups)?

It's perfectly fine to defer exec to a separate PR! But I do want to understand how the API would look and what's the evolution path.

API separation concern:

I also feel our API has grown messy due to conflating config parsing + auth...
We've increasingly pushed auth "under the rug", partially into Config (e.g. each .context call re-obtains independent auth, though the README refused to guarantee that) and partially into Client (re-reading :bearer_token_file)...

Eventually, there should be separate APIs — probably separate objects for just reading the config and extracting one context, vs. actively obtaining auth, and I anticipate at some point people will want to control caching e.g. call a force_renew method on the auth object.

  • The worst-case scenario seems to be: a Config with multiple contexts, each using OIDC (which needs centralized refresh_token state), feeding multiple Client objects.
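
For concreteness, the kind of explicit auth object imagined above might look like this (purely hypothetical; neither `AuthSession` nor `force_renew` exists in Kubeclient today):

```ruby
# Purely hypothetical sketch of a separated auth API: the session owns
# the caching state and exposes an explicit renewal control, instead of
# hiding refresh behavior inside Config or Client.
class AuthSession
  def initialize(fetcher)
    @fetcher = fetcher # callable that obtains a fresh token
    @token = nil
  end

  def token
    @token ||= @fetcher.call # cached until explicitly renewed
  end

  def force_renew
    @token = @fetcher.call
  end
end

calls = 0
session = AuthSession.new(-> { calls += 1; "token-#{calls}" })
session.token       # fetched once: "token-1"
session.token       # cached: still "token-1"
session.force_renew # explicit refresh: "token-2"
```

A shared session like this could also be handed to multiple Client objects, which is one way to address the multi-client concern above.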

I worry that if we make users rely on the existing recipe "Just Working", it may be more painful if they eventually need to switch to another API anyway?

Plus the existing recipe passing multiple fields out of context separately is just clunky.

TLDR: this might be mergeable, but I'd like us to understand the future evolution path. But if you did this out of obligation to keep compatibility, I encourage you to not feel constrained — if users need to opt in to a new API for renewal, that's totally worth it!

@cben cben closed this Mar 20, 2023
@cben cben reopened this Mar 20, 2023
@cben
Collaborator

cben commented Mar 20, 2023

cc @benlangfeld @jeremywadsack @timothysmith0609 @nov @dhstewart: I dumped some questions about this PR above, but you all know more about auth than I do, please help :))

With auth knowledge (and ability to test) being fragmented by provider, it's hard to make progress...
I don't want my "are we covering all use cases?" perspective to stall this but I suspect it's time to break out of the existing API mold.


Ooh, one more point: the current README recommends renewal by re-creating everything, even a new Config instance 🔥. (No idea if users actually followed this extreme advice.) I pushed for that in the name of "future flexibility", as I didn't want to commit to things like "every .context call obtains new auth". But now it occurs to me that's very bad wrt. #408, because it leaves no objects where refresh tokens from the previous run could be saved 😟

  • From OOP perspective, I'd hate to introduce some kind of invisible global cache; my experience is that all libraries starting from that eventually need to give users some kind of explicit "Session" objects or other caching controls...
  • But the big picture goes beyond OOP: some providers already cache tokens outside our process! And IIUC the ultimate goal of Persist refreshed tokens #408 is persisting back to the kubeconfig file (same way kubectl does)?

[I've been saying "caching", but that's an imprecise word for "keep this and don't come back without it"]

This was referenced Mar 20, 2023