Performance and efficiency improvements in daemon/server mode #1154
Conversation
Force-pushed from b07f965 to bab52cb
Force-pushed from 8663398 to 2053f6b
pkg/types/conf.go
Fixed
    Jitter: 0.1,
}
if utilwait.ExponentialBackoff(backoff, func() (bool, error) {
_, err := os.Stat(readinessIndicatorFile) |
Check failure (Code scanning / CodeQL): Uncontrolled data used in path expression (High): user-provided value
This commit was dropped and the code referenced is no longer in this PR.
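For context, the dropped code polled the readiness indicator file with an exponential backoff. A self-contained sketch of that pattern, with assumed backoff values rather than the ones from the PR, might look like this:

```go
package server

import (
	"fmt"
	"os"
	"time"

	utilwait "k8s.io/apimachinery/pkg/util/wait"
)

// waitForReadinessIndicator polls until the readiness indicator file exists,
// backing off exponentially between attempts. The backoff values below are
// illustrative only.
func waitForReadinessIndicator(readinessIndicatorFile string) error {
	backoff := utilwait.Backoff{
		Duration: 250 * time.Millisecond, // initial wait
		Factor:   2.0,                    // double the wait each retry
		Jitter:   0.1,
		Steps:    6, // give up after a handful of attempts
	}
	err := utilwait.ExponentialBackoff(backoff, func() (bool, error) {
		_, statErr := os.Stat(readinessIndicatorFile)
		return statErr == nil, nil // keep retrying on any stat error
	})
	if err != nil {
		return fmt.Errorf("readiness indicator file %q not found: %v", readinessIndicatorFile, err)
	}
	return nil
}
```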
Force-pushed from 4be8d0f to f7ed810
Force-pushed from 065803b to 93a9089
We want the in-cluster client that the multus server uses to use the same client config (QPS, protobuf, grpc, etc) as the regular client. Signed-off-by: Dan Williams <[email protected]>
Multus is a pretty critical piece of infrastructure, so it shouldn't be subject to the same lower QPS limits as most components are. Signed-off-by: Dan Williams <[email protected]>
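As a rough illustration of the client tuning these two commits describe, here is a minimal sketch; the QPS/Burst numbers and the protobuf content type are assumed values for illustration, not necessarily what multus ends up using:

```go
package server

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// newInClusterClient builds an in-cluster client with a raised rate limit and
// protobuf wire encoding, so the server-mode client is not throttled at the
// client-go defaults (QPS=5, Burst=10).
func newInClusterClient() (kubernetes.Interface, error) {
	config, err := rest.InClusterConfig()
	if err != nil {
		return nil, err
	}
	// Illustrative values; the point of the commits above is that the regular
	// client and the server's in-cluster client share the same tuned settings.
	config.QPS = 50
	config.Burst = 100
	config.ContentType = "application/vnd.kubernetes.protobuf"

	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		return nil, err
	}
	return clientset, nil
}
```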
Then we can just use the Server struct kube client and exec rather than passing them through the function parameters. Signed-off-by: Dan Williams <[email protected]>
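A minimal sketch of the shape this refactor implies, with hypothetical field names; the actual Server type in multus may differ:

```go
package server

import (
	"github.com/containernetworking/cni/pkg/invoke"
	"k8s.io/client-go/kubernetes"
)

// Server owns its dependencies, so request handlers can use s.kubeClient and
// s.exec directly instead of receiving them as parameters on every call.
type Server struct {
	kubeClient kubernetes.Interface
	exec       invoke.Exec
}

// NewServer wires the dependencies in once, at construction time.
func NewServer(kubeClient kubernetes.Interface, exec invoke.Exec) *Server {
	return &Server{kubeClient: kubeClient, exec: exec}
}
```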
We'll need these for the next commit. Signed-off-by: Dan Williams <[email protected]>
Signed-off-by: Dan Williams <[email protected]>
Move server start code to a common function that both regular and test code can use. Also shut down the server from the testcases. Signed-off-by: Dan Williams <[email protected]>
Force-pushed from fbfe495 to 8212539
if clientInfo == nil {
    return nil, fmt.Errorf("failed to create in-cluster kube client")
}
return clientInfo, err |
@s1061123 just want to make sure this approach works with your incoming changes?
I'm OK to move forward with this changeset as-is. Overall these look like great improvements to the thick plugin approach, and it's nice to get another take on it.
I would like to get Tomo's take on it as well, especially in consideration of some incoming changes that Tomo has staged.
Thanks for breaking it into commits; let's not squash them when we merge, since they're logically separate. Also, thanks for the thorough refactoring around changed parameter counts and unused exports, as well as for the testing.
Force-pushed from 8212539 to 04170cd
Fixed up the codecheck comments (missing function doc and an if/else simplification). No functional changes.
/cc This is really good. Conceptually really on-board w/ this, especially using protobuf instead of JSON, and informers over frying the K8S API. I'll take a look ASAP.
…access When running in server mode we can use a shared informer to listen for Pod events from the apiserver, and grab pod info from that cache rather than doing direct apiserver requests each time. This reduces apiserver load and retry latency, since multus can poll the local cache more frequently than it should do direct apiserver requests. Oddly static pods don't show up in the informer by the timeout and require a direct apiserver request. Since static pods are not common and are typically long-running, it should not be a big issue to fall back to direct apiserver access for them. Signed-off-by: Dan Williams <[email protected]>
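For illustration, the lookup described in this commit could be sketched roughly as follows; function and parameter names here are mine, not multus's:

```go
package podinfo

import (
	"context"

	v1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	listers "k8s.io/client-go/listers/core/v1"
)

// getPod consults the shared informer's local cache first and only falls back
// to a direct apiserver GET (needed for e.g. static pods, which may not show
// up in the cache) when the pod is not found there.
func getPod(ctx context.Context, podLister listers.PodLister, client kubernetes.Interface, namespace, name string) (*v1.Pod, error) {
	pod, err := podLister.Pods(namespace).Get(name)
	if err == nil {
		return pod, nil
	}
	if !apierrors.IsNotFound(err) {
		return nil, err
	}
	// Cache miss: query the apiserver directly.
	return client.CoreV1().Pods(namespace).Get(ctx, name, metav1.GetOptions{})
}
```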
A couple of the setup variables for NewManager*() are already in the multus config that it gets passed, so use those instead of passing explicitly. Signed-off-by: Dan Williams <[email protected]>
Simplify setup by moving the post-creation operations like GenerateConfig() and PersistMultusConfig() into a new Start() function that also begins watching the configuration directory. This better encapsulates the manager functionality in the object. We can also get rid of the done channel passed to the config manager and just use the existing WaitGroup to determine when to exit the daemon main(). Signed-off-by: Dan Williams <[email protected]>
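A rough sketch of what such a Start() could look like, assuming fsnotify for the directory watch; the type and field names are illustrative, and the GenerateConfig()/PersistMultusConfig() steps are elided:

```go
package config

import (
	"log"
	"sync"

	"github.com/fsnotify/fsnotify"
)

// Manager encapsulates the config-manager lifecycle: Start() performs the
// post-creation steps and then watches the config directory, using the
// caller's WaitGroup instead of a separate done channel.
type Manager struct {
	configDir string
}

// Start watches configDir until the watcher is closed, marking the WaitGroup
// done on exit so the daemon's main() knows when to shut down.
func (m *Manager) Start(wg *sync.WaitGroup) error {
	watcher, err := fsnotify.NewWatcher()
	if err != nil {
		return err
	}
	if err := watcher.Add(m.configDir); err != nil {
		watcher.Close()
		return err
	}

	wg.Add(1)
	go func() {
		defer wg.Done()
		defer watcher.Close()
		for {
			select {
			case event, ok := <-watcher.Events:
				if !ok {
					return
				}
				log.Printf("config event: %v", event)
			case err, ok := <-watcher.Errors:
				if !ok {
					return
				}
				log.Printf("watch error: %v", err)
			}
		}
	}()
	return nil
}
```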
The test was comparing the same configuration to itself, since nothing in the changed CNI configuration is used in the written multus configuration. Instead make sure the updated CNI config contains something that will be reflected in the written multus configuration, and while we're there use a more robust way to wait for the config to be written via gomega.Eventually(). Signed-off-by: Dan Williams <[email protected]>
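A minimal sketch of the gomega.Eventually() pattern referred to here; the file path and the expected substring are made up for illustration:

```go
package config_test

import (
	"os"
	"testing"

	. "github.com/onsi/gomega"
)

// TestConfigEventuallyWritten keeps re-reading the generated file until it
// contains something that the updated CNI config is expected to change.
func TestConfigEventuallyWritten(t *testing.T) {
	g := NewWithT(t)
	path := "/tmp/multus.d/00-multus.conf" // hypothetical output path

	g.Eventually(func() string {
		data, err := os.ReadFile(path)
		if err != nil {
			return ""
		}
		return string(data)
	}, "5s", "100ms").Should(ContainSubstring(`"cniVersion"`))
}
```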
…dule Signed-off-by: Dan Williams <[email protected]>
Signed-off-by: Dan Williams <[email protected]>
…Manager For whatever reason, calling os.Stat() on the readiness indicator file from CmdAdd()/CmdDel() when multus is running in server mode and is containerized often returns "file not found", which triggers the polling behavior of GetReadinessIndicatorFile(). This greatly delays CNI operations that should be quick. Even with an exponential backoff, os.Stat() can still return "file not found" multiple times, even though the file clearly exists. But it turns out we don't need to check the readiness file in server mode when running with MultusConfigFile == "auto". In this mode the server starts the ConfigManager, which (a) waits until the file exists, (b) watches the readiness indicator file with fsnotify, and (c) exits the daemon immediately if the file is deleted or moved. This means we can assume that, while the daemon is running and the server is handling CNI requests, the readiness file exists; otherwise the daemon would have exited. Thus CmdAdd()/CmdDel() don't need to run a lot of possibly failing os.Stat() calls in the CNI hot paths. Signed-off-by: Dan Williams <[email protected]>
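A sketch of the resulting guard, with illustrative names and a simplified single-stat fallback rather than the real polling helper:

```go
package server

import (
	"fmt"
	"os"
)

// readinessGate sketches the guard described in the commit message above; the
// field names are illustrative, not multus's actual ones.
type readinessGate struct {
	serverMode             bool
	multusConfigFile       string
	readinessIndicatorFile string
}

// check skips the readiness-file stat when the daemon itself guarantees the
// file exists: in server mode with MultusConfigFile == "auto", the
// ConfigManager already waited for the file, watches it with fsnotify, and
// exits the daemon if it is removed, so no per-CNI-request check is needed.
func (g *readinessGate) check() error {
	if g.serverMode && g.multusConfigFile == "auto" {
		return nil
	}
	if _, err := os.Stat(g.readinessIndicatorFile); err != nil {
		return fmt.Errorf("readiness indicator %q not ready: %v", g.readinessIndicatorFile, err)
	}
	return nil
}
```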
Force-pushed from 04170cd to d9c06e9
One thing to note is that static pods don't seem to show up via the informer, but are present in the apiserver when queried directly. Server-side pod watch filtering doesn't seem to be the issue, since this happened even before I added the
Good catch. IMO it is perfectly reasonable to fall back to GETting the pod via the API if it's not found in the informer cache.
lgtm! thank you!
When running in server mode we can make a number of performance and efficiency improvements:
@s1061123 @dougbtv