bug: Number of goroutines increases linearly with time #85
Comments
Looks like my PR #86 works. I did an experiment to check the number of goroutines of the ocm status addon while repeatedly creating and deleting downsynced workload objects. The experiment was first done on the released ocm status addon, then repeated on the merging branch of #86, to see whether the number of goroutines stays stable. On v0.2.0-rc11:
| Test run | # of goroutines after the run |
|---|---|
| 1 | 96 |
| 2 | 98 |
| 3 | 101 |
| 4 | 104 |
| 5 | 106 |
| 6 | 110 |
| 7 | 113 |
| 8 | 116 |
| 9 | 119 |
| 10 | 122 |
The number of goroutines increases by about 3 on average after each run.

On the merging branch `memleak`:
| Test run | # of goroutines after the run |
|---|---|
| 1 | 90 |
| 2 | 90 |
| 3 | 90 |
| 4 | 90 |
| 5 | 90 |
| 6 | 90 |
| 7 | 90 |
| 8 | 90 |
| 9 | 90 |
| 10 | 90 |
The number of goroutines is stable.
Details
Here are the detailed steps of the experiment, on v0.2.0-rc11.
1. Run the ginkgo e2e test case of KubeStellar on a fresh setup.
   ```shell
   ginkgo --focus "singleton status creation and deletion" -v
   ```
2. Port-forward the metrics of the status addon in cluster1.
   ```shell
   kubectl --context cluster1 port-forward -n open-cluster-management-agent-addon deploy/status-agent 8080
   ```
3. Rerun the e2e test case on the same setup.
   ```shell
   ginkgo --focus "singleton status creation and deletion" -v -- --skip-setup
   ```
4. Check the number of goroutines.
   ```shell
   $ curl http://localhost:8080/metrics --silent | grep goroutines
   # HELP go_goroutines Number of goroutines that currently exist.
   # TYPE go_goroutines gauge
   go_goroutines 90
   ```

Repeat steps 3 and 4 ten times and record the number of goroutines after each run.
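The goroutine check can also be done programmatically instead of with `curl | grep`. Below is a minimal Go sketch; the helper name `parseGoroutines` is illustrative and not part of KubeStellar. It extracts the `go_goroutines` gauge from Prometheus text-format output:

```go
package main

import (
	"bufio"
	"fmt"
	"strconv"
	"strings"
)

// parseGoroutines extracts the value of the go_goroutines gauge from
// Prometheus text-format metrics; ok is false if the metric is absent.
func parseGoroutines(metrics string) (value float64, ok bool) {
	sc := bufio.NewScanner(strings.NewReader(metrics))
	for sc.Scan() {
		line := sc.Text()
		// Skip HELP/TYPE comment lines; match the sample line itself.
		if strings.HasPrefix(line, "go_goroutines ") {
			fields := strings.Fields(line)
			v, err := strconv.ParseFloat(fields[1], 64)
			if err == nil {
				return v, true
			}
		}
	}
	return 0, false
}

func main() {
	sample := `# HELP go_goroutines Number of goroutines that currently exist.
# TYPE go_goroutines gauge
go_goroutines 90`
	if v, ok := parseGoroutines(sample); ok {
		fmt.Println(v) // prints 90
	}
}
```

In a real check you would feed `parseGoroutines` the body of an HTTP GET against the port-forwarded `http://localhost:8080/metrics` endpoint and record the value after each run.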
I published my own container image and helm chart for the status addon, built from the merging branch of my PR. To perform the experiment on the merging branch, use that image and chart: temporarily modify two references in the KubeStellar code, as shown below, then redo the steps above.
```diff
$ git diff main memleak
diff --git a/config/postcreate-hooks/ocm.yaml b/config/postcreate-hooks/ocm.yaml
index fb81f5a09..cc291f160 100644
--- a/config/postcreate-hooks/ocm.yaml
+++ b/config/postcreate-hooks/ocm.yaml
@@ -31,9 +31,9 @@ spec:
         - upgrade
         - --install
         - status-addon
-        - oci://ghcr.io/kubestellar/ocm-status-addon-chart
+        - oci://ghcr.io/waltforme/ocm-status-addon-chart
         - --version
-        - v0.2.0-rc11
+        - v0.2.0-pull86
         - --namespace
         - open-cluster-management
         - --create-namespace
```
Describe the bug
@dumb0002 has a nice visualization of the bug from his experiment (attached image not reproduced here).
Steps To Reproduce
https://github.com/kubestellar/kubestellar/tree/main/monitoring has comprehensive documentation on monitoring.
Expected Behavior
The number of goroutines should not increase with time.
Additional Context
This may be related to the long-standing issue kubestellar/kubestellar#2040.