[Bug]: Significant memory leakage of pubsub provider #489
Comments
This provider repo does not have enough maintainers to address every issue. Since there has been no activity in the last 90 days it is now marked as stale.
This issue is being closed since there has been no activity for 14 days since marking it as stale.
/fresh. It's still a problem :-(
Is there an existing issue for this?
I have searched the existing issues
Affected Resource(s)
xpkg.upbound.io/upbound/provider-gcp-pubsub
Resource MRs required to reproduce the bug
```console
k describe provider provider-gcp-pubsub
k get pod provider-gcp-pubsub-4f8a71eab319-85688d99c-t5pwq -o=yaml -n=crossplane-system
```
Steps to Reproduce
Create c. 100 topics (topics.pubsub.gcp.upbound.io), c. 100 subscriptions (subscriptions.pubsub.gcp.upbound.io), and the related IAM resources (c. 100 topiciammembers.pubsub.gcp.upbound.io + c. 100 subscriptioniammembers.pubsub.gcp.upbound.io), then let the provider reconcile them; a sketch of an example MR pair follows below.
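For illustration only, a minimal Topic + Subscription MR pair of the kind created here might look like the sketch below (the metadata names and the providerConfigRef name are hypothetical; the pubsub.gcp.upbound.io/v1beta1 API group is assumed from the affected package):

```yaml
# Hypothetical sample MRs; create ~100 of each to match the reported scale.
apiVersion: pubsub.gcp.upbound.io/v1beta1
kind: Topic
metadata:
  name: example-topic-000          # hypothetical name
spec:
  forProvider: {}
  providerConfigRef:
    name: default                  # assumes a ProviderConfig named "default"
---
apiVersion: pubsub.gcp.upbound.io/v1beta1
kind: Subscription
metadata:
  name: example-subscription-000   # hypothetical name
spec:
  forProvider:
    topicRef:
      name: example-topic-000      # cross-resource reference to the Topic above
  providerConfigRef:
    name: default
```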
What happened?
The pubsub provider pod's memory usage grows to c. 20 GB over the course of a day, at which point the pod is OOM-killed.
This is true both for provider v1.0.1 + Crossplane 1.15.1 and for provider v1.0.0 + Crossplane 1.15.
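As a point of reference, the growth can be watched with a command like the following, assuming metrics-server is available in the cluster (the pod name is the one from this report):

```console
$ kubectl top pod provider-gcp-pubsub-4f8a71eab319-85688d99c-t5pwq -n crossplane-system
```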
Shelling into the pod and running top confirms that all of the memory is used by the provider application (note: the screenshots below were not taken at the same time, hence the different memory usage reported).
(screenshots of top output inside the pod omitted)
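A sketch of that check, assuming the provider image ships a top binary (if not, kubectl debug with an ephemeral container is an alternative):

```console
$ kubectl exec -it provider-gcp-pubsub-4f8a71eab319-85688d99c-t5pwq -n crossplane-system -- top
```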
Note that the other pods in the crossplane-system namespace (crossplane, crossplane-rbac-manager, and provider-gcp-cloudplatform) are behaving fine under this configuration.
I have not yet had a chance to recompile the provider with pprof enabled to investigate further.
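For what it's worth, a minimal sketch of what "recompile with pprof enabled" could look like; the localhost:6060 listener and the use of net/http/pprof are assumptions for illustration, not the provider's actual wiring:

```go
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers the /debug/pprof/* handlers on http.DefaultServeMux
)

func main() {
	// Hypothetical: serve pprof on localhost only; in the real provider this
	// goroutine would run alongside the controller manager.
	go func() {
		log.Println(http.ListenAndServe("localhost:6060", nil))
	}()
	select {} // stand-in for the provider's long-running controller loop
}
```

With such a build, a heap profile could then be pulled by port-forwarding 6060 to the pod and running go tool pprof http://localhost:6060/debug/pprof/heap.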
Relevant Error Output Snippet
Note that the system is otherwise working fine: all topics and subscriptions are shown as ready and in sync, and the Crossplane stack produces no logs indicating any issues.
Crossplane Version
1.15.1
Provider Version
1.0.1
Kubernetes Version
v1.28.5-gke.1217000
Kubernetes Distribution
GKE
Additional Info
Alas, my initial hope that plugging in debug logging (which issue #471 already kindly addressed) would also resolve the memory leak did not come true :-(.
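For context, the debug logging referenced above is typically switched on by passing --debug to the provider, e.g. via a ControllerConfig like the sketch below (the object name is hypothetical, and newer Crossplane releases favour DeploymentRuntimeConfig over the deprecated ControllerConfig):

```yaml
apiVersion: pkg.crossplane.io/v1alpha1
kind: ControllerConfig
metadata:
  name: debug-config   # hypothetical name
spec:
  args:
    - --debug          # assumes the provider accepts the upjet --debug flag
```

The Provider object would then reference it via spec.controllerConfigRef.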