
Do not OOM due to CountersCache.Batcher getting too big #295

Closed
alexsnaps opened this issue Apr 24, 2024 · 0 comments
Labels
kind/enhancement New feature or request

Comments

@alexsnaps
Member

We need to either explicitly bound that data structure or bound it through heuristics.
Today it will grow forever, most notably when partitioned... It could certainly be compacted (e.g. by evicting expired AtomicExpiringValue entries), but the key space is the main issue: it is derived from the user's defined limits and can be really big (e.g. many conditions, large variables, ...)
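To illustrate why the key space dominates, here is a minimal sketch of the growth pattern. The key-building scheme and names (`counter_key`, `per_user`, `user_id`) are hypothetical, not limitador's actual internals: the point is that each counter key encodes the observed values of a limit's variables, so distinct keys accumulate as the cross product of those values and are never evicted.

```rust
use std::collections::HashMap;

// Hypothetical key derivation: a counter key combines a limit's id with the
// observed values of its variables, so the key space grows with the cross
// product of all values seen (e.g. every user_id x every path).
fn counter_key(limit_id: &str, vars: &[(&str, &str)]) -> String {
    let mut key = String::from(limit_id);
    for (name, value) in vars {
        key.push('/');
        key.push_str(name);
        key.push('=');
        key.push_str(value);
    }
    key
}

fn main() {
    let mut batcher: HashMap<String, i64> = HashMap::new();
    // Every distinct (user, path) pair creates a new entry that is never removed.
    for user in 0..1000 {
        for path in ["/a", "/b", "/c"] {
            let key =
                counter_key("per_user", &[("user_id", &user.to_string()), ("path", path)]);
            *batcher.entry(key).or_insert(0) += 1;
        }
    }
    // 1000 users x 3 paths = 3000 distinct keys, bounded only by traffic variety.
    assert_eq!(batcher.len(), 3000);
}
```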

@alexsnaps alexsnaps added the kind/enhancement New feature or request label Apr 24, 2024
@alexsnaps alexsnaps changed the title Do not OOM due to CachedRedisStorage.CachedRedisStorage getting too big Do not OOM due to CountersCache.Batcher getting too big Apr 30, 2024
chirino added a commit to chirino/limitador that referenced this issue Apr 30, 2024
alexsnaps pushed a commit to alexsnaps/limitador that referenced this issue Apr 30, 2024
chirino added a commit that referenced this issue May 15, 2024
fixes #295: Use a semaphore to protect the Batcher from unbounded memory growth.
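A minimal sketch of the semaphore approach the fix describes, assuming a batcher that maps counter keys to pending deltas. All names (`BoundedBatcher`, `Semaphore`, `add`, `flush`) are illustrative, not limitador's actual API; since `std` has no stable counting semaphore, one is built here from `Mutex` + `Condvar`. Inserting a new key consumes a permit (blocking when the cap is reached), merging into an existing key is free, and flushing returns the permits, so the batch can never exceed its configured capacity.

```rust
use std::collections::HashMap;
use std::sync::{Condvar, Mutex};

// A counting semaphore built from Mutex + Condvar (std has no stable Semaphore).
struct Semaphore {
    permits: Mutex<usize>,
    cv: Condvar,
}

impl Semaphore {
    fn new(permits: usize) -> Self {
        Semaphore { permits: Mutex::new(permits), cv: Condvar::new() }
    }

    // Block until a permit is available, then take it.
    fn acquire(&self) {
        let mut p = self.permits.lock().unwrap();
        while *p == 0 {
            p = self.cv.wait(p).unwrap();
        }
        *p -= 1;
    }

    // Return n permits and wake any blocked writers.
    fn release(&self, n: usize) {
        let mut p = self.permits.lock().unwrap();
        *p += n;
        self.cv.notify_all();
    }
}

// Hypothetical bounded batcher: holds at most `capacity` distinct keys.
struct BoundedBatcher {
    sem: Semaphore,
    pending: Mutex<HashMap<String, i64>>,
}

impl BoundedBatcher {
    fn new(capacity: usize) -> Self {
        BoundedBatcher { sem: Semaphore::new(capacity), pending: Mutex::new(HashMap::new()) }
    }

    // Merging into an existing key is free; a brand-new key consumes a permit,
    // so the map can never exceed `capacity` entries.
    fn add(&self, key: &str, delta: i64) {
        let mut map = self.pending.lock().unwrap();
        if let Some(v) = map.get_mut(key) {
            *v += delta;
            return;
        }
        drop(map); // release the map lock before possibly blocking on the semaphore
        self.sem.acquire();
        let mut map = self.pending.lock().unwrap();
        if let Some(v) = map.get_mut(key) {
            // Another writer inserted the key while we waited: merge and give
            // the unneeded permit back.
            *v += delta;
            self.sem.release(1);
        } else {
            map.insert(key.to_string(), delta);
        }
    }

    // Draining the batch (e.g. to write through to Redis) returns all permits.
    fn flush(&self) -> HashMap<String, i64> {
        let mut map = self.pending.lock().unwrap();
        let drained = std::mem::take(&mut *map);
        self.sem.release(drained.len());
        drained
    }
}

fn main() {
    let b = BoundedBatcher::new(2);
    b.add("a", 1);
    b.add("a", 2); // merge, no permit needed
    b.add("b", 5);
    let out = b.flush();
    assert_eq!(out.get("a"), Some(&3));
    assert_eq!(out.get("b"), Some(&5));
    b.add("c", 1); // capacity is restored after the flush
    assert_eq!(b.flush().len(), 1);
}
```

The trade-off is back-pressure instead of OOM: once the cap is hit, writers of new keys block until a flush frees permits, which bounds memory at the cost of latency under partition.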
Projects
Status: Done

1 participant