Graphouse drops incoming metrics during warm-up #112
Hello.
I've noticed that Graphouse doesn't save all incoming metrics until its metrics tree is fully populated. It looks like MetricCacher performs a lot of side logic synchronously and can't keep up with the incoming stream. Increasing the memory and the depth of the in-memory tree does shorten the warm-up, but the main problem is the data loss.
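To illustrate the failure mode, here is a minimal sketch (not Graphouse's actual code; the class and field names are hypothetical) of how a synchronous, bounded accept path drops metrics when the consumer is still slow during warm-up: `offer()` on a full queue returns false and the metric is simply lost.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class WarmupDropSketch {
    // Hypothetical bounded input queue of a cacher that must not block producers.
    static final BlockingQueue<String> queue = new ArrayBlockingQueue<>(2);
    static int dropped = 0;

    static void accept(String metric) {
        // offer() returns false when the queue is full; the metric is lost.
        if (!queue.offer(metric)) {
            dropped++;
        }
    }

    public static void main(String[] args) {
        // While the tree is warming up the consumer lags, so the queue fills
        // and everything past its capacity is dropped.
        for (int i = 0; i < 5; i++) {
            accept("one_min.host" + i + ".cpu");
        }
        System.out.println("queued=" + queue.size() + " dropped=" + dropped);
    }
}
```

With a capacity of 2 and 5 incoming metrics, 3 are dropped; this matches the observed behaviour where throughput recovers only after the tree is warm.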
Here are my custom properties:
And the vmoptions:
Metrics statistics:
2019-03-16 13:16:56,786 INFO [MetricSearch MetricSearch thread] Actual metrics count = 2491274, dir count: 653674, cache stats: CacheStats{hitCount=34678346, missCount=5402061, loadSuccessCount=175284, loadExceptionCount=24, totalLoadTime=1749975810283, evictionCount=0}
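For context, the derived figures from that CacheStats line can be computed directly (a small sketch; the numbers are copied from the log above):

```java
public class CacheStatsMath {
    public static void main(String[] args) {
        // Figures copied from the logged CacheStats line.
        long hits = 34_678_346L;
        long misses = 5_402_061L;
        long loadSuccess = 175_284L;
        long totalLoadTimeNs = 1_749_975_810_283L;

        // Fraction of lookups served from the cache.
        double hitRate = (double) hits / (hits + misses);
        // Average cost of a cache miss that had to load from the DB.
        double avgLoadMs = totalLoadTimeNs / (double) loadSuccess / 1_000_000.0;

        System.out.printf("hit rate = %.3f, avg load = %.1f ms%n", hitRate, avgLoadMs);
    }
}
```

This gives a hit rate of about 86.5% and an average load penalty of roughly 10 ms per miss, which is why misses during warm-up are so expensive.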
This is how the number of saved metrics looks during warm-up:
graphouse.tree.in-memory-levels=6
graphouse.tree.in-memory-levels=7
During warm-up, I also see many stack traces like the following:
A side problem: ru.yandex.market.graphouse.search.tree.DirContentBatcher.loadDirContent always takes this code path and never forms batches. This overloads ClickHouse heavily and causes lags on the DB side.
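The difference between the two behaviours can be sketched in a few lines (hypothetical method and field names, not the real DirContentBatcher API): loading each directory with its own query issues N round-trips to ClickHouse, while a batcher should coalesce all pending directories into a single query.

```java
import java.util.Arrays;
import java.util.Collection;
import java.util.List;

public class DirContentBatchSketch {
    static int clickhouseQueries = 0;

    // Broken behaviour described above: one SELECT per directory.
    static void loadOneByOne(Collection<String> dirs) {
        for (String dir : dirs) {
            clickhouseQueries++; // each call hits ClickHouse separately
        }
    }

    // Intended behaviour: one query covering the whole pending batch,
    // e.g. a single "... WHERE parent IN (...)" SELECT.
    static void loadAsBatch(Collection<String> dirs) {
        if (!dirs.isEmpty()) {
            clickhouseQueries++;
        }
    }

    public static void main(String[] args) {
        List<String> pending = Arrays.asList("one_min.hostA", "one_min.hostB", "one_min.hostC");

        loadOneByOne(pending);
        int unbatched = clickhouseQueries;

        clickhouseQueries = 0;
        loadAsBatch(pending);
        int batched = clickhouseQueries;

        System.out.println("unbatched=" + unbatched + " batched=" + batched);
    }
}
```

With thousands of directories being resolved during warm-up, the per-directory path multiplies query load by the batch size, which matches the DB-side lag observed here.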