Computation and time needed for benchmarks (and winners' submissions) #9
Comments
Just adding for anyone interested in the thread: this related paper has much more information on the compute needed for similar approaches (statistical/ML). It's not the same dataset, but a very interesting read: "Comparison of statistical and machine learning methods for daily SKU demand forecasting"
Hi there, first of all, many thanks for all the effort and insights resulting from this competition
(now deep-diving into the findings paper). Amazing work and contribution!
One thing I was looking for but couldn't find so far: would it be possible to know, or at least get an idea of, the compute and time needed by the benchmarks and the winning submissions? In practice, this is a relevant dimension for comparing different approaches.
Example: if I understood correctly, for the exponential smoothing bottom-up benchmark, the fit was run ~30k times (the number of time series at the most disaggregated level). From the code it looks like this is done in parallel, but it probably still takes some time.
Would be great to get any info on this.
Thanks!
(from https://github.com/Mcompetitions/M5-methods/blob/60829cf13c8688b164a7a2fc8c4832cc216bdbec/validation/Point%20Forecasts%20-%20Benchmarks.R)
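To make the scale of the question concrete, here is a rough back-of-the-envelope sketch of what "fit ~30k series bottom-up" costs: fit one simple exponential smoothing model per series and extrapolate the per-fit time to the full set. This is a minimal illustration, not the competition's actual R benchmark code; the grid-search fitting routine, the toy random data, and the series count used for timing are all assumptions made for the demo (M5 has 30,490 bottom-level series, which is the only figure taken from the competition).

```python
import random
import time

def ses_fit(y, alphas=(0.1, 0.3, 0.5, 0.7, 0.9)):
    """Fit simple exponential smoothing by grid-searching the smoothing
    parameter alpha on one-step-ahead squared error (a crude stand-in
    for a real optimizer)."""
    best_alpha, best_sse, best_level = None, float("inf"), y[0]
    for a in alphas:
        level, sse = y[0], 0.0
        for obs in y[1:]:
            sse += (obs - level) ** 2          # one-step-ahead error
            level = a * obs + (1 - a) * level  # update smoothed level
        if sse < best_sse:
            best_alpha, best_sse, best_level = a, sse, level
    return best_alpha, best_level  # best_level = one-step-ahead forecast

random.seed(0)
# M5 has 30,490 bottom-level series of ~1,941 days; time a small sample
# of synthetic series here and project to the full set.
n_series, n_obs, total_series = 200, 1941, 30490
series = [[random.random() for _ in range(n_obs)] for _ in range(n_series)]

t0 = time.perf_counter()
fits = [ses_fit(y) for y in series]
elapsed = time.perf_counter() - t0
projected = elapsed / n_series * total_series
print(f"{n_series} fits in {elapsed:.2f}s; "
      f"projected ~{projected:.0f}s sequential for {total_series} series")
```

Even a fast per-series fit multiplied by 30k series adds up, which is presumably why the benchmark script parallelizes the loop; with parallel workers, the projected wall-clock time divides roughly by the number of cores.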