
Computation and time needed for benchmarks (and winners' submissions) #9

Open
rquintino opened this issue Oct 14, 2020 · 1 comment

@rquintino

Hi there! First of all, many thanks for all the effort and insights resulting from this competition (I'm now deep-diving into the findings paper). Amazing work and contribution!

One thing I was looking for but couldn't find so far: would it be possible to know, or get a rough idea of, the compute and time needed by the benchmarks and the winning submissions? In practice, this is a relevant dimension for evaluating the different approaches.

Example: if I understood correctly, for the exponential smoothing bottom-up benchmark the fit is run ~30k times (the number of time series at the most disaggregated level)? From the code it looks like this is done in parallel, but it probably still takes a fair amount of time.
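
For anyone who wants a rough first-hand estimate, here is a minimal sketch (my own, not the benchmark script) that times a small parallel batch of exponential smoothing fits and extrapolates to ~30k series. The `ses()`/`mclapply()` choices, the synthetic Poisson data, and the series length are illustrative assumptions; the actual code in the linked repository may differ.

```r
# Minimal sketch, not the competition code: time a parallel batch of
# simple exponential smoothing fits and extrapolate to ~30k series.
library(forecast)  # ses()
library(parallel)  # mclapply(), detectCores()

n_series <- 100    # small illustrative batch
n_days   <- 1913   # roughly the length of an M5 daily training series
horizon  <- 28     # the M5 forecasting horizon

# Synthetic stand-in data: Poisson "demand" counts with weekly frequency.
series_list <- replicate(
  n_series,
  ts(rpois(n_days, lambda = 2), frequency = 7),
  simplify = FALSE
)

n_cores <- max(1L, detectCores() - 1L)

# mclapply() forks, so it parallelizes on Unix-alikes only;
# on Windows substitute parLapply() with a cluster.
elapsed <- system.time({
  fc <- mclapply(series_list,
                 function(y) ses(y, h = horizon)$mean,
                 mc.cores = n_cores)
})["elapsed"]

cat(sprintf("%d series in %.1f s -> ~%.0f s projected for 30k series\n",
            n_series, elapsed, elapsed / n_series * 30000))
```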

It would be great to get any info on this.

thanks!
(from https://github.com/Mcompetitions/M5-methods/blob/60829cf13c8688b164a7a2fc8c4832cc216bdbec/validation/Point%20Forecasts%20-%20Benchmarks.R)

@rquintino (Author)

Just adding for anyone interested in this thread: the related paper below has much more information on the compute needed for similar approaches (statistical/ML). It's not the same dataset, but it's a very interesting read.

"Comparison of statistical and machine learning methods for daily SKU demand forecasting"
https://www.researchgate.net/publication/344374729_Comparison_of_statistical_and_machine_learning_methods_for_daily_SKU_demand_forecasting
