Hi there, first of all, many thanks for all the effort and insights resulting from this competition
(now deep-diving into the findings paper). Amazing work and contribution!
There is one thing I was looking for but couldn't find so far: would it be possible to know, or at least get an idea of, the compute and time needed by the benchmarks and winning submissions? In practice, this is a relevant dimension for evaluating different approaches.
Example: if I understood correctly, for the exponential smoothing bottom-up benchmark the fit was run ~30k times (the number of time series at the most disaggregated level). From the code it looks like this is done in parallel, but it probably still takes a fair amount of time.
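To make the question concrete, here is a back-of-envelope timing sketch (not the competition code, and in Python rather than the repo's R): it fits a simple exponential smoothing model (coarse grid search over the smoothing factor) to a small batch of synthetic series and extrapolates the wall time to ~30,490 bottom-level series. The series count, series length, and grid are all assumptions for illustration only.

```python
import random
import time

def ses_sse(series, alpha):
    """One-step-ahead sum of squared errors of SES for a given smoothing factor."""
    level = series[0]
    sse = 0.0
    for y in series[1:]:
        sse += (y - level) ** 2
        level = alpha * y + (1 - alpha) * level
    return sse

def fit_ses(series, grid=None):
    """Pick the alpha on a coarse grid that minimizes one-step-ahead SSE."""
    grid = grid or [i / 20 for i in range(1, 21)]  # 0.05, 0.10, ..., 1.00
    return min(grid, key=lambda a: ses_sse(series, a))

if __name__ == "__main__":
    random.seed(0)
    # Assumed numbers: 100 sample series, ~30,490 total, ~1,913 training days.
    n_sample, n_total, length = 100, 30_490, 1_913
    batch = [[random.random() for _ in range(length)] for _ in range(n_sample)]
    start = time.perf_counter()
    alphas = [fit_ses(s) for s in batch]
    elapsed = time.perf_counter() - start
    print(f"{n_sample} fits took {elapsed:.2f}s; "
          f"naive projection for {n_total} series: "
          f"{elapsed * n_total / n_sample / 60:.1f} min (single core)")
```

Even this toy version suggests a single-core run over 30k series is non-trivial, which is why per-run compute/time figures for the actual benchmarks would be so useful for comparison.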
It would be great to get any info on this.
Thanks!
(from https://github.com/Mcompetitions/M5-methods/blob/60829cf13c8688b164a7a2fc8c4832cc216bdbec/validation/Point%20Forecasts%20-%20Benchmarks.R)