
Vignette on approximations and speedups #629

Open
sbfnk opened this issue Mar 28, 2024 · 5 comments · May be fixed by #695
@sbfnk
Contributor

sbfnk commented Mar 28, 2024

Type of issue:
Proposal for a new vignette

Detail:
People often state that estimation in the package is slow (e.g. https://journals.plos.org/digitalhealth/article/figure?id=10.1371/journal.pdig.0000052.t002). This is to some degree a function of the default choice of model and MCMC algorithm. Depending on the use case, choosing a different model (e.g. the non-mechanistic model, or a random walk on Rt) or an approximation (variational Bayes/Laplace/pathfinder) can address this issue. It would be great to write a vignette that outlines these options and discusses their implications for the resulting estimates.
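As a rough illustration of the kind of switch discussed here, this is a minimal sketch of swapping the default MCMC sampling for an approximate method via `stan_opts()`. It assumes a `reported_cases` data frame and a generation time set up as in the package examples; exact argument names may differ between EpiNow2 versions.

```r
library(EpiNow2)

# Sketch: run the model with variational Bayes instead of the
# default full MCMC (NUTS). `reported_cases` is assumed to be a
# data frame of dates and case counts prepared elsewhere.
fit_vb <- estimate_infections(
  data = reported_cases,
  generation_time = generation_time_opts(example_generation_time),
  stan = stan_opts(method = "vb")  # approximate, much faster than MCMC
)
```

With the cmdstanr backend, `method = "pathfinder"` or `method = "laplace"` could be substituted in the same place, at the cost of approximate rather than exact posterior estimates.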

@jamesmbaazam
Contributor

Great idea. I think the issue of speed has always been a deciding factor for most people, and providing explicit guidance on it would help a lot.

Do we want to ship this with the next version release?

@jamesmbaazam
Contributor

Can we brainstorm here what the options are? More like "As a user, I only care about result X so I should use option Y if I care about speed."

@sbfnk
Contributor Author

sbfnk commented Apr 19, 2024

Do we want to ship this with the next version release?

I think we want to get 1.5.0 out as soon as possible, as we've already accumulated quite a lot of new functionality that is sitting in development. So no, I think this is for later.

@jamesmbaazam jamesmbaazam self-assigned this Apr 30, 2024
@sbfnk
Contributor Author

sbfnk commented May 17, 2024

Can we brainstorm here what the options are? More like "As a user, I only care about result X so I should use option Y if I care about speed."

Currently available options are:

  1. If only doing a retrospective analysis without much missing data, use the non-mechanistic model.
  2. If happy with approximate results, run with method = "vb" or, if using the cmdstanr backend, method "pathfinder" or "laplace".
  3. If happy with non-smooth estimates of Rt, use e.g. a weekly random walk.
  4. Use more cores and/or generate fewer samples.
  5. Make the model itself more efficient (obviously not an option for the unsuspecting user, but something we should always strive to do).

It would be nice to explore these options and their impact on speed and quality of estimates. A little bit of that is happening in https://github.com/epiforecasts/EpiNow2/tree/main/inst/dev/recover-synthetic
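The user-facing options above could be sketched roughly as follows. This is an illustration only, assuming EpiNow2's `*_opts()` interface; `reported_cases` and `gt` (a generation time, e.g. from `generation_time_opts()`) are placeholders set up elsewhere, and argument names may differ across package versions.

```r
library(EpiNow2)

# gt: a generation time distribution, e.g.
# gt <- generation_time_opts(example_generation_time)

# 1. Non-mechanistic model (retrospective analyses): disable the Rt model
fit_nonmech <- estimate_infections(
  reported_cases, generation_time = gt, rt = NULL
)

# 2. Approximate inference instead of full MCMC
fit_vb <- estimate_infections(
  reported_cases, generation_time = gt,
  stan = stan_opts(method = "vb")
)
# with the cmdstanr backend, "pathfinder" or "laplace" may also be available:
fit_pf <- estimate_infections(
  reported_cases, generation_time = gt,
  stan = stan_opts(backend = "cmdstanr", method = "pathfinder")
)

# 3. Weekly random walk on Rt instead of the smoother default process
fit_rw <- estimate_infections(
  reported_cases, generation_time = gt,
  rt = rt_opts(rw = 7)
)

# 4. More cores and/or fewer samples
fit_fast <- estimate_infections(
  reported_cases, generation_time = gt,
  stan = stan_opts(cores = 4, samples = 1000)
)
```

Option 5 (making the model itself more efficient) is a development task rather than a user-facing setting, so it has no equivalent call here.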

@seabbs
Contributor

seabbs commented May 17, 2024

That is a good list I think.

Tangent from this issue below.

(obviously not an option for the unsuspecting user, but something we should always strive to do)

If EpiNow2 is not being deprecated any time soon, this should have some focus IMO, as there are clear performance issues whose resolution could speed things up for people.
