Add Laplace approximation as algorithm #16

designs/0016-hessian_optimize_cmdstan.md
- Feature Name: Allow CmdStan to optionally print draws from a Laplace approximation of the posterior when optimizing
- Start Date: 2020-03-08
- RFC PR: 16
- Stan Issue:

# Summary
[summary]: #summary

When computing a MAP estimate, the Hessian of the log density can be used to construct a normal approximation to the posterior.

# Motivation
[motivation]: #motivation

I have a Stan model that is too slow to sample routinely. Because optimization seems to give reasonable results, it would be nice to have a normal approximation to the posterior to give some sense of the uncertainty in the problem as well.

It is standard to compute a normal approximation to the posterior whose covariance is the inverse of the Hessian of the negative log density.

If the MAP estimate is ```u```, and the Hessian of the unnormalized log density at ```u``` is ```H```, then a posterior draw on the constrained scale is:

```
unconstrained_sample = multivariate_normal_rng(mean = u, cov = -inverse(H))
constrained_sample = constrain(unconstrained_sample)
```
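
As a concrete illustration, here is a minimal NumPy sketch of that two-step draw. The values of ```u``` and ```H``` are made up, and the constraining transform is assumed to be ```exp()``` (as for parameters declared with a lower bound of zero); a real model's transform depends on its parameter declarations.

```python
import numpy as np

# Hypothetical values standing in for a real model's MAP estimate and
# Hessian of the unnormalized log density on the unconstrained scale.
u = np.array([1.2, -0.5])      # MAP estimate (unconstrained)
H = np.array([[-4.0, 0.5],
              [0.5, -2.0]])    # Hessian at u (negative definite)

rng = np.random.default_rng(0)
cov = -np.linalg.inv(H)        # covariance of the normal approximation
unconstrained_sample = rng.multivariate_normal(mean=u, cov=cov)

# The constraining transform depends on the model; here we assume both
# parameters are declared with <lower=0>, so the transform is exp().
constrained_sample = np.exp(unconstrained_sample)
```

Many draws can be taken at once by passing ```size=``` to ```multivariate_normal```.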

We can output the unnormalized log densities, in the unconstrained space, of the true posterior and the approximate posterior to compute importance sampling diagnostics and estimates.
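
A sketch of how those two columns could be used downstream, with made-up per-draw log densities; a proper diagnostic would use Pareto-smoothed importance sampling (PSIS), but self-normalized weights and an effective sample size show the idea.

```python
import numpy as np

# Hypothetical per-draw unnormalized log densities: log_p under the true
# posterior, log_g under the normal approximation.
log_p = np.array([-1.0, -2.0, -1.5, -3.0])
log_g = np.array([-2.0, -3.0, -1.0, -2.5])

# Self-normalized importance weights, stabilized by subtracting the max.
log_w = log_p - log_g
w = np.exp(log_w - np.max(log_w))
w /= w.sum()

# Effective sample size: a quick check of how well g covers p.
ess = 1.0 / np.sum(w ** 2)
```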

RStan already supports this via the ```hessian``` argument to ```optimizing```.

# Guide-level explanation
[guide-level-explanation]: #guide-level-explanation

This proposal adds two arguments to the CmdStan optimize interface.

```laplace_draws``` - The number of draws to take from the posterior approximation. By default this is zero, and no Laplace approximation is computed.

```laplace_add_diag``` - A value added to the diagonal of the Hessian approximation to fix small near-singularities (defaulting to zero).

The output is printed after the optimum.

A model can be called by:
```
./model optimize laplace_draws=100 data file=data.dat
```

or with the diagonal:
```
./model optimize laplace_draws=100 laplace_add_diag=1e-10 data file=data.dat
```

Optimizing output currently looks like:
```
# stan_version_major = 2
...
# refresh = 100 (Default)
lp__,b.1,b.2
3427.64,7.66366,5.33466
```

The new output would look like:

```
# stan_version_major = 2
...
# refresh = 100 (Default)
lp__,b.1,b.2
3427.64,7.66366,5.33466
# Draws from Laplace approximation:
lp__,log_p,log_g,b.1,b.2
0,-1,-2,7.66364,5.33463
0,-2,-3,7.66367,5.33462
```

The ```lp__```, ```log_p```, ```log_g``` formatting is intended to mirror the ADVI output.

# Reference-level explanation
[reference-level-explanation]: #reference-level-explanation

As for computing the Hessian: because higher-order autodiff doesn't work with the ODE solvers, the 1D integrator, and such, I think we should compute the Hessian with finite differences, using the same finite difference implementation that the test framework does (https://github.com/stan-dev/math/blob/develop/stan/math/prim/functor/finite_diff_hessian_auto.hpp).
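
For illustration only, here is a simplified central-difference Hessian of a scalar log density. This is not Stan Math's actual implementation (which chooses step sizes more carefully); it just shows the shape of the computation.

```python
import numpy as np

def finite_diff_hessian(f, x, eps=1e-4):
    """Central-difference Hessian of a scalar function f at x.
    Simplified sketch; real implementations tune eps per coordinate."""
    n = x.size
    H = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            e_i = np.zeros(n); e_i[i] = eps
            e_j = np.zeros(n); e_j[j] = eps
            H[i, j] = (f(x + e_i + e_j) - f(x + e_i - e_j)
                       - f(x - e_i + e_j) + f(x - e_i - e_j)) / (4 * eps**2)
    return H

# Example: log density of a standard normal; the Hessian is -I everywhere.
log_density = lambda x: -0.5 * np.sum(x ** 2)
H = finite_diff_hessian(log_density, np.array([0.3, -0.7]))
```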

Ben Goodrich points out that it would be better to compute this Hessian using finite differences of first-order gradients. He is correct, but the fully finite-differenced Hessian is what is currently implemented in Stan Math, so that is what I'm rolling with.

# Drawbacks
[drawbacks]: #drawbacks

Providing draws instead of the Laplace approximation itself is rather inefficient, but it is the easiest thing to code.

We also have to deal with possibly singular Hessians; the ```laplace_add_diag``` argument was added to overcome these. Singular Hessians will probably be quite common, especially with Hessians computed by finite differences.
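
A sketch of one plausible reading of the regularization, assuming the value is added to the negated Hessian (i.e. to the precision matrix of the normal approximation); the exact sign convention is an assumption, and the Hessian values are made up.

```python
import numpy as np

# Hypothetical nearly singular Hessian of the log density (one direction
# with almost no curvature), as might come out of finite differencing.
H = np.array([[-3.0, 0.0],
              [0.0, -1e-14]])

add_diag = 1e-10
# Subtracting add_diag from the diagonal of H is equivalent to adding it
# to the diagonal of -H, making the precision matrix positive definite.
H_reg = H - add_diag * np.eye(2)
cov = np.linalg.inv(-H_reg)
```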

# Rationale and alternatives
[rationale-and-alternatives]: #rationale-and-alternatives

Another design would be to print the Hessian on the unconstrained space and let users handle the sampling and the parameter transformation. The issue is that there is no good way for users to do these parameter transformations outside of certain interfaces (at least RStan, maybe PyStan).

Another design would be to print a Hessian on the constrained space and let users handle the sampling. In this case users would also be expected to handle the constraints, and I don't know how that would work practically (rejection sampling, maybe?).

# Prior art
[prior-art]: #prior-art

RStan does a version of this already.
