replay (iterated) LOFI #24

Open
murphyk opened this issue Mar 27, 2023 · 1 comment

murphyk commented Mar 27, 2023

Another way to avoid overcounting would be to use the buffer for updating the linearization point without updating mu or sigma. For example, at the beginning of step t we have belief state (mu_{t|t-1}, Upsilon_{t|t-1}, W_{t|t-1}) and a linearized model \hat{h}_t based at mu_{t-1}. We run LOFI as normal through all items in \data_{t-b:t} (a total of b+1 predict-then-update steps), yielding a new belief state (mu*, Upsilon*, W*). Then we throw out Upsilon* and W* and define a new linearized model \hat{h}* based at mu*. Finally we do a single update step from (mu_{t|t-1}, Upsilon_{t|t-1}, W_{t|t-1}) using \hat{h}* and \data_t.

cf. Á. F. García-Fernández, L. Svensson, and S. Särkkä, "Iterated Posterior Linearization Smoother," IEEE Trans. Automat. Contr., vol. 62, no. 4, pp. 2056–2063, Apr. 2017, doi: 10.1109/TAC.2016.2592681. [Online]. Available: https://web.archive.org/web/20200506190022id_/https://research.chalmers.se/publication/249335/file/249335_Fulltext.pdf
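
A minimal sketch of this procedure, using a plain full-covariance EKF update as a stand-in for LOFI's low-rank (Upsilon, W) update; the function names, `h`, and `R` below are illustrative assumptions, not this repo's API:

```python
import jax
import jax.numpy as jnp

def ekf_update(mu, Sigma, h_lin, x, y, R):
    """One linearized (EKF-style) measurement update; stand-in for the
    LOFI update, which would instead maintain (mu, Upsilon, W)."""
    H = jax.jacobian(h_lin, argnums=0)(mu, x)   # Jacobian wrt parameters
    v = y - h_lin(mu, x)                        # innovation
    S = H @ Sigma @ H.T + R                     # innovation covariance
    K = Sigma @ H.T @ jnp.linalg.inv(S)         # Kalman gain
    return mu + K @ v, Sigma - K @ S @ K.T

def replay_step(mu_pred, Sigma_pred, h, buffer, data_t, R):
    """Replay the buffer only to move the linearization point."""
    # 1) Run the filter as normal through D_{t-b:t} (b+1 updates;
    #    the per-item predict steps are omitted here for brevity),
    #    yielding mu*. We keep mu* and discard Sigma*.
    mu, Sigma = mu_pred, Sigma_pred
    for (x, y) in buffer:
        mu, Sigma = ekf_update(mu, Sigma, h, x, y, R)
    mu_star = mu

    # 2) Define the new linearized model \hat{h}* based at mu*:
    #    \hat{h}*(theta, x) = h(mu*, x) + H(mu*, x) (theta - mu*).
    def h_star(theta, x):
        H = jax.jacobian(h, argnums=0)(mu_star, x)
        return h(mu_star, x) + H @ (theta - mu_star)

    # 3) A single update from the *original* predicted belief, using
    #    \hat{h}* and only the current datum, so the replayed buffer
    #    improves the linearization without being double-counted in
    #    the posterior statistics.
    x_t, y_t = data_t
    return ekf_update(mu_pred, Sigma_pred, h_star, x_t, y_t, R)
```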

murphyk commented Apr 12, 2023

See Sec. E.4 of the paper.
