This is the repository for the paper "Embed and Emulate: Learning to estimate parameters of dynamical systems with uncertainty quantification."
This GitHub project explores learning emulators for parameter estimation, with uncertainty quantification, in high-dimensional dynamical systems.
We assume access to a computationally expensive simulator that takes a candidate parameter as input and outputs a corresponding multichannel time series. Our task is to accurately estimate a range of likely values of the underlying parameters. Standard iterative approaches require running the simulator many times, which is computationally prohibitive. We describe a novel framework for learning feature embeddings of observed dynamics jointly with an emulator that can replace high-cost simulators for parameter estimation. Leveraging a contrastive learning approach, our method exploits intrinsic data properties within and across parameter and trajectory domains.
We propose Embed and Emulate, a method that jointly learns feature embeddings and an emulator.
Unlike the standard setup, which tries to approximate the high-dimensional dynamics and usually requires a pre-defined moment function to measure how close the approximated dynamics are to the observations, our emulator "emulates" the low-dimensional embeddings, i.e., the composition of the simulator with the embedding network, rather than the high-dimensional dynamics themselves. Our goal is then to find parameters that lie close to the observations in the embedding space; a minimal sketch of this idea is given below.
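Concretely, once both networks are trained, parameter estimation only requires the cheap emulator rather than the simulator. The sketch below is illustrative only: the names `embed_net`, `emulator`, and `estimate_parameters` are hypothetical, and the simple gradient-based search stands in for whatever estimation procedure the repository actually uses.

```python
import torch

# Hypothetical trained modules (illustrative names, not this repo's API):
#   embed_net : observed multichannel trajectory -> low-dimensional embedding
#   emulator  : candidate parameter vector       -> predicted embedding
def estimate_parameters(embed_net, emulator, observed_traj, theta_init, steps=500, lr=1e-2):
    """Search for parameters whose emulated embedding is closest to the
    embedding of the observed trajectory (simple gradient-based sketch)."""
    with torch.no_grad():
        target = embed_net(observed_traj)            # fixed embedding of the observations
    theta = theta_init.clone().requires_grad_(True)
    opt = torch.optim.Adam([theta], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.norm(emulator(theta) - target)  # distance in the learned embedding space
        loss.backward()
        opt.step()
    return theta.detach()
```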
Within each domain, we leverage contrastive learning to capture intra-domain structural information and learn meaningful embeddings.
Across the parameter and trajectory domains, we use a CLIP-style loss to align the metric spaces of the emulator and the embedding network. As shown in the diagram, the embedding of a parameter and the embedding of its paired trajectory form a "positive pair"; the goal is to maximize the similarity of positive pairs on the diagonal of the similarity matrix while minimizing the similarity of unmatched "negative" pairs off the diagonal.
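For illustration, here is a minimal sketch of such a CLIP-style alignment loss; the temperature value, cosine normalization, and symmetric cross-entropy form are assumptions and may differ from the loss implemented in this repository.

```python
import torch
import torch.nn.functional as F

def clip_style_loss(param_emb, traj_emb, temperature=0.07):
    """Symmetric cross-entropy over the similarity matrix.
    Row i of `param_emb` and row i of `traj_emb` are a positive pair
    (a parameter and its simulated trajectory); all other pairings in
    the batch are treated as negatives.

    Shapes (illustrative): param_emb, traj_emb -> (batch, embed_dim).
    """
    param_emb = F.normalize(param_emb, dim=-1)
    traj_emb = F.normalize(traj_emb, dim=-1)
    logits = param_emb @ traj_emb.t() / temperature                # (batch, batch) similarities
    labels = torch.arange(logits.size(0), device=logits.device)    # positives on the diagonal
    # Pull up diagonal similarities and push down off-diagonal ones, in both directions.
    return 0.5 * (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels))
```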
We would like to thank the authors of Automatic Posterior Transformation for Likelihood-free Inference for their open-source code.
We would like to thank the authors of Neural Approximate Sufficient Statistics for Implicit Models for their open-source code.