I like to think about probabilistic machine learning, including:
- deep generative models (normalizing flows, diffusion)
- computational statistics (variational inference, MCMC, sampling)
- principled understanding of deep learning (inductive biases, scaling laws, generalization)
- AI4Science (inverse problems)
Currently on my mind:
- Discrete flow matching
- Running 100s of MCMC chains on GPUs (see the sketch below)
- How to get rid of LaTeX compilation warnings
- JAX is cool
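
As a minimal sketch of what "100s of chains on GPUs" can look like in JAX: vmapping a single-chain random-walk Metropolis kernel turns it into a batched many-chain sampler. The standard-normal target, step size, and chain/step counts below are illustrative placeholders, not from any particular project.

```python
import jax
import jax.numpy as jnp

def log_prob(x):
    # Toy target: standard normal (placeholder for a real posterior).
    return -0.5 * jnp.sum(x ** 2)

def rw_metropolis_step(carry, key):
    # One random-walk Metropolis step for a single chain.
    x, lp = carry
    key_prop, key_accept = jax.random.split(key)
    proposal = x + 0.5 * jax.random.normal(key_prop, x.shape)  # step size 0.5 is arbitrary
    lp_prop = log_prob(proposal)
    accept = jnp.log(jax.random.uniform(key_accept)) < lp_prop - lp
    x_new = jnp.where(accept, proposal, x)
    lp_new = jnp.where(accept, lp_prop, lp)
    return (x_new, lp_new), x_new

def run_chain(key, x0, n_steps=1000):
    # Run one chain for n_steps with lax.scan.
    keys = jax.random.split(key, n_steps)
    _, samples = jax.lax.scan(rw_metropolis_step, (x0, log_prob(x0)), keys)
    return samples

# vmap over chains: each chain gets its own PRNG key and initial state,
# and the whole thing runs as one batched (GPU-friendly) computation.
n_chains, dim = 512, 2
keys = jax.random.split(jax.random.PRNGKey(0), n_chains)
x0 = jax.random.normal(jax.random.PRNGKey(1), (n_chains, dim))
samples = jax.jit(jax.vmap(run_chain))(keys, x0)  # shape (n_chains, n_steps, dim)
```

The point is that `jax.vmap` turns a single-chain sampler into a many-chain one without manual batching, and `jax.jit` compiles the batched computation end to end.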