
Lie Neurons: Adjoint-Equivariant Neural Networks for Semisimple Lie Algebras

Tzu-Yuan Lin*1    Minghan Zhu*1    Maani Ghaffari1   
*Equal Contributions   1University of Michigan, Ann Arbor

About

An MLP framework that takes Lie-algebraic data as input and is, by construction, equivariant to the adjoint representation of the group.
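Concretely, this is the standard equivariance condition: for the learned map $f$ on the Lie algebra $\mathfrak{g}$ of a group $G$,

$$f(\mathrm{Ad}_g\, x) = \mathrm{Ad}_g\, f(x), \qquad \forall\, x \in \mathfrak{g},\ g \in G.$$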

*(Front figure: framework overview.)*

Modules

*(Modules overview figure.)*

Updates

  • [07/2024] The initial code is open-sourced. We are still reorganizing it and plan to release a cleaner version soon. Feel free to reach out if you have any questions! :)
  • [07/2024] We presented our paper at ICML 2024!

Docker

  • We provide Docker files in docker/.
  • A detailed tutorial on building the Docker container can be found in the README inside each docker folder.

Training the Network

  • All training code for the experiments is in experiment/.
  • Before training, you'll need to generate the data using the Python scripts in data_gen.
  • Empirically, we found that a lower learning rate (around 3e-5) helps convergence during training. This is likely due to the lack of normalization layers; a minimal optimizer sketch follows this list.
  • When working with $\mathfrak{so}(3)$, Lie Neurons specialize to Vector Neurons with an additional bracket nonlinearity and a channel-mixing layer. Since the inner product is well-defined on $\mathfrak{so}(3)$, one can plug in the batch normalization layers proposed in Vector Neurons to improve stability during training; a numerical check of the bracket's adjoint equivariance is shown below.
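As a minimal illustration of the learning-rate advice above, here is a generic PyTorch training sketch; the model and data are placeholders, not this repo's actual classes:

```python
import torch
import torch.nn as nn

# Placeholder network standing in for a Lie Neurons model; the real
# modules live in this repository's source tree.
model = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 3))

# A small learning rate (~3e-5) empirically helps convergence,
# likely because the architecture has no normalization layers.
optimizer = torch.optim.Adam(model.parameters(), lr=3e-5)

# Dummy regression batch (placeholder data).
x = torch.randn(32, 3)
y = torch.randn(32, 3)

for step in range(100):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    optimizer.step()
```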
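And here is a small NumPy/SciPy check (a sketch, not the repo's code) verifying the property the bracket nonlinearity relies on: the Lie bracket on $\mathfrak{so}(3)$ commutes with the adjoint action of $SO(3)$.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def hat(v):
    # so(3) hat map: R^3 -> 3x3 skew-symmetric matrix.
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def bracket(A, B):
    # Matrix Lie bracket [A, B] = AB - BA.
    return A @ B - B @ A

rng = np.random.default_rng(0)
X, Y = hat(rng.normal(size=3)), hat(rng.normal(size=3))
R = Rotation.random(random_state=0).as_matrix()  # random element of SO(3)

# Adjoint action of SO(3) on so(3): Ad_R(X) = R X R^T.
lhs = R @ bracket(X, Y) @ R.T              # Ad_R([X, Y])
rhs = bracket(R @ X @ R.T, R @ Y @ R.T)    # [Ad_R(X), Ad_R(Y)]
print(np.allclose(lhs, rhs))  # True: the bracket is adjoint-equivariant
```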

Citation

If you find this work useful, please cite our paper:

@inproceedings{lin2024lie,
  title={Lie Neurons: Adjoint-Equivariant Neural Networks for Semisimple Lie Algebras},
  author={Lin, Tzu-Yuan and Zhu, Minghan and Ghaffari, Maani},
  booktitle={International Conference on Machine Learning},
  year={2024},
  organization={PMLR}
}