Tzu-Yuan Lin*1
Minghan Zhu*1
Maani Ghaffari1
*Equal Contributions 1University of Michigan, Ann Arbor
An MLP framework that takes Lie algebraic data as inputs and is equivariant to the adjoint representation of the group by construction.
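For concreteness, an adjoint-equivariant map $f$ satisfies $f(\mathrm{Ad}_g x) = \mathrm{Ad}_g f(x)$ for every group element $g$ and Lie algebra element $x$. Below is a minimal sketch (not from this repo) that numerically checks this property for the bracket nonlinearity on $\mathfrak{so}(3)$, where $\mathrm{Ad}_R$ acts on vectorized elements as $x \mapsto Rx$ and the bracket reduces to the cross product:

```python
import torch

def bracket(x, y):
    # so(3) bracket in vector form: [x^, y^]v = x × y
    return torch.linalg.cross(x, y)

def random_rotation():
    # Sample a random rotation matrix via QR decomposition
    q, _ = torch.linalg.qr(torch.randn(3, 3))
    return q * torch.sign(torch.linalg.det(q))  # enforce det(R) = +1

R = random_rotation()
x, y = torch.randn(3), torch.randn(3)

lhs = bracket(R @ x, R @ y)  # apply Ad_R first, then the map
rhs = R @ bracket(x, y)      # apply the map first, then Ad_R
print(torch.allclose(lhs, rhs, atol=1e-5))  # True: the bracket is equivariant
```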
- [07/2024] The initial code is open-sourced. We are still re-organizing the code and plan to release a cleaner version soon. Feel free to reach out if you have any questions! :)
- [07/2024] We presented our paper at ICML 2024!
- We provide docker files in `docker/`.
- A detailed tutorial on how to build the docker container can be found in the README in each docker folder.
- All the training code for the experiments is in `experiment/`.
- Before training, you'll have to generate the data using the Python scripts in `data_gen`.
- Empirically, we found that a lower learning rate (around `3e-5`) helps convergence during training. This is likely due to the lack of normalization layers; see the sketch after this list.
- When working with $\mathfrak{so}(3)$, Lie Neurons specialize to Vector Neurons with an additional bracket nonlinearity and a channel-mixing layer. Since the inner product is well-defined on $\mathfrak{so}(3)$, one can plug in the batch normalization layers proposed in Vector Neurons to improve stability during training.
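As a concrete illustration of the learning-rate tip above, here is a minimal, hypothetical training-step sketch; the model is a plain placeholder (substitute a network built from Lie Neurons layers), and only the `3e-5` value comes from the note:

```python
import torch
import torch.nn as nn

# Placeholder model; in practice, substitute a Lie Neurons network.
model = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 3))

# A lower learning rate (around 3e-5) empirically helps convergence,
# likely because the architecture has no normalization layers.
optimizer = torch.optim.Adam(model.parameters(), lr=3e-5)

# One training step with dummy data; replace with your data loader.
x, target = torch.randn(8, 3), torch.randn(8, 3)
optimizer.zero_grad()
loss = nn.functional.mse_loss(model(x), target)
loss.backward()
optimizer.step()
```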
If you find this work useful, please cite our paper:
@inproceedings{lin2024lie,
title={Lie Neurons: Adjoint-Equivariant Neural Networks for Semisimple Lie Algebras},
author={Lin, Tzu-Yuan and Zhu, Minghan and Ghaffari, Maani},
booktitle={International Conference on Machine Learning},
pages={},
year={2024},
organization={PMLR}
}