v1.2.0 - GPU and you

@RandomDefaultUser released this 28 Sep 13:54

New features

  • Production-ready inference options
    • Full inference (from ionic configuration to observables) on either a single GPU or distributed across multiple CPUs (multi-GPU support still in development); see the inference sketch after this list
    • Access to (volumetric) observables within seconds
  • Fast training speeds due to optimal GPU usage
  • Training on large data sets through improved lazy-loading functionalities and data shuffling routines
  • Fast hyperparameter optimization through distributed optimizers (optuna) and training-free surrogate metrics (NASWOT/ACSD); see the tuning sketch after this list
  • Easy-to-use interface through a single Parameters object for reproducibility and modular design
  • Internal caching of intermediate quantities (e.g., DOS, density, band energy) for improved performance
  • Experimental features for advanced users:
    • MinterPy: polynomial-interpolation-based descriptors
    • OpenPMD
    • OF-DFT-MD interface to create initial configurations for ML-based sampling
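
To make the headline feature concrete, here is a minimal sketch of serial GPU inference driven by a single Parameters object, loosely modeled on the MALA example scripts. The file names are placeholders, and the exact loading and prediction calls (`load_from_file`, `predict_from_qeout`, `target_calculator`) are assumptions that may differ between MALA versions:

```python
import mala

# Load the parameters used during training; one Parameters object configures everything.
parameters = mala.Parameters.load_from_file("training_parameters.json")  # file name is a placeholder
parameters.use_gpu = True  # run the (serial) inference pipeline on a single GPU

# Restore the trained network and the data pipeline (input/output scalers etc.).
network = mala.Network.load_from_file(parameters, "trained_network.pth")  # file name is a placeholder
data_handler = mala.DataHandler(parameters)

# Go from an ionic configuration (here: a QE output file) to the predicted LDOS ...
predictor = mala.Predictor(parameters, network, data_handler)
ldos = predictor.predict_from_qeout("snapshot.pw.scf.out")  # call name assumed

# ... and from the LDOS to (volumetric) observables such as density and band energy.
ldos_calculator = data_handler.target_calculator  # attribute name assumed
ldos_calculator.read_from_array(ldos)
print("Band energy (eV):", ldos_calculator.band_energy)
```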
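
In the same spirit, a sketch of optuna-based hyperparameter optimization; the `HyperOpt` class name, the `hyperparameters` sub-object attributes, and the hyperparameter identifiers used below are assumptions taken loosely from the example scripts and may differ:

```python
import mala

parameters = mala.Parameters()
parameters.hyperparameters.hyper_opt_method = "optuna"  # attribute/value assumed
parameters.hyperparameters.n_trials = 20                # number of optuna trials

# Register training/validation data as usual; file names are placeholders.
data_handler = mala.DataHandler(parameters)
data_handler.add_snapshot("snapshot0.in.npy", "data/", "snapshot0.out.npy", "data/", "tr")
data_handler.add_snapshot("snapshot1.in.npy", "data/", "snapshot1.out.npy", "data/", "va")
data_handler.prepare_data()

# Define the search space and let optuna do the rest.
hyperopt = mala.HyperOpt(parameters, data_handler)      # class name assumed
hyperopt.add_hyperparameter("float", "learning_rate", 1e-7, 1e-2)
hyperopt.add_hyperparameter("categorical", "layer_activation_00",
                            choices=["ReLU", "Sigmoid", "LeakyReLU"])
hyperopt.perform_study()
hyperopt.set_optimal_parameters()  # writes the best trial back into `parameters`
```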

Change notes:

  • Full (serial) GPU inference added
  • MALA now operates in FP32
  • Added functionality for data shuffling (see the sketch below)
  • Added functionality for cached lazy loading
  • Improved GPU usage during training
  • Added convenience functions, e.g., for ACSD analysis
  • Fixed several bugs across the code
  • Overhauled documentation
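
For the new data-shuffling functionality, a minimal sketch assuming a `DataShuffler` class along the lines of the example scripts; the class name, snapshot file names, and keyword arguments are assumptions:

```python
import mala

parameters = mala.Parameters()

# Mix grid points from several snapshots into new, randomized snapshot files,
# which improves training behavior when snapshots are lazily loaded one by one.
data_shuffler = mala.DataShuffler(parameters)  # class name assumed
data_shuffler.add_snapshot("snapshot0.in.npy", "data/",
                           "snapshot0.out.npy", "data/")
data_shuffler.add_snapshot("snapshot1.in.npy", "data/",
                           "snapshot1.out.npy", "data/")
data_shuffler.shuffle_snapshots(complete_save_path="data/shuffled",  # kwargs assumed
                                save_name="snapshot_shuffled_*")
```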