Releases · mrapp-ke/MLRL-Boomer
Version 0.7.0
A major update to the BOOMER algorithm that introduces the following changes:
- L1 regularization can now be used.
- A more space-efficient data structure is now used for the sparse representation of binary predictions.
- The Python API now allows accessing the rules in a model programmatically.
- It is now possible to output certain characteristics of training datasets and rule models.
- Pre-built packages for the Linux platform are now available at PyPI.
- The documentation has been vastly improved.
Version 0.6.2
A bugfix release that solves the following issues:
- Fixes a segmentation fault, introduced in version 0.6.0, that occurred when a sparse feature matrix was used for prediction.
Version 0.6.1
A bugfix release that solves the following issues:
- Fixes a mathematical problem, introduced in version 0.6.0, in the calculation of the quality of potential single-label rules.
Version 0.6.0
This release comes with changes to the command line API. For brevity and consistency, some parameters and/or their values have been renamed. Moreover, some parameters have been updated to use more reasonable default values. For an up-to-date overview of the available parameters, please refer to the documentation.
A major update to the BOOMER algorithm that introduces the following changes:
- The parameter `--instance-sampling` now allows stratified sampling to be used (`stratified-label-wise` and `stratified-example-wise`).
- The parameter `--holdout` now allows stratified sampling to be used (`stratified-label-wise` and `stratified-example-wise`).
- The parameter `--recalculate-predictions` now allows specifying whether the predictions of rules should be recalculated on the entire training data if instance sampling is used.
- An additional parameter (`--prediction-format`) has been added that allows specifying whether predictions should be stored using dense or sparse matrices.
- The code for the construction of rule heads has been reworked, resulting in minor performance improvements.
- The unnecessary calculation of Hessians is now avoided when single-label rules are used to minimize a non-decomposable loss function, resulting in a significant performance improvement.
- A programmatic C++ API for configuring algorithms, including the validation of parameters, is now provided.
- Documentation is now available online.
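To illustrate how the parameters listed above fit together, an invocation of the command line API might look roughly as follows. This is a hedged sketch: the executable name, data arguments, and the boolean value passed to `--recalculate-predictions` are assumptions and may differ from the actual interface; only the parameter names `--instance-sampling`, `--holdout`, `--recalculate-predictions`, and `--prediction-format` and the values `stratified-label-wise`/`stratified-example-wise` are taken from the release notes above. The documentation remains the authoritative reference.

```shell
# Hypothetical example: executable name and data arguments are assumptions.
boomer --data-dir /path/to/data --dataset emotions \
    --instance-sampling stratified-label-wise \
    --holdout stratified-example-wise \
    --recalculate-predictions true \
    --prediction-format sparse
```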