
Releases: florencejt/fusilli

Fusilli v1.2.3

01 Feb 12:26
b44e1ee

Fusilli v1.2.3 Release Notes

  • Added an option in prepare_fusion_data for users to input their own train and test indices instead of relying on the default random split.
  • Added another option in prepare_fusion_data for users to choose their own K-fold cross-validation split indices.
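A minimal sketch of building your own split indices with the standard library. The keyword argument names used for passing these into prepare_fusion_data are illustrative assumptions, so check the documentation for the real signatures:

```python
import random

# Reproducible custom train/test split over 100 samples.
rng = random.Random(42)
indices = list(range(100))
rng.shuffle(indices)
test_indices = indices[:20]    # hold out 20% for testing
train_indices = indices[20:]

# For K-fold cross-validation, supply one index list per fold instead.
k = 5
fold_indices = [indices[i::k] for i in range(k)]

# These are then handed to prepare_fusion_data in place of the default
# random split (keyword names here are illustrative, not the real API):
# prepare_fusion_data(..., test_indices=test_indices, train_indices=train_indices)
# prepare_fusion_data(..., own_kfold_indices=fold_indices)
```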

Fusilli v1.2.2

12 Jan 14:54

Fusilli v1.2.2 Release Notes

Another update coming at ya!

Changes:

  • Added MNIST data as the example data for the documentation notebooks instead of random data.
  • Made the number of workers in the PyTorch dataloader customisable by an argument in prepare_fusion_data.
  • Uploaded the JOSS paper ready for submission.
  • Fixed a bug in the AttentionWeightedGNN which wasn't allowing multiclass classification.

Bye for now! 🙌

Fusilli v1.2.1

11 Jan 14:15

🧹Cleaning up some errant print statements! 🧼

Fusilli v1.2.0

11 Jan 13:45
7f099db

Fusilli v1.2.0 Release Notes

Choose your own metrics!

If you want to evaluate your classification model with AUPRC and precision instead of AUROC and accuracy, there's now an easy way to do that:

  1. Create a list of the metrics you want to use: ["auprc", "precision"]
  2. Input this list into the metrics_list argument to fusilli.train.train_and_save_models
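The two steps above can be sketched as follows (the training call is shown as a comment since it needs fusilli installed and a prepared data module):

```python
# 1. Choose the metrics; the first one ("auprc") becomes the main metric.
metrics_list = ["auprc", "precision"]

# 2. Hand the list to the training function:
# from fusilli.train import train_and_save_models
# trained_models = train_and_save_models(..., metrics_list=metrics_list)
```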

It was already possible to calculate your own metrics before this release by accessing the trained models' validation labels and predictions, but this is much simpler.

If you want the default metrics (AUROC and accuracy for classification, R2 and MAE for regression), never fear! Don't specify metrics_list and nothing will change.

It's worth noting that the first metric in the list will be used as the main metric:

  • It will appear in evaluation figure titles
  • It will be used to rank the fusion models in the figure output from fusilli.eval.ModelComparison

Other changes:

  • A new example notebook was added to show how to train and test a regression model with Fusilli
  • Guidance on customising experimental configurations was moved from the examples to its own page in the documentation

Note: I've realised that the last release probably should have been Fusilli 2.0.0 because the API was changed by renaming a function and changing function inputs. Sorry for any confusion and I'll know for next time!

Fusilli v1.1.0

05 Jan 23:05
5f23273

Fusilli v1.1.0 Release Notes

It's time for a FusilliFaceliftTM!
There have been a couple of changes this time around - but hopefully all of them make the user experience that little bit better.

The most comprehensive change is that you now pass your parameters directly into the data, training, and evaluation functions, instead of having to put them in a dictionary first.

So for example, instead of:

params = {'batch_size': 8}
prepare_fusion_data(params)

you would now do:

prepare_fusion_data(batch_size=8)

This means that there are more arguments to input overall 😅 but it should be easier to see what Fusilli is doing and easier to set up your experiments. 😍

Major Changes

  1. Function Update:
    • Renamed get_data_module to prepare_fusion_data for enhanced clarity.
  2. Parameter Handling:
    • Fusilli now requires separate parameter input instead of a dictionary.
    • This boosts code transparency and simplifies bug tracking.
  3. Specifying input data paths:
    • Input file paths should be put in a dictionary with keys "tabular1", "tabular2", and "image"
    • This is passed into prepare_fusion_data
  4. Handling External Data:
    • To incorporate external data, create a dictionary of its file paths in the same format as the input data dictionary.
    • Pass these paths into the .from_new_data methods within the evaluation figure classes.
  5. Directory Management:
    • Input/output directory paths are now organized within a dictionary.
    • Keys: "checkpoints", "figures", and "losses".
    • Training loss figures will be saved in a subdirectory named "losses" within the user-specified figures directory.
  6. Column Name Requirements in Tabular Data:
    • Tabular data must now contain columns named "ID" and "prediction_label" in each row (previously: "study_id" and "pred_label").
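A sketch of the dictionaries described above. The dictionary keys are the ones given in these notes, but the file paths and the keyword argument names in the commented-out call are illustrative assumptions:

```python
# Input modality file paths live in one dictionary; the keys
# "tabular1", "tabular2", and "image" are required, the paths are made up.
data_paths = {
    "tabular1": "data/tabular1.csv",
    "tabular2": "data/tabular2.csv",
    "image": "data/images.pt",
}

# Input/output directories are likewise grouped in a dictionary with the
# keys "checkpoints", "figures", and "losses". Training loss figures are
# saved under a "losses" subdirectory of the figures directory.
output_paths = {
    "checkpoints": "outputs/checkpoints",
    "figures": "outputs/figures",
    "losses": "outputs/losses",
}

# Each tabular CSV must contain "ID" and "prediction_label" columns
# (previously "study_id" and "pred_label").

# Both dictionaries are then passed to prepare_fusion_data; for external
# data, a dictionary in the same format goes to the .from_new_data methods
# of the evaluation figure classes (call shown as a comment, keyword
# names illustrative):
# prepare_fusion_data(data_paths=data_paths, output_paths=output_paths, ...)
```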

Minor Changes

  1. MCVAE Early Stopping:
    • Adjust early stopping criteria in MCVAE using keyword arguments "mcvae_patience" and "mcvae_tolerance" passed to prepare_fusion_data.
  2. Weights and Biases Logging:
    • Enable logging with Weights and Biases using the "wandb_logging" argument in train_and_save_models.

Documentation Update

  • Added an example notebook illustrating how to integrate external data with Fusilli.

For comprehensive guidance and examples, please refer to the updated documentation! 📖✨

Fusilli v1.0.0

30 Nov 13:42

Fusilli v1.0.0 Release Notes

We're excited to announce the official release of Fusilli v1.0.0! 🎉

What's New?

  • Multimodal Fusion at Your Fingertips: Fusilli v1.0.0 introduces a comprehensive set of multimodal data fusion methods, offering 23 different fusion models. Dive into a diverse collection of techniques, including graph neural networks, attention mechanisms, variational autoencoders, and more!
  • Enhanced Usability: This release simplifies the process of handling multimodal data for predictive tasks. Seamlessly fuse tabular data with 2D or 3D images to perform tasks like binary classification, multi-class classification, or regression with ease.
  • Documentation Overhaul: Experience a revamped documentation with clear usage examples, detailed descriptions of fusion models, and step-by-step guides on getting started. Explore the vast functionalities Fusilli offers through our updated documentation.

How to Get Started?

Getting started with Fusilli is easy! Visit our documentation for installation instructions, detailed usage guides, and examples. Find the method that best fits your multimodal data fusion needs!

How to Contribute?

Contributions are always welcome! Whether it's bug fixes, new fusion models, or improvements to existing functionalities, your contributions can help enhance the Fusilli library. Check out our contribution guidelines to get involved.

Thank You!

We extend our heartfelt gratitude to the contributors, early adopters, and supporters. Your feedback and support have been invaluable in shaping Fusilli into what it is today.

Download Fusilli v1.0.0 now and start fusing your multimodal data in exciting new ways!

Release to link with Zenodo

15 Nov 14:39

Need to create a release to link my repository with Zenodo for a DOI

Added Attention and Activation methods

15 Nov 14:33

After the CMIC hackathon 2023, we've added two models based on attention and activation fusion and one model with an attention-weighted GNN.

Pre-hackathon release: ready to be forked

08 Nov 16:39
52200e1
v0.0.5

Delete .vscode directory

Shiny README and documentation

03 Nov 17:10
b8c8cae
v0.0.4

Update pyproject.toml removing documentation