FairSense

This library allows one to compute global sensitivity indices in the context of fairness measurements. The paper Fairness seen as Global Sensitivity Analysis bridges the gap between global sensitivity analysis (GSA) and fairness: it shows that for each global sensitivity index there is a corresponding fairness measure, and vice versa.

This library is a toolbox which eases the computation of fairness and GSA indices.

👉 The problem

Each index has its own characteristics: some can be applied to continuous variables and some cannot; some handle regression problems while others handle classification problems; some can handle groups of variables and some cannot; finally, some can only be applied to the predictions of a model, while others can be applied to the errors made by the model.

The objective is therefore to provide a tool to investigate the fairness of an ML problem by computing the GSA indices while taking these constraints into account.

🚀 The strategy

The library allows one to formulate a fairness problem, which is stated with the following elements:

  • a dataset describing the training distribution
  • a model, which can be a plain function or a machine learning model
  • a fairness objective, which indicates what should be studied: the intrinsic bias of the dataset, the bias of the model, or the bias of the model's errors

These elements are encapsulated in an object called IndicesInput.
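As a rough illustration, such an input might be assembled as in the sketch below. The import paths and the constructor arguments of IndicesInput (model, x, y, objective) are assumptions made for the example, not the library's confirmed signature; refer to the API documentation for the real one.

    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    # Hypothetical import paths -- assumptions based on the description above.
    from fairsense.utils.dataclasses import IndicesInput
    from fairsense.utils.fairness_objective import y_pred

    # A toy training distribution with a candidate sensitive variable "gender".
    data = pd.DataFrame({
        "gender": [0, 1, 0, 1, 0, 1],
        "age":    [25, 32, 47, 51, 38, 29],
        "hired":  [0, 1, 1, 0, 1, 0],
    })

    # Any callable, or a fitted ML model, can play the role of the model.
    model = LogisticRegression().fit(data[["gender", "age"]], data["hired"])

    # Encapsulate dataset, model and fairness objective (here: studying the
    # bias of the model's predictions) in a single IndicesInput object.
    inputs = IndicesInput(
        model=model.predict,
        x=data[["gender", "age"]],
        y=data["hired"],
        objective=y_pred,
    )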

It then becomes possible to compute GSA indices (in an interchangeable way) using the functions provided in fairsense.indices.

These functions output IndicesOutput objects that encapsulate the values of the indices. These results can finally be visualized with the functions available in the fairsense.visualization module.
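Continuing the previous sketch, a hedged end-to-end example could look as follows. The function names cvm_indices and cat_plot are illustrative assumptions; use whichever index and plotting functions fairsense.indices and fairsense.visualization actually expose.

    import matplotlib.pyplot as plt

    # Hypothetical function names -- assumptions for illustration only.
    from fairsense.indices import cvm_indices
    from fairsense.visualization import cat_plot

    # `inputs` is the IndicesInput object built in the previous snippet.
    results = cvm_indices(inputs)  # -> an IndicesOutput holding the index values

    # Because index functions share the same input/output types, swapping
    # one index for another only changes the function that is called.

    # Plot the computed indices, one bar per variable.
    cat_plot(results, plot_per="variable", kind="bar")
    plt.show()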

💻 Install fairsense

for users

pip install fairsense

for developers

After cloning the repository:

pip install -e .[dev]

to clean the code, run at the root of the library:

black .

for docs

pip install -e .[docs]

and then generate and serve docs:

mkdocs build
mkdocs serve

See the Makefile for more information.

👍 Contributing

Feel free to propose your ideas or come and contribute with us on the FairSense toolbox! We have a specific document that describes in a simple way how to make your first pull request: just here.

👀 See Also

More from the DEEL project:

  • Xplique a Python library exclusively dedicated to explaining neural networks.
  • deel-lip a Python library for training k-Lipschitz neural networks on TensorFlow.
  • Influenciae a Python toolkit dedicated to computing influence values for the discovery of potentially problematic samples in a dataset.
  • deel-torchlip a Python library for training k-Lipschitz neural networks on PyTorch.
  • DEEL White paper a summary by the DEEL team of the challenges of certifiable AI and the role of data quality, representativity and explainability for this purpose.

🙏 Acknowledgments

This project received funding from the French “Investing for the Future – PIA3” program within the Artificial and Natural Intelligence Toulouse Institute (ANITI). The authors gratefully acknowledge the support of the DEEL project.

🗞️ Citation

If you use FairSense as part of your workflow in a scientific publication, please consider citing our paper:

    @misc{https://doi.org/10.48550/arxiv.2103.04613,
      doi = {10.48550/ARXIV.2103.04613},
      url = {https://arxiv.org/abs/2103.04613},
      author = {Bénesse, Clément and Gamboa, Fabrice and Loubes, Jean-Michel and Boissin, Thibaut},
      keywords = {Statistics Theory (math.ST), Methodology (stat.ME), FOS: Mathematics, FOS: Computer and information sciences},
      title = {Fairness seen as Global Sensitivity Analysis},
      publisher = {arXiv},
      year = {2021},
    }

📝 License

The package is released under the MIT license.

💣 Disclaimer

To the maximum extent permitted by applicable law, the authors of FairSense shall not be liable for any kind of tangible or intangible damages. In particular, the authors shall not be liable in case of incorrect computation of the indices or any biased interpretation of such indices.
