🐢 Open-Source Evaluation & Testing for ML & LLM systems
A comprehensive set of fairness metrics for datasets and machine learning models, explanations for these metrics, and algorithms to mitigate bias in datasets and models.
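As a sketch of what that workflow looks like in AIF360: the toy dataframe, the "sex" attribute, and the privileged/unprivileged group definitions below are invented for illustration, but the classes and calls follow AIF360's documented API.

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Toy data: one protected attribute ("sex") and a binary label.
df = pd.DataFrame({
    "sex":   [0, 0, 0, 1, 1, 1],
    "score": [0.2, 0.4, 0.3, 0.6, 0.8, 0.5],
    "label": [0, 1, 0, 1, 1, 1],
})
dataset = BinaryLabelDataset(
    df=df, label_names=["label"], protected_attribute_names=["sex"]
)
privileged, unprivileged = [{"sex": 1}], [{"sex": 0}]

# Measure bias in the raw data...
metric = BinaryLabelDatasetMetric(
    dataset, unprivileged_groups=unprivileged, privileged_groups=privileged
)
print("statistical parity difference:", metric.statistical_parity_difference())
print("disparate impact:", metric.disparate_impact())

# ...then mitigate it by reweighing instances so that group/label
# combinations are balanced before training.
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
dataset_reweighed = rw.fit_transform(dataset)
```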
A Python package to assess and improve fairness of machine learning models.
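For a concrete feel for this kind of assessment, here is a minimal sketch using Fairlearn's MetricFrame to slice a metric by a sensitive feature; the labels, predictions, and groups below are made up:

```python
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, demographic_parity_difference

y_true = [0, 1, 1, 0, 1, 0, 1, 1]
y_pred = [0, 1, 0, 0, 1, 1, 1, 0]
sex    = ["F", "F", "F", "F", "M", "M", "M", "M"]

# Accuracy overall, per group, and the largest gap between groups.
mf = MetricFrame(metrics=accuracy_score, y_true=y_true, y_pred=y_pred,
                 sensitive_features=sex)
print(mf.overall)
print(mf.by_group)
print(mf.difference())

# Demographic parity difference: the gap in selection rates across groups.
print(demographic_parity_difference(y_true, y_pred, sensitive_features=sex))
```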
Responsible AI Toolbox is a suite of user interfaces and libraries for model and data exploration and assessment, built to give a better understanding of AI systems. These tools empower developers and stakeholders of AI systems to develop and monitor AI more responsibly and to take better data-driven actions.
WEFE: The Word Embeddings Fairness Evaluation Framework. WEFE standardizes bias measurement and mitigation in word embedding models. Please feel free to open an issue if you have any questions, or a pull request if you want to contribute to the project!
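A short sketch of running the classic career/family WEAT query in WEFE; this assumes WEFE's documented Query/WEAT API and a GloVe model fetched through gensim's downloader:

```python
import gensim.downloader as api
from wefe.word_embedding_model import WordEmbeddingModel
from wefe.query import Query
from wefe.metrics import WEAT

# Wrap a pretrained gensim embedding for WEFE.
model = WordEmbeddingModel(api.load("glove-wiki-gigaword-50"), "glove-50")

# Target sets (gendered terms) vs. attribute sets (career/family words).
query = Query(
    [["she", "woman", "girl"], ["he", "man", "boy"]],
    [["office", "career", "salary"], ["home", "family", "children"]],
    ["Female terms", "Male terms"],
    ["Career", "Family"],
)
print(WEAT().run_query(query, model))  # dict including the WEAT score
```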
The LinkedIn Fairness Toolkit (LiFT) is a Scala/Spark library that enables the measurement of fairness in large-scale machine learning workflows.
Toolkit for auditing and mitigating bias and unfairness in machine learning systems 🔎🤖🧰
FairPut: a machine learning fairness framework with LightGBM, covering explainability, robustness, and fairness (by @firmai)
Fairness-aware machine learning: bias detection and mitigation for datasets and models.
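Underneath the different APIs, the headline number most of these toolkits report is simple. Here is a plain-numpy sketch of the statistical parity difference, not tied to any particular library above:

```python
import numpy as np

y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])  # binary model decisions
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # protected-attribute values

# Positive-prediction (selection) rate per group.
rate_0 = y_pred[group == 0].mean()
rate_1 = y_pred[group == 1].mean()

# 0 means parity; the sign shows which group is favored.
spd = rate_1 - rate_0
print(f"selection rates {rate_0:.2f} vs {rate_1:.2f}; SPD = {spd:+.2f}")
```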
LangFair is a Python library for conducting use-case-level LLM bias and fairness assessments.
Papers and online resources related to machine learning fairness
PyTorch package to train and audit ML models for Individual Fairness
[ACL 2020] Towards Debiasing Sentence Representations
[ICML 2021] Towards Understanding and Mitigating Social Biases in Language Models
👋 Influenciae is a TensorFlow toolbox for influence functions
A curated list of awesome academic research, books, code of ethics, data sets, institutes, newsletters, principles, podcasts, reports, tools, regulations and standards related to Responsible, Trustworthy, and Human-Centered AI.
Talks & Workshops by the CODAIT team
[ACM 2024] Jurity: Fairness & Evaluation Library
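A minimal sketch of Jurity's binary fairness API, with invented predictions and group memberships; the class and method names follow Jurity's README, but treat this as illustrative:

```python
from jurity.fairness import BinaryFairnessMetrics

predictions = [1, 1, 0, 1, 0, 0]  # binary model decisions
memberships = [0, 0, 0, 1, 1, 1]  # protected-group membership per instance

# Statistical parity: difference in positive rates between the groups.
metric = BinaryFairnessMetrics.StatisticalParity()
print("score:", metric.get_score(predictions, memberships))
```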
Credo AI Lens is a comprehensive assessment framework for AI systems. Lens standardizes model and data assessment, and acts as a central gateway to assessments created in the open source community.