AI Fairness 360 Examples (Tutorials and Demos)

This directory contains a diverse collection of Jupyter notebooks that use AI Fairness 360 in various ways. Both the tutorials and the demos illustrate working code using AIF360; the tutorials provide additional discussion that walks the user through the various steps of the notebook.

Tutorials

The Credit scoring tutorial is the recommended first tutorial for getting an understanding of how AIF360 works. It first provides a brief summary of a machine learning workflow and an overview of AIF360. It then demonstrates the use of one fairness metric (mean difference) and one bias mitigation algorithm (optimized preprocessing) in the context of age bias in a credit scoring scenario using the German Credit dataset.
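
For orientation, here is a minimal sketch (not excerpted from the tutorial) of computing the mean difference metric on the German Credit dataset with age as the protected attribute, assuming the raw `german.data` file has already been placed where AIF360 expects it:

```python
from aif360.datasets import GermanDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Load the German Credit data with age as the protected attribute
# (here, age >= 25 is treated as the privileged group).
dataset = GermanDataset(
    protected_attribute_names=['age'],
    privileged_classes=[lambda x: x >= 25],
    features_to_drop=['personal_status', 'sex'])

metric = BinaryLabelDatasetMetric(
    dataset,
    unprivileged_groups=[{'age': 0}],
    privileged_groups=[{'age': 1}])

# Difference in the rate of favorable outcomes between the unprivileged and
# privileged groups; 0 indicates no measured disparity on this metric.
print("Mean difference:", metric.mean_difference())
```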

The Gender classification of face images tutorial provides a more comprehensive use case of detecting and mitigating bias in the automatic gender classification of facial images. The tutorial demonstrates the use of AIF360 to study the differential performance of a custom classifier. It uses several fairness metric (statistical parity difference, disparate impact, equal opportunity difference, average odds difference, and Theil index) and the reweighing mitigation algorithm. It works with the UTK dataset
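
As a rough sketch of how these metrics are computed in AIF360 (using a tiny synthetic dataset rather than the face-image data, so the names and values below are purely illustrative):

```python
import numpy as np
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import ClassificationMetric

# Tiny illustrative dataset: 'sex' is the protected attribute (1 = privileged).
df = pd.DataFrame({
    'sex':   [1, 1, 1, 1, 0, 0, 0, 0],
    'score': [7, 5, 6, 2, 4, 3, 6, 1],
    'label': [1, 1, 1, 0, 1, 0, 1, 0],   # true outcomes
})
truth = BinaryLabelDataset(df=df, label_names=['label'],
                           protected_attribute_names=['sex'])

# Pretend classifier output: copy the dataset and overwrite the labels.
pred = truth.copy()
pred.labels = np.array([[1], [1], [0], [0], [1], [0], [0], [0]])

metric = ClassificationMetric(truth, pred,
                              unprivileged_groups=[{'sex': 0}],
                              privileged_groups=[{'sex': 1}])
print("Statistical parity difference:", metric.statistical_parity_difference())
print("Disparate impact:", metric.disparate_impact())
print("Equal opportunity difference:", metric.equal_opportunity_difference())
print("Average odds difference:", metric.average_odds_difference())
print("Theil index:", metric.theil_index())
```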

The Medical expenditure tutorial is a comprehensive tutorial demonstrating the interactive, exploratory process a data scientist follows when detecting and mitigating racial bias in a care management scenario. It uses a variety of fairness metrics (disparate impact, average odds difference, statistical parity difference, equal opportunity difference, and Theil index) and algorithms (reweighing, prejudice remover, and disparate impact remover). It also demonstrates how LIME can be used to generate explanations for predictions made by models learned with the toolkit. Data from the Medical Expenditure Panel Survey (2015 and 2016) is used in this tutorial.
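
A brief sketch of how one of the in-processing algorithms used here, prejudice remover, is typically applied; `dataset_train`, `dataset_test`, and the `'RACE'` attribute name are assumptions standing in for the tutorial's own pre-split MEPS datasets, not excerpts from the notebook:

```python
from aif360.algorithms.inprocessing import PrejudiceRemover

# eta controls the strength of the fairness penalty added to the
# logistic-regression objective; 'RACE' is the assumed protected attribute.
pr = PrejudiceRemover(sensitive_attr='RACE', eta=25.0)
pr = pr.fit(dataset_train)
dataset_test_pred = pr.predict(dataset_test)  # dataset copy with predicted labels/scores
```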

Demos

Below is a list of additional notebooks that demonstrate the use of AIF360.

demo_optim_data_preproc.ipynb: demonstrates a generalization of the credit scoring tutorial that shows the full machine learning workflow for the optimized data pre-processing algorithm for bias mitigation on several datasets
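
A condensed sketch of the optimized pre-processing workflow, assuming the helper loaders and distortion functions shipped with AIF360; the option values below are illustrative, not prescriptive:

```python
from aif360.algorithms.preprocessing.optim_preproc import OptimPreproc
from aif360.algorithms.preprocessing.optim_preproc_helpers.data_preproc_functions import load_preproc_data_adult
from aif360.algorithms.preprocessing.optim_preproc_helpers.distortion_functions import get_distortion_adult
from aif360.algorithms.preprocessing.optim_preproc_helpers.opt_tools import OptTools

dataset = load_preproc_data_adult()
train, test = dataset.split([0.7], shuffle=True)

optim_options = {
    "distortion_fun": get_distortion_adult,  # dataset-specific distortion penalty
    "epsilon": 0.05,
    "clist": [0.99, 1.99, 2.99],
    "dlist": [0.1, 0.05, 0],
}
op = OptimPreproc(OptTools, optim_options)
op = op.fit(train)
train_transf = op.transform(train, transform_Y=True)
```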

demo_adversarial_debiasing.ipynb: demonstrates the use of the adversarial debiasing in-processing algorithm to learn a fair classifier
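
A minimal sketch of how the adversarial debiasing class is typically instantiated; it needs a TensorFlow 1.x-style session, and `dataset_train`, `dataset_test`, and the group definitions are assumptions standing in for whatever dataset the notebook uses:

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()  # AdversarialDebiasing uses the TF1 graph API

from aif360.algorithms.inprocessing import AdversarialDebiasing

sess = tf.Session()
adv = AdversarialDebiasing(unprivileged_groups=[{'sex': 0}],
                           privileged_groups=[{'sex': 1}],
                           scope_name='debiased_classifier',
                           debias=True,   # set False for the plain classifier baseline
                           sess=sess)
adv.fit(dataset_train)
dataset_test_pred = adv.predict(dataset_test)
sess.close()
```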

demo_calibrated_eqodds_postprocessing.ipynb: demonstrates the use of an odds-equalizing post-processing algorithm for bias mitigation
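
A sketch of the calibrated equalized odds post-processor; `dataset_valid_true`, `dataset_valid_pred`, and `dataset_test_pred` are assumed to be a validation set with true labels, the same set with classifier scores, and a scored test set:

```python
from aif360.algorithms.postprocessing import CalibratedEqOddsPostprocessing

cpp = CalibratedEqOddsPostprocessing(
    unprivileged_groups=[{'sex': 0}],
    privileged_groups=[{'sex': 1}],
    cost_constraint='fnr',   # 'fnr', 'fpr', or 'weighted'
    seed=12345)
cpp = cpp.fit(dataset_valid_true, dataset_valid_pred)
dataset_test_debiased = cpp.predict(dataset_test_pred)
```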

demo_disparate_impact_remover.ipynb: demonstrates the use of a disparate impact remover pre-processing algorithm for bias mitigation
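
A sketch of the disparate impact remover (note that this algorithm relies on the BlackBoxAuditing package); `dataset_train` is an assumed AIF360 dataset:

```python
from aif360.algorithms.preprocessing import DisparateImpactRemover

# repair_level=1.0 fully repairs the feature distributions across groups;
# 0.0 leaves the data unchanged, and values in between interpolate.
di = DisparateImpactRemover(repair_level=1.0)
dataset_train_repaired = di.fit_transform(dataset_train)
```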

demo_json_explainers.ipynb:

demo_lfr.ipynb: demonstrates the use of the learning fair representations algorithm for bias mitigation
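
A sketch of learning fair representations; the hyperparameter values are illustrative, and `dataset_train` plus the group definitions are assumptions:

```python
from aif360.algorithms.preprocessing import LFR

lfr = LFR(unprivileged_groups=[{'sex': 0}],
          privileged_groups=[{'sex': 1}],
          k=10,        # number of prototypes
          Ax=0.1,      # input reconstruction weight
          Ay=1.0,      # prediction accuracy weight
          Az=2.0)      # group fairness weight
lfr = lfr.fit(dataset_train)
dataset_train_transf = lfr.transform(dataset_train)
```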

demo_lime.ipynb: demonstrates how LIME (Local Interpretable Model-Agnostic Explanations) can be used with models learned with the AIF360 toolkit to generate explanations for model predictions
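
A sketch of generating a LIME explanation for one prediction, using the standalone `lime` package directly rather than reproducing the notebook; it assumes `model` is a fitted scikit-learn classifier trained on `dataset_train.features` and that `dataset_train`/`dataset_test` are AIF360 datasets:

```python
from lime.lime_tabular import LimeTabularExplainer

explainer = LimeTabularExplainer(
    dataset_train.features,
    feature_names=dataset_train.feature_names,
    class_names=['unfavorable', 'favorable'],
    discretize_continuous=True)

# Explain the model's prediction for the first test instance.
exp = explainer.explain_instance(dataset_test.features[0],
                                 model.predict_proba,
                                 num_features=5)
print(exp.as_list())  # top feature contributions for this prediction
```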

demo_reject_option_classification.ipynb: demonstrates the use of the Reject Option Classification (ROC) post-processing algorithm for bias mitigation
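
A sketch of the ROC post-processor; the threshold and margin search ranges mirror the class defaults, and the validation/test dataset names are assumptions:

```python
from aif360.algorithms.postprocessing import RejectOptionClassification

roc = RejectOptionClassification(
    unprivileged_groups=[{'sex': 0}],
    privileged_groups=[{'sex': 1}],
    low_class_thresh=0.01, high_class_thresh=0.99,
    num_class_thresh=100, num_ROC_margin=50,
    metric_name="Statistical parity difference",
    metric_ub=0.05, metric_lb=-0.05)
roc = roc.fit(dataset_valid_true, dataset_valid_pred)  # dataset_valid_pred must carry scores
dataset_test_debiased = roc.predict(dataset_test_pred)
```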

demo_reweighing_preproc.ipynb: demonstrates the use of a reweighing pre-processing algorithm for bias mitigation
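
A sketch of the reweighing pre-processor, with `dataset_train` and the group definitions standing in as assumptions:

```python
from aif360.algorithms.preprocessing import Reweighing

rw = Reweighing(unprivileged_groups=[{'sex': 0}],
                privileged_groups=[{'sex': 1}])
dataset_train_transf = rw.fit_transform(dataset_train)

# The features and labels are unchanged; only instance_weights are adjusted so
# that a downstream classifier trained with these sample weights sees a
# training distribution balanced across group/label combinations.
print(dataset_train_transf.instance_weights[:10])
```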