InterFair is our Bias Detection Tool entry for the #ExpeditionHacks competition on bias in healthcare. Our entry uses a new fair machine learning framework called Fairness Oriented Multiobjective Optimization (Fomo). InterFair allows any interested healthcare entity (a hospital system, insurance payor, or individual clinic) to feed in an ML model for a given prediction task, measure its performance across intersectional groups of patients, and optimize it with respect to several flexible fairness constraints.
In this repository we provide two scripts: `measure_disparity.py`, which measures bias, and `mitigate_disparity.py`, which corrects for it.
We also provide a demonstration that uses these scripts to measure and mitigate bias in models that predict risk of emergency department admission.
Our demonstration is based on the recently released MIMIC-IV electronic health record dataset.
To install the dependencies, run:

pip install -r requirements.txt
To use `measure_disparity.py`, you must first create a pandas DataFrame containing your model's outputs on a set of observations, the true labels, and any number of demographic columns, and save it to CSV.
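As a minimal sketch, the input could be assembled as follows; the column names here (`y_pred`, `y_true`, `race`, `gender`) are illustrative assumptions, not a schema required by the script:

```python
import pandas as pd

# Hypothetical example of the expected table: model outputs, true labels,
# and any number of demographic columns, saved to CSV for the script.
df = pd.DataFrame({
    "y_pred": [0.81, 0.12, 0.55, 0.34],  # model-predicted probabilities
    "y_true": [1, 0, 1, 0],              # true outcome labels
    "race": ["White", "Black", "Asian", "White"],  # demographic column
    "gender": ["F", "M", "F", "M"],                # demographic column
})
df.to_csv("your_dataset.csv", index=False)
```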
You can then run the following from the command line:
python measure_disparity.py --dataset your_dataset.csv
See the Demo: Measuring Disparity section for additional information.
To use `mitigate_disparity.py`, you must first have a pandas DataFrame containing the observations, the true labels, and any number of demographic columns, saved to CSV.
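Again as an illustrative sketch (the feature and column names are hypothetical), the mitigation input holds raw observations rather than model outputs:

```python
import pandas as pd

# Hypothetical example: feature columns (observations), true labels,
# and demographic columns, saved to CSV for the script.
df = pd.DataFrame({
    "age": [54, 37, 62, 45],       # example feature column
    "prior_visits": [3, 1, 7, 2],  # example feature column
    "y_true": [1, 0, 1, 0],        # true outcome labels
    "race": ["White", "Black", "Asian", "White"],  # demographic column
    "gender": ["F", "M", "F", "M"],                # demographic column
})
df.to_csv("your_dataset.csv", index=False)
```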
You can then run the following from the command line:
python mitigate_disparity.py --dataset your_dataset.csv
See the Demo: Mitigating Disparity section for additional information.
InterFair is licensed under the BSD 3-Clause License. See LICENSE.
Team: William La Cava and Elle Lett
Team Lead Contact: [email protected]
When they are not competing in hackathons, William and Elle can be found conducting research at the Cava lab, part of the Computational Health Informatics Program at Boston Children's Hospital.