
Commit

Re-editing the post
sari-saba-sadiya committed Nov 22, 2024
1 parent a58a645 commit 422a7a5
Showing 1 changed file with 4 additions and 4 deletions.
8 changes: 4 additions & 4 deletions Blog/BlogFiles/April2023_Fairness.md
@@ -3,19 +3,19 @@
April 2024, Tags: Fairness, Introductory Reading

<img src="./Blog/BlogImg/April2023_Fairness.webp" style="width:70%; object-fit: contain;"></img>
<p class="text-sm-center" style="margin:0em 8em 1em; font-size:0.8em">Image from Midjourney with prompt "A man facing the blind justice system"</p>
<p class="text-sm-center" style="margin:0em 8em 1em; font-size:0.8em">Midjourney generated image. Prompt "A man facing a blind justice system"</p>

- Recently, the field of AI fairness has exploded, with many conferences adding special tracks and requiring authors make statements regarding the potential social impact of their work. Here we review some philosophical, technical, and practical papers that tackle fairness in machine learning applications.
+ Recently, the field of AI fairness has exploded (see graph below), with many conferences adding special tracks and requiring societal impact statements from researchers. Here we review some philosophical, technical, and practical papers that tackle fairness in machine learning applications.

<img src="/Blog/BlogOther/Fairness_Pubs.png" style="width:70%;"></img>
<img src="/Blog/BlogOther/Fairness_Pubs.png" style="width:50%;"></img>
<p class="text-sm-center" style="margin:0em 8em 1em; font-size:0.8em">Number of publications focusing on fairness in AI by year, From <a href="https://www.dimensions.ai/">dimensions.ai</a>.</p>

## A Techno-Bureaucratic Complex?
Fashionably late to the party, governments finally began firing their regulatory engines in earnest in mid-2022. This is more than a little worrying, considering that algorithms like [COMPAS](https://en.wikipedia.org/wiki/COMPAS_(software)) - software used to predict recidivism rates of criminal defendants - have been in use since the mid-2010s and remain widely used despite evidence of discriminatory bias reported in early 2016 (the analysis showed that Black defendants who did not re-offend were nearly twice as likely as white defendants to be labeled higher risk; see [Angwin's excellent ProPublica article](https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing)).
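
To make the reported disparity concrete, here is a minimal sketch of the false-positive-rate comparison at the heart of the ProPublica analysis. The data and the column names ("race", "high_risk", "reoffended") are hypothetical stand-ins for illustration, not the actual COMPAS dataset:

```python
import pandas as pd

# Hypothetical toy data standing in for the COMPAS dataset:
# "high_risk" is the algorithm's label, "reoffended" the observed outcome.
df = pd.DataFrame({
    "race":       ["Black"] * 4 + ["White"] * 4,
    "high_risk":  [1, 1, 0, 0, 1, 0, 0, 0],
    "reoffended": [0, 0, 0, 1, 0, 0, 0, 1],
})

def false_positive_rate(group: pd.DataFrame) -> float:
    """Share of people who did NOT re-offend but were labeled high risk."""
    non_reoffenders = group[group["reoffended"] == 0]
    return non_reoffenders["high_risk"].mean()

# An equal-error-rate notion of fairness would require these
# per-group rates to match; in this toy data they differ 2:1.
print(df.groupby("race").apply(false_positive_rate))
```

ProPublica's reported rates (roughly 45% for Black defendants versus 23% for white defendants) come from exactly this kind of per-group false positive rate comparison, which is why much of the fairness literature frames the COMPAS debate in terms of equalized error rates.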

One major new piece of regulation is Biden's [White House Blueprint for an AI Bill of Rights](https://www.whitehouse.gov/ostp/ai-bill-of-rights/). While not an enforceable law, this document could (theoretically) heavily shape future legislation. The framework outlined in the document integrates input from many ML researchers and stakeholders, and includes a "From Principles to Practice" section with examples of successful applications of the proposed framework.

- The first few pages are important for contextualizing the framework. First, the authors emphasise that the principles outlined in the document *"are not intended to, and do not, prohibit or limit any lawful activity of a government agency, including law enforcement, national security, or intelligence activities"*\[1\]. Second, they note that *Systemic, fair, and just treatment must take into account the status of individuals who belong to underserved communities*. These communities are specifically defined by what U.S. law considers "protected characteristics" (disability, age, race, gender, religion, sexual orientation, maternity, partnership status ...). Considering the document begins with these qualifiers, it is safe to abandon any hope of attempts to rein in the use of algorithms that, for instance, punish poor people for [shopping at Walmart rather than WholeFoods](https://www.nytimes.com/2009/01/31/your-money/credit-and-debit-cards/31money.html) or not having enough time to go to a fancy gym (I recommend the excellent Cathy O'Neil book *Weapons of Math Destruction* for more examples).
+ The first few pages are important for contextualizing the framework. First, the authors emphasise that the principles outlined in the document *"are not intended to, and do not, prohibit or limit any lawful activity of a government agency, including law enforcement, national security, or intelligence activities"*\[1\]. Second, they note that *Systemic, fair, and just treatment must take into account the status of individuals who belong to underserved communities*. These communities are specifically defined by what U.S. law considers "protected characteristics" (disability, age, race, gender, religion, sexual orientation, maternity, partnership status ...). Considering the document begins with these qualifiers, it is safe to abandon any hope that it will rein in the use of algorithms that, for instance, punish poor people for [shopping at Walmart rather than Whole Foods](https://www.nytimes.com/2009/01/31/your-money/credit-and-debit-cards/31money.html) (for concrete examples, I recommend Cathy O'Neil's excellent book *Weapons of Math Destruction*).

The rest of the document focuses on five points. First, systems should be safe and effective, with clear organization-level plans for assessment and testing. Second, there should be built-in protection against algorithmic discrimination: designers, developers, and deployers of automated systems should take proactive and continuous measures to protect individuals and communities from algorithmic discrimination and to use and design systems in an equitable way. <u>Proactive equity assessments</u> should be part of the system design, and protections against <u>proxies for demographic features</u> should be implemented. Third, the document discusses data privacy, specifying that *"only data strictly necessary for the specific context is collected"* and that *"in sensitive domains, your data and related inferences should only be used for necessary functions"*. Clearly, this vague language leaves a wide-open door for data misuse, as any data that might help increase profits could be argued to be necessary\[2\]. Fourth, notice and explanation should be given when your data is used in an automated system. Finally, the document emphasises the importance of human alternatives and fallbacks, allowing access to a person instead of only the automated system.
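
As an illustration of what screening for demographic proxies could look like in practice, here is a minimal sketch that flags features strongly correlated with a protected attribute. The data, column names, and threshold are hypothetical assumptions chosen for this example; the Blueprint itself prescribes no specific method:

```python
import pandas as pd

# Hypothetical applicant data; "zip_income" stands in for a feature
# that may act as a proxy for a protected attribute such as race.
df = pd.DataFrame({
    "is_minority": [1, 1, 1, 0, 0, 0],        # protected attribute
    "zip_income":  [32, 35, 30, 80, 75, 90],  # neighborhood income, $1000s
    "experience":  [4, 9, 2, 5, 8, 3],        # years of work experience
})

PROTECTED = "is_minority"
THRESHOLD = 0.7  # arbitrary cutoff chosen for this illustration

# Flag candidate proxies: features whose absolute correlation with
# the protected attribute exceeds the threshold.
for col in df.columns.drop(PROTECTED):
    r = df[col].corr(df[PROTECTED])
    if abs(r) > THRESHOLD:
        print(f"{col}: corr={r:+.2f} -> possible proxy, review before use")
```

A simple correlation screen is only a first pass: a proxy can also be a nonlinear combination of several individually innocuous features, which is why the document's call for proactive and continuous equity assessments matters.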

