diff --git a/Blog/BlogFiles/April2023_Fairness.md b/Blog/BlogFiles/April2023_Fairness.md
index 7e0a6a0..9d205c0 100644
--- a/Blog/BlogFiles/April2023_Fairness.md
+++ b/Blog/BlogFiles/April2023_Fairness.md
@@ -3,11 +3,11 @@
 April 2024, Tags: Fairness, Introductory Reading
 
-Image from Midjourney with prompt "A man facing the blind justice system"
+Midjourney-generated image. Prompt: "A man facing a blind justice system"
-Recently, the field of AI fairness has exploded, with many conferences adding special tracks and requiring authors make statements regarding the potential social impact of their work. Here we review some philosophical, technical, and practical papers that tackle fairness in machine learning applications.
+Recently, the field of AI fairness has exploded (see graph below), with many conferences adding special tracks and requiring societal impact statements from researchers. Here we review some philosophical, technical, and practical papers that tackle fairness in machine learning applications.
 
-
+Number of publications focusing on fairness in AI by year. From dimensions.ai.
 
 ## A Techno-Bureaucratic Complex?
 
@@ -15,7 +15,7 @@ Fashionably late to the party, governments finally began firing their regulatory
 One major new piece of regulation is Biden's [The White House Blueprint for an AI Bill of Rights](https://www.whitehouse.gov/ostp/ai-bill-of-rights/). While not an enforceable law, this document could (theoretically) heavily shape future legislation. The framework outlined in the document integrates input from many ML researchers and stakeholders, and includes a "From Principles to Practice" section with examples of successful applications of the proposed framework.
 
-The first few pages are important for contextualizing the framework. First, the authors emphasise that the principles outlined in the document *"are not intended to, and do not, prohibit or limit any lawful activity of a government agency, including law enforcement, national security, or intelligence activities"*\[1\]. Second, they note that *Systemic, fair, and just treatment must take into account the status of individuals who belong to underserved communities*. These communities are specifically defined by what U.S. law considers "protected characteristics" (disability, age, race, gender, religion, sexual orientation, maternity, partnership status ...). Considering the document begins with these qualifiers, it is safe to abandon any hope of attempts to rein in the use of algorithms that, for instance, punish poor people for [shopping at Walmart rather than WholeFoods](https://www.nytimes.com/2009/01/31/your-money/credit-and-debit-cards/31money.html) or not having enough time to go to a fancy gym (I recommend the excellent Cathy O'Neil book *Weapons of Math Destruction* for more examples).
+The first few pages are important for contextualizing the framework. First, the authors emphasise that the principles outlined in the document *"are not intended to, and do not, prohibit or limit any lawful activity of a government agency, including law enforcement, national security, or intelligence activities"*\[1\]. Second, they note that *Systemic, fair, and just treatment must take into account the status of individuals who belong to underserved communities*. These communities are specifically defined by what U.S. law considers "protected characteristics" (disability, age, race, gender, religion, sexual orientation, maternity, partnership status ...). Considering the document begins with these qualifiers, it is safe to abandon any hope of attempts to rein in the use of algorithms that, for instance, punish poor people for [shopping at Walmart rather than Whole Foods](https://www.nytimes.com/2009/01/31/your-money/credit-and-debit-cards/31money.html) (for concrete examples, I recommend Cathy O'Neil's excellent book *Weapons of Math Destruction*).
 
 The rest of the document focuses on five points. First, systems should be safe and effective, with clear organization-level plans for assessment and testing. Second, there should be built-in protections against algorithmic discrimination: designers, developers, and deployers of automated systems should take proactive and continuous measures to protect individuals and communities from algorithmic discrimination and to use and design systems in an equitable way. Proactive equity assessments should be part of the system design, and protections against proxies for demographic features should be implemented (one simple proxy check is sketched after the five points).
 
 Third, the document discusses data privacy, specifying that *"only data strictly necessary for the specific context is collected"* and that *"in sensitive domains, your data and related inferences should only be used for necessary functions"*. Clearly, this vague language leaves a wide-open door for data misuse, as any data that might help increase profits could be argued to be necessary \[2\]. Fourth, notice and explanation should be given when your data is used in an automated system. Finally, the document emphasises the importance of human alternatives and fallbacks, allowing access to a person instead of only the automated system.
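+
+To make the second point's "proxies for demographic features" concrete, here is a minimal sketch of one common screen (my own illustration, not anything from the Blueprint): try to predict the protected attribute from each candidate feature, and flag features that predict it too well. The function name, `df`, and the column names are all hypothetical, and the sketch assumes numeric features and a binary protected attribute.
+
+```python
+# Hypothetical proxy-feature screen: a feature that predicts a protected
+# attribute well can act as a proxy for it, even if the attribute itself
+# is never fed to the model.
+import pandas as pd
+from sklearn.linear_model import LogisticRegression
+from sklearn.model_selection import cross_val_score
+
+def flag_proxy_features(df: pd.DataFrame, protected: str,
+                        auc_threshold: float = 0.7) -> list:
+    """Return columns whose cross-validated AUC for predicting the
+    protected attribute exceeds auc_threshold."""
+    y = df[protected]
+    flagged = []
+    for col in df.columns.drop(protected):
+        # Score each feature on its own; assumes numeric features
+        # and a binary protected attribute (required for ROC AUC).
+        auc = cross_val_score(LogisticRegression(max_iter=1000),
+                              df[[col]], y, cv=5,
+                              scoring="roc_auc").mean()
+        if auc > auc_threshold:
+            flagged.append(col)
+    return flagged
+
+# e.g. zip code is a notorious proxy for race in U.S. credit data:
+# flag_proxy_features(loans_df, protected="race")
+```
+
+A feature flagged this way is not automatically illegitimate, but it is exactly the kind of variable a proactive equity assessment should force designers to justify before deployment.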