
E.5: systematic effects (consider adding) #147

Open
glipstein opened this issue Jun 6, 2022 · 4 comments

glipstein commented Jun 6, 2022

Following up from the comment thread in PR #140.

A few thoughts below, if we think this is worth adding. E.1 is related, but given everything already packed into E.1, this feels like a sufficiently different question to warrant its own item.

Possible wording:

E.5 Systematic effects
(could also be, say, E.3, pushing the current E.3 and E.4 down to E.4/E.5)

Have we considered risks posed by the scale, speed, or rigidity of the deployed model that aren't present in the equivalent human or prototype process (e.g., reinforced outcomes and feedback loops, ability to consider missing variables, societal impacts)?
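
For concreteness, here is a rough sketch of how this could be encoded in deon's checklist.yml; the field names (line_id, line_summary, line) are my assumption based on the existing entries, and the wording is just the draft above:

# hypothetical sketch; field names assumed to match deon's checklist.yml
- line_id: E.5
  line_summary: Systematic effects
  line: >-
    Have we considered risks posed by the scale, speed, or rigidity of the
    deployed model that aren't present in the equivalent human or prototype
    process (e.g., reinforced outcomes and feedback loops, ability to consider
    missing variables, societal impacts)?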

Possible examples:

I think this feedback-loop example (formerly filed under "concept drift") actually fits here; the skew is a result of using the model, not just a distribution that happens to shift on its own (a small simulation of the dynamic follows the links below):

- text: Sending police officers to areas of high predicted crime skews future training data collection as police are repeatedly sent back to the same neighborhoods regardless of the true crime rate.
  url: https://www.smithsonianmag.com/innovation/artificial-intelligence-is-now-used-predict-crime-is-it-biased-180968337/
- text: -- Related academic study.
  url: https://arxiv.org/abs/1706.09847
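
To make the reinforcement mechanism concrete, here is a minimal, hypothetical Python simulation (mine, not code from the linked paper or this repo) in the spirit of the urn model in the arXiv study above: patrols are dispatched to whichever precinct has the most historically observed crime, and new crime is only recorded where patrols go. The precinct names and rates are illustrative.

import random

random.seed(0)

true_rate = {"A": 0.10, "B": 0.11}  # nearly identical underlying crime rates
observed = {"A": 1, "B": 2}         # tiny initial imbalance in the training data

for _ in range(1000):
    # dispatch to the precinct with the most historically observed crime
    target = max(observed, key=observed.get)
    # crime is only recorded where officers are sent
    if random.random() < true_rate[target]:
        observed[target] += 1

print(observed)  # roughly {'A': 1, 'B': 110}

Even though the true rates are nearly identical, the precinct with the initial data advantage receives every patrol and generates all subsequent training data, so the skew compounds rather than self-corrects. That is the sense in which this is a systematic effect of deploying the model, not ordinary concept drift.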

Curious if folks have thoughts on other examples to represent broader systemic impacts that might not be captured elsewhere in the checklist.
One possible example: the speed of misinformation spread on Twitter (the Science article, Twitter's crisis misinformation policy of slowing down viral tweets, etc.).

glipstein (Contributor, Author) commented:

Chatted with Peter; a couple of thoughts to consider:

  • Are there clear examples where you could in good faith check the other boxes and still miss this? Let's start dropping candidates in so we can evaluate.
  • One candidate is the police example above, which is a feedback loop resulting from the algorithm (not just general concept drift); if that isn't captured elsewhere, it could be worth considering where to add it.


glipstein commented Jul 8, 2022

Relevant paper on the speed of algorithmic decision-making: "Decision Time: Normative Dimensions of Algorithmic Speed"
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4078006

Context from the article where this was linked:

As Daniel Susser argues in his recent paper, the speed at which automated decisions are reached has normative implications. By incorporating digital technologies in decision-making processes, temporal norms and values that govern them are impacted, disrupting prior norms, re-calibrating balanced trade-offs, or displacing automation’s costs. As Susser suggests, speed is not necessarily bad; however, “using computational tools to speed up (or slow down) certain decisions is not a ‘neutral’ adjustment without further explanations.”


glipstein commented Apr 4, 2023

Another place this is coming up is LLMs (this came to mind as a potential gap from the part of the AFD post on LLMs + deon). One of the clear issues with deployment is the ease and scale of generating content for things like school essays or peer-review forums (e.g., as mentioned here), which go beyond one-off issues of misuse.

That said, I'm not totally sure what the approach to addressing this would be. cc @jayqi @ejm714 @pjbull if you have any thoughts while we're on the topic, especially regarding deon with generative AI as opposed to traditional predictive algorithms.

glipstein (Contributor, Author) commented:

Adding from @pjbull's note on feedback loops in generative AI: chain of thought, risk of signal degradation, e.g., ChaosGPT?

It may be that this item should focus more on feedback loops (including reinforced outcomes, per the first example above) and less on speed/scale.
