Updated Ai Government Balancing Productivity Accountability
pauldykes authored and Siteleaf committed Aug 20, 2024
1 parent 23853f7 commit d56fe8d
Showing 1 changed file with 1 addition and 1 deletion.
@@ -54,7 +54,7 @@ Explainability refers to the ability to understand and interpret how an AI syste

However, it’s a complex area and research is still underway into explainability techniques. In the recent hype around GenAI, the focus has been much more on demonstrating what it can do, rather than how it does it. For now, public sector projects will need to weigh up the pros and cons of using AI products from vendors (where the models are less likely to be explainable) versus developing their own models. [As the Ada Lovelace Institute suggests](https://www.adalovelaceinstitute.org/policy-briefing/foundation-models-public-sector/), it might also be wise for the government to be ‘fast followers’, “adopting established technologies and practices once they have been tried and tested,” rather than trying to be at the cutting edge.

- As important as explainability will be in the harnessing of AI by the public sector, it isn’t a silver bullet. Particularly while techniques are still maturing, explainability will depend on people who are trained in interpreting the explanations of a model’s outputs, based on sufficient knowledge of the context and how the model works.
+ As important as explainability will be in the adoption of AI by the public sector, it isn’t a silver bullet. Particularly while techniques are still maturing, explainability will depend on people who are trained in interpreting the explanations of a model’s outputs, based on sufficient knowledge of the context and how the model works.

## Beyond the hype, human involvement remains vital

