We want our workshop on AI and Inequalities on 19/11 to be firmly anchored in reality and to result in high-quality proposals for EU horizontal AI legislation principles:
- scope of the rules (which ‘AI’ systems should be covered: only top consumer/citizen-facing applications, or also other/underlying infrastructure?)
- type of obligations (only transparency/documentation requirements? What should they contain?)
- how to differentiate depending on scale (e.g. me building a system affecting 10 people vs. FB unleashing a new content-ranking algorithm on a quarter of the world’s population) and risk (a system deciding which cat video I will be shown vs. decisions on employment, insurance, housing, or predictive policing).
So we want to gather/build a set of case studies where specific technological choices have resulted in increasing or decreasing justice.
Some examples include:
- AlgorithmWatch produces yearly reports on automated decision-making in Europe (see its January 2019 report, ‘Automating Society - Taking Stock of Automated Decision-Making in the EU’).
- The UK Centre for Data Ethics and Innovation is also carrying out a review of the positive and negative effects of automated decision-making; its first interim results have just been published.
- Interesting examples could come from the insurance sector, or the case of the Austrian labour authority using an algorithm to decide which unemployed people to help in finding a new job (criterion: efficiency narrowly defined, i.e. those who already have the highest chance of finding a new job in the first place…).
Do you know of one to add to the list? Please post it in a comment below.