nadia
October 04, 2019 12:49
horizontal AI legislation (principles) in early 2020. I'm not an expert, but I reckon the issues they will have to come to grips with could include:
- Scope of the rules: which ‘AI’ systems should be covered — only top consumer/citizen-facing applications, or also other/underlying infrastructure?
- Type of obligations: only transparency/documentation requirements? What should they contain?
- How to differentiate depending on scale (i.e. me building a system affecting 10 people vs. FB unleashing a new content-ranking algorithm on a quarter of the world's population) and risk (a system deciding which cat video I will be shown vs. decisions on employment, insurance, housing, or predictive policing).
Could it be an idea to look at case studies at the event, with these questions in mind? As for the case studies themselves, AlgorithmWatch produces yearly reports on automated decision-making in Europe (see its January 2019 report, ‘Automating Society – Taking Stock of Automated Decision-Making in the EU’). The UK Centre for Data Ethics and Innovation is also carrying out a review of the positive and negative effects of automated decision-making; the first interim results are just out, I think. Interesting examples could be the insurance sector, or the case of the Austrian labour authority using automated decision-making to decide which unemployed people to help in finding a new job (the criterion being efficiency narrowly defined, i.e. prioritising those who already have the highest chance of finding a new job in the first place…).
Apologies for the lengthy post, but it's my first here, so please cut me some slack.
Yes, I think this makes complete sense. Would you have time to pick a couple of case studies you find especially relevant and post them here?
In parallel, what we will do via the comms team is ask the internet, and the other participants in the workshop, for suggested case studies that we should include — and get the conversation going around the three questions/challenges you outlined.