Workshop on Inequalities in the Age of AI: what they are, how they work and what we can do about them - 19/11 - Brussels

Timing-wise it’s great, as the European Commission will come up with horizontal AI legislation (principles) in early 2020. I’m not an expert, but I reckon the issues they will have to come to grips with could include:

  • scope of the rules (which ‘AI’ systems should be covered: only prominent consumer/citizen-facing applications, or also other/underlying infrastructure?)
  • type of obligations (only transparency/documentation requirements, and what should they contain?)
  • how to differentiate depending on scale (i.e. me building a system affecting 10 people vs. FB unleashing a new content-ranking algorithm on a quarter of the world’s population) and risk (a system that decides which cat video I will be shown vs. decisions on employment, insurance, housing or predictive policing).

Could it be an idea to look at case studies at the event, with these questions in mind? For the case studies themselves, AlgorithmWatch produces yearly reports looking at automated decision-making in Europe (Jan 2019 report, ‘Automating Society - Taking Stock of Automated Decision-Making in the EU’). The UK Centre for Data Ethics and Innovation is also carrying out a review of the positive and negative effects of automated decision-making; the first interim results are just out, I think. Interesting examples could be the insurance sector, or the case of the Austrian labour authority using automated decision-making to decide which unemployed people to help in finding a new job (criterion: efficiency narrowly defined, i.e. those who already have the highest chance of finding a new job in the first place…) — a rough sketch of that kind of logic below.
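To make the Austrian example a bit more concrete, here is a minimal, purely hypothetical sketch of what such an “efficiency-first” triage rule looks like (names, scores and thresholds are made up for illustration; this is not the actual AMS system): each job seeker gets a predicted re-employment score, and the limited support budget is allocated to those who score highest, i.e. precisely the people least in need of help.

```python
from dataclasses import dataclass

@dataclass
class JobSeeker:
    name: str
    reemployment_score: float  # hypothetical model output in [0, 1]

def allocate_support(seekers, budget):
    """Hypothetical 'efficiency-first' triage: spend the limited support
    budget on the people most likely to find a new job anyway."""
    ranked = sorted(seekers, key=lambda s: s.reemployment_score, reverse=True)
    return ranked[:budget]

seekers = [
    JobSeeker("A", 0.85),  # recently employed, good prospects -> high score
    JobSeeker("B", 0.40),
    JobSeeker("C", 0.15),  # long-term unemployed -> low score, gets no help
]

print([s.name for s in allocate_support(seekers, budget=1)])  # -> ['A']
```

Defining “efficiency” this narrowly channels scarce support away from those who would benefit from it most, which is exactly the kind of inequality question the workshop could dig into.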

Apologies for the lengthy post, but it’s my first here, so please cut me some slack.
