Hi, I’m Anton Ekker and I won the first case in the EU against government use of an algorithmic decision-making system. AMA!

In 2014, the Dutch government introduced legislation approving the use of a risk-scoring algorithm to detect welfare fraud. This system, called System Risk Indication (SyRI), pools data from various government agencies to calculate the likelihood that a person will commit welfare or tax fraud. The UN Special Rapporteur on Extreme Poverty and Human Rights has described governments’ use of similar systems as “digital welfare states” and condemned them for their lack of transparency and oversight and their discriminatory impacts. In the case of SyRI, we discovered that the system used neighbourhood data to profile migrant and low-income communities.

With a coalition of privacy organisations, we challenged SyRI on grounds of privacy and equality violations. Earlier this year, the Dutch court found SyRI to be unlawful and ordered its immediate halt. You can read more here and here.

This case sets a strong legal precedent for future cases. Unfortunately, the Dutch government recently introduced new legislation allowing even more intrusive use of personal data for automated decision-making. The fight continues!

Join me and others this Wednesday at 6:30pm CET to AMA about automated decision-making systems and their impacts on our human rights.

What is an AMA?
AMA, or Ask Me Anything, is an interviewing format popularised via Reddit. In short, you ask me questions, and I answer them live for an hour.

How does it work?
Anyone is welcome to post questions/comments below. On Wednesday, I’ll come back to this thread to start answering questions I find interesting. I will do my best to reply to as many questions as possible, but please note that not all questions/comments will be addressed.

Who can join?
Anyone! As the conversations get going during the hour, you will see multiple threads naturally emerging. There is an open invitation for everyone to contribute. Please feel free to reach out to other community members, either on the thread or via DM, to continue the discussion!

photo credit: Pete Linforth

10 Likes

This is awesome Anton! This is a great victory for all of us.

I’d be curious to know if (and how) you are tying your goals to the Free Software/Open Source movement. Have you maybe written about this somewhere?

2 Likes

Hey @antonekker, thanks for doing this!

At the NGI Forum last year, Nesta hosted a discussion on a potential “NGI Trustmark” for AI. One of the issues brought up by an AI researcher was this:

How do you think this could work in practice? And is a Trustmark a good idea, or is it a waste of effort?

1 Like

Another idea I came up with during that discussion on Trustmarks was quite a radical proposal: that any AI trained on public data, or on data acquired in a public setting, must be released to the public, by EU law. What do you think about that proposal?

1 Like

ping @pbihr !

Awesome, indeed. Welcome, @antonekker, and thank you so much for this. I’ll be there… which means here. 🙂

Promising, but… what does it mean, exactly, to “release an AI”?

I wonder if @yudhanjaya knew about this… right up your street!

To release the input variables and the code that generates the AI. It is, after all, deterministic. This does not have to be the same thing as releasing private data if the data is anonymized and scrambled. Once released, anyone with the required computing power could test the AI for bias by feeding it test data.
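
Purely as an illustration of what such a bias test could look like, here is a minimal sketch. It assumes a released scikit-learn-style model exposing a predict method and a labelled test set; the column and group names are hypothetical, not taken from SyRI or any real system.

```python
# Sketch of the kind of bias audit that a publicly released model would allow.
# Assumes a scikit-learn-style model with .predict(); the column names
# ("income", "benefits", "address_changes", "group") are purely illustrative.
import pandas as pd

def selection_rates(model, test_df, feature_cols, group_col="group"):
    """Share of each group that the model flags as 'risky'."""
    df = test_df.copy()
    df["flagged"] = model.predict(df[feature_cols])
    return df.groupby(group_col)["flagged"].mean()

def disparate_impact(rates):
    """Ratio of lowest to highest selection rate (the '80% rule' heuristic)."""
    return rates.min() / rates.max()

# Usage (illustrative):
# rates = selection_rates(released_model, test_data,
#                         ["income", "benefits", "address_changes"])
# print(rates)
# print("disparate impact ratio:", disparate_impact(rates))
```

If that ratio falls well below 0.8, the model selects one group far more often than another; that is exactly the kind of check that is impossible while the system stays closed.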

1 Like

@antonekker: could you explain the mechanism whereby, exactly, SyRI targeted especially the poor? After all, if you are looking at social welfare fraud, most people using social welfare are likely to be poor.

And another question: in your opinion, what made the court decide to order SyRI’s shutdown? Was it opacity? Or lack of fairness? Or something else?

It appears that the mere act of using algorithms per se is not the main issue, unless they discriminate, profile unfairly, fail to disclose or obtain permission, or otherwise violate the European Convention on Human Rights and/or the GDPR. Algorithms are used to determine someone’s creditworthiness, in crime fighting, and for other purposes.

Do you think the government will try to make some adjustments to comply with Article 8 and the GDPR and then try to carry on with the program? On reading the ruling, it seems the Court is open to it. (“The court shares the position of the State that those new technological possibilities to prevent and combat fraud should be used.”)

I found this part of the Judgement fascinating. It seems to me to be saying that profiling is pretty accurate, but we aren’t sure why, since it isn’t based on substantive merits. Thus there are no admissible reasons that justify it. So, it’s bad to profile people, but it works pretty well. But just because you can doesn’t mean you should. This is a fundamental dynamic. It seems like you should almost need a warrant to do it, like wiretapping.

"The term “self-learning” is confusing and misleading: an algorithm does not know and understand reality. There are predictive algorithms which are fairly accurate in predicting the outcome of a court case. However, they do not do so on the basis of the substantive merits of the case. They can therefore not substantiate their predictions in a legally sound manner, while that is required for all legal proceedings for each individual case. (…)

The reverse also applies: the human user of such a self-learning system does not understand why the system concludes that there is a link. An administrative organ that partially bases its actions on such a system is unable to properly justify its actions and to properly substantiate its decisions.”

Your current case is on behalf of Uber drivers who were fired by algorithmic decision-making, which also, significantly, identifies the reason as “fraud” and thereby triggers deeper problems for the drivers.

Uber is famously bad to its drivers, but what is the outcome you hope to achieve with this case? Are you looking to prevent all machine-generated judgements against individuals? Or is your focus on ensuring fair due process for people who are the recipients of such judgements?

1 Like

What were the biggest challenges in bringing your case to the court? Given how “blackboxed” automated systems can be, was it difficult to explain and convince the court about how they work and how they influence people?

Hi @antonekker! Thank you for sharing your work with us!
According to Algorithm Watch, SyRI aims to find "unlikely citizen profiles". Can you tell us more about how those profiles are generated, i.e. how "likely citizen profiles" are created? Is this based on historical data about previous welfare/tax fraud? And does SyRI include sociodemographic data that is unrelated to a history of welfare/tax fraud?

SyRI already sounded scary! What concerns you the most about this new tool?

Hi Alberto! Thanks for your interesting question.

The SyRI system was used in ‘SyRI projects’. These projects were targeted at specific neighborhoods that were considered ‘problem districts’. Therefore, the profiling that took place in SyRI mostly affected groups with a lower socio-economic status and/or a minority/immigration background.

The court also mentions the ‘echo chamber’ effect: irregularities in the targeted areas might reinforce a negative image of their occupants. This might lead to stereotyping.

The Court’s decision to end the use of SyRI was primarily based on the European Convention on Human Rights (ECHR). The use of SyRI was insufficiently transparent and verifiable. It also violated a number of privacy principles, such as ‘purpose limitation’ and ‘data minimization’. For those reasons, the use of SyRI cannot always be considered proportionate and necessary.

2 Likes

Hi John, actually I’m quite sure that the government will try to make adjustments to carry on with similar projects. After the judgment was given, the Dutch State decided not to appeal it. At first, my clients were very surprised. However, shortly after that, the government introduced a new legislative proposal that provides a general framework for SyRI like systems. We call it ‘Super SyRI’.

Under the new law, risk profiling technologies can be introduced in several domains. The law only provides a general framework. Specific requirements will be set by ministerial decree, which is in itself problematic from a constitutional perspective.

1 Like

On behalf of @CCS:

Where do systems like SyRI originate? Is there a pattern to that? Did it originate with the people subjected to it (the government agency) or with others (like digital rights NGOs)? How do we create the necessary ties so that cases not on the radar of tech orgs + wired politicians get addressed as well?

Here I have two further questions (sorry, Anton!).

  1. On the basis of what information were they considered “problem districts”?
  2. I can imagine the following mechanism for unfairly affecting the groups you mention: if you look for irregularities, you tend to find them (with some probability). According to some observers, the American police search mostly young black men, which results in young black men accounting for a disproportionately high number of positive searches (and also negative ones). Was this what worried the courts? Solution: unleash SyRI on randomly chosen citizens. (A quick simulation of this effect is sketched below.)
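
Purely to illustrate the mechanism in point 2, here is a minimal sketch. The group labels, the number of checks and the 5% irregularity rate are all hypothetical, not figures from SyRI or any real police force.

```python
# Two groups with the SAME underlying irregularity rate, but one group is
# checked five times as often. It therefore supplies most of the detected
# "positives", even though it is no more irregular than the other group.
import random

random.seed(42)
IRREGULARITY_RATE = 0.05                                    # identical for both groups
CHECKS = {"targeted group": 50_000, "other group": 10_000}  # 5x more scrutiny

positives = {group: sum(random.random() < IRREGULARITY_RATE for _ in range(n))
             for group, n in CHECKS.items()}

total = sum(positives.values())
for group, hits in positives.items():
    print(f"{group}: {hits} positives ({hits / total:.0%} of all detections)")
# Expected outcome: the targeted group accounts for roughly 83% of detections,
# purely because it was checked more often.
```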

Clear, thanks.

2 Likes

Hey there Jeremy!

Currently, there is no connection to the Free Software / Open Source movement. However, certain aspects of algorithmic decision-making might be addressed in a way that resembles the ‘open source approach’. I’m thinking about standards for how to prevent bias and discrimination, how to assess the impact of algorithms, and how to explain the outcomes. Such standards might be assessed and improved within the public domain. There are many different societal contexts and use cases that would have to be addressed, for instance the financial sector, automotive, health care, etc.

Does that seem plausible to you?

2 Likes