Hi, I’m Anton Ekker and I won the first case in the EU against government use of an algorithmic decision-making system. AMA!

Second that. It is a very interesting, and tricky, question.

1 Like

Hi Kate. There were several challenges. First, it took a lot of effort for the plaintiffs (the privacy organizations) to unite and form a coalition. Fortunately, the Digital Freedom Fund (DFF) provided financial support.

Before going to trial, we attempted to gain access to the algorithm and the ‘indicators’ that were used by the government via a Freedom of Information Request (‘FOIA’). This took a lot of time, mainly because the State didn’t comply with legal deadlines and, in the end, provided little relevant information.

I think the court struggled to understand the technical aspects of SyRI, partly because the Dutch State didn’t provide enough detailed information. However, this probably worked in our favor. The court seemed to be well aware of the societal relevance of the case. In this respect, the amicus brief drafted by the UN Special Rapporteur on extreme poverty and human rights certainly helped.

2 Likes

Thanks Anton :slight_smile:
For those of you who are interested, you can read the Special Rapporteur’s amicus curiae here.

1 Like

Second @Seamus_Montgomery. Welcome, by the way!

1 Like

Hi Leonie,

The SyRI legislation infringed privacy rights in several ways. First, there was no clear description of its goals and of the data categories involved. Also, the Dutch State did not comply with its duty of transparency.

1 Like

I guess it depends on the data and on the indicators used. To make good predictions you need reliable data and strong indicators. That is a big problem if you use data from government databases, because such data is often unreliable. Garbage in = garbage out.
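
A minimal sketch of the garbage-in-garbage-out effect, using synthetic data (none of this reflects the actual SyRI system, and the 30% noise level is an arbitrary assumption):

```python
# Train the same classifier on clean labels and on labels with 30%
# random flips, then compare test accuracy. Synthetic data only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# "Garbage in": flip 30% of the training labels at random.
rng = np.random.default_rng(0)
y_noisy = np.where(rng.random(len(y_tr)) < 0.30, 1 - y_tr, y_tr)

clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
noisy = LogisticRegression(max_iter=1000).fit(X_tr, y_noisy)
print("accuracy, clean labels:", clean.score(X_te, y_te))
print("accuracy, noisy labels:", noisy.score(X_te, y_te))  # usually lower
```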

1 Like

Out of curiosity, what were the core arguments the Dutch government offered for deploying these algorithms? What, in your view, is the best argument the Dutch government made in this context?

1 Like

The State argued that SyRI was necessary for preventing and combating fraud in the interest of economic welfare. The Court agreed that this is an important societal interest. However, the State failed to demonstrate that SyRI was a proportionate means to reach this goal.

2 Likes

That depends on what we mean by “a good prediction”, @antonekker. If we are happy with being “good” (outperforming randomness) at the aggregate level, we might need very little data. For example, in predicting the outcome of football matches, the simplest model (“the home team always wins”) does a little better than random. A few years ago, Hal Varian (Google’s chief economist) went on record saying something like “if you have 99% correlation, who cares about causation”. But this extra performance only applies to predicting a whole lot of football matches (the population); it is useless if you are trying to predict one match in particular.

I think @katejsim worries that prejudices outperform randomness. If you don’t care about fairness and the rights of the individual, you could indeed predict that poorer neighborhoods would have more social welfare fraud than rich ones. But this would come at the expense of treating poorer individuals fairly, and, unlike with football matches, it would end up reinforcing the very conditions that force those people to apply for welfare in the first place.
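
As a toy illustration of the aggregate-level point above (the 55% home-win rate is a made-up, ballpark figure):

```python
import random

random.seed(0)
HOME_WIN_RATE = 0.55  # assumed; real leagues vary
matches = [random.random() < HOME_WIN_RATE for _ in range(100_000)]

# Constant baseline: always predict "home team wins".
baseline_acc = sum(matches) / len(matches)
# Random guessing: a fair coin flip per match.
guesses = [random.random() < 0.5 for _ in matches]
random_acc = sum(g == m for g, m in zip(guesses, matches)) / len(matches)

print(f"always-home baseline: {baseline_acc:.3f}")  # ~0.55
print(f"coin-flip guessing:   {random_acc:.3f}")    # ~0.50
# The edge is real only in aggregate; it says nothing useful
# about any single match.
```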

2 Likes

Interesting point.

Taking the possible consequences for citizens into account, the predictions should actually be much better than just ‘good’. If 2% of the outcomes are wrong, that already affects a large number of people.

This raises the question of whether government decisions about fraud can ever be left to algorithms alone. Maybe human intervention should be mandatory.
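
A quick back-of-envelope sketch of that scale effect, with a purely hypothetical population size:

```python
population = 1_000_000  # hypothetical number of screened citizens
error_rate = 0.02       # the 2% mentioned above
print(f"wrong outcomes: {int(population * error_rate):,}")
# -> 20,000 people, each facing an unjustified fraud flag
# or a wrongly missed case.
```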

3 Likes

One of SyRI’s stated goals is to find “tax fraud.” Tax fraud, as a category, involves every stratum of income and society. But did the government only train SyRI on more marginal neighborhoods?

3 Likes

I like this argument. I guess the proponents of systems like SyRI would reply that existing non-algorithmic systems are also bad at predicting fraud (worse, in fact). So you would need to put a premium on fairness over effectiveness in order to defend the position you took in the SyRI case.

Exactly. It’s not just about the rigor of the system’s predictions, but also the immediate and far-reaching consequences of its error rates: a false positive means that people who need welfare benefits don’t receive them.

Also, it’s hard to hypothesize the likelihood of welfare fraud in rich neighborhoods lol. To be fair, SyRI’s scope covered both welfare and tax fraud, but I maintain that the context/scope/impact of tax fraud in poor neighborhoods is incomparable to that in affluent ones.
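
This is also where the base-rate problem bites: if actual fraud is rare, even a fairly accurate detector flags mostly innocent people. A sketch with invented numbers:

```python
population = 100_000        # hypothetical screened group
fraud_rate = 0.01           # assumed: 1% actually commit fraud
sensitivity = 0.95          # assumed: detector catches 95% of real fraud
false_positive_rate = 0.02  # assumed: flags 2% of honest people

frauds = population * fraud_rate
honest = population - frauds
true_pos = frauds * sensitivity            # 950 correctly flagged
false_pos = honest * false_positive_rate   # 1,980 wrongly flagged
precision = true_pos / (true_pos + false_pos)

print(f"wrongly flagged: {false_pos:.0f} of {true_pos + false_pos:.0f} flagged")
print(f"precision: {precision:.2f}")  # ~0.32: most flagged people are innocent
```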

2 Likes

Good point. One could ask why SyRI was not used in the richer neighbourhoods. That might actually result in much more fraud being detected. Of course we didn’t make this argument in court. Rich people have a right to privacy too :slight_smile:

2 Likes

True, but ‘existing non-algorithmic systems’ basically depend on human decisions. This makes it possible to talk to a human being. Algorithms cannot yet explain their reasoning.

2 Likes

Ok, but if you catch one rich person for tax evasion, that might be equal to ten cases of social security fraud.

2 Likes

That’s what I mean!

Ha. The expected value of catching an offender correlates positively with the income of the people being searched. I guess if you train an AI on the sums claimed back, rather than on the number of offenses, it will zero in on the rich folks. :smiley:
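
A toy version of that scoring rule, with invented probabilities and amounts:

```python
# Rank cases by expected recovered sum (probability x amount)
# instead of by probability of an offense alone. Figures invented.
cases = [
    {"who": "welfare case", "p_fraud": 0.10, "avg_sum": 2_000},
    {"who": "tax case",     "p_fraud": 0.02, "avg_sum": 250_000},
]
for c in cases:
    c["expected_recovery"] = c["p_fraud"] * c["avg_sum"]

for c in sorted(cases, key=lambda c: c["expected_recovery"], reverse=True):
    print(f'{c["who"]}: expected recovery = {c["expected_recovery"]:,.0f}')
# tax case: 5,000 vs welfare case: 200 -- weighting by sums, not
# counts, points the search toward the larger (richer) offenders.
```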

I need to go, but thanks so much Anton for your time. This was very, very interesting, I’ll think about it more.

As I recall, Article 8 also gives some protection against profiling in the form of ‘fishing expeditions’. It was not part of your argument, but in the future, if and when they reinstate a more transparent version of SyRI, it could perhaps be a valid argument.

Thanks everybody! I need to go as well. See you later…

1 Like