We don’t know that. The court actually asked the State to clarify how the decision about which neighbourhoods to target is made, but – in my opinion – the State failed to provide a clear explanation.
Yes, that is exactly the ‘echo chamber’ effect I was referring to. Choosing citizens randomly would solve that problem, but I guess that would not fit the government’s agenda.
The software used by Uber has similarities with SyRI but is much more intrusive because of the amount of data and the possible consequences for drivers.
On behalf of the drivers, I have filed two requests. The first request regards Uber’s duty of transparency. The main goal is to empower the drivers by giving them access to their data. This allows them – among other things – to unite and build collective bargaining power. In doing so, the drivers are supported by the Platform Info Exchange.
The second request concerns the practice of dismissing drivers automatically for suspicion of fraud.
The drivers claim that Uber infringes upon their right not to be subject to automated decisions, unless specific conditions are met.
You mentioned that you challenged SyRI on grounds of privacy violations. Can you explain how exactly SyRI violated privacy (and not just equality)?
Bias may be mitigated with some degree of randomised application, but I’m curious what you @antonekker think about whether individuals’ likelihood of committing welfare fraud can be predicted from data collected in government records. And even if those predictions are “good enough,” is that enough justification for pooling personal data from government agencies?
Congratulations on your recent success, @antonekker. I was wondering if you have any thoughts on what could be proposed in terms of policy at the European level to protect EU citizens against similar infringements of privacy rights by their own governments.
Hi Kate!
Hard to say. Within government, there seems to be a widespread belief that societal problems can be solved through data analysis. Before we went to court, the coalition that started the case initiated talks with the Ministry of Social Security. The people we talked to didn’t seem to understand the privacy and other human rights issues. In the past few years, following a number of scandals, public opinion and general awareness have shifted, but I think there is still a strong undercurrent of ‘law and order’ arguments. After the SyRI judgment, members of parliament seem to be more aware of the legal problems. So, bringing cases like this might actually help.
Second that. It is a very interesting, and tricky, question.
Hi Kate. There were several challenges. First, it took a lot of effort for the plaintiffs (the privacy organizations) to unite and form a coalition. Fortunately, the Digital Freedom Fund (DFF) provided financial support.
Before going to trial, we attempted to gain access to the algorithm and the ‘indicators’ that were used by the government via a Freedom of Information Request (‘FOIA’). This took a lot of time, mainly because the State didn’t comply with legal deadlines and, in the end, provided little relevant information.
I think the court struggled to understand the technical aspects of SyRI, partly because the Dutch state didn’t provide enough detailed information. However, this probably worked in our favor. The court seemed to be well aware of the societal relevance of the case. In this respect, the amicus brief that was drafted by the UN Special Rapporteur on poverty and human rights certainly helped.
Thanks Anton
For those of you who are interested, you can read the Special Rapporteur’s amicus curiae here.
Hi Leonie,
The SyRI legislation infringed privacy rights in several ways. First, there was no clear description of goals and data categories. Also, the Dutch State did not comply with its duty of transparency.
I guess it depends on the data and on the indicators used. To make good predictions you need reliable data and strong indicators. And that is a big problem if you use data from government databases, because that data is often not reliable. Garbage in = garbage out.
Out of curiosity, what were the core arguments the Dutch government offered for deploying these algorithms? What, in your view, is the best argument the Dutch government made in this context?
The State argued that SyRI was necessary for preventing and combating fraud in the interest of economic welfare. The Court agreed that this is an important interest of society. However, the State failed to demonstrate that SyRI was a proportionate means to reach this goal.
That depends on what we mean by “a good prediction”, @antonekker. If we are happy with being “good” (outperforming randomness) at the aggregate level, we might need very little data. For example, in predicting the outcome of football matches, the simplest model “the home team always wins” does (a little) better than random. Hal Varian (Google’s chief economist) a few years ago went on record saying “if you have 99% correlation, who cares about causation”, or something like that. But this extra performance only applies to predicting a whole lot of football matches (the population), while being useless if you are trying to predict one match in particular.
I think @katejsim worries that prejudices outperform randomness. If you don’t care about fairness and the rights of the individual, you could indeed predict that poorer neighborhoods would have more social welfare fraud than rich ones. But this would come at the expense of treating poorer individuals fairly, and, unlike with football matches, it would end up reinforcing the conditions that force those people to apply for welfare in the first place.
Interesting point.
Taking the possible consequences for citizens into account, the predictions should actually be much better than just ‘good’. If 2% of the outcomes are wrong, this already affects a large number of people.
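A rough back-of-the-envelope calculation shows why even a small error rate matters, especially when actual fraud is rare. All numbers below are hypothetical assumptions for illustration, not figures from the SyRI case:

```python
# Hypothetical base-rate illustration; none of these numbers come
# from the SyRI case or any official source.
screened = 1_000_000        # assume 1 million people are screened
fraud_rate = 0.01           # assume 1% actually commit welfare fraud
sensitivity = 0.90          # assume the system flags 90% of real fraud
false_positive_rate = 0.02  # assume 2% of honest people are wrongly flagged

true_positives = screened * fraud_rate * sensitivity               # 9,000
false_positives = screened * (1 - fraud_rate) * false_positive_rate  # 19,800

# Of everyone flagged, what share is actually committing fraud?
precision = true_positives / (true_positives + false_positives)

print(f"Honest people flagged in error: {false_positives:,.0f}")
print(f"Share of flags that are correct: {precision:.0%}")
```

Under these assumed numbers, most of the people flagged would be innocent: the 2% false positive rate applies to the large honest majority, so wrongly flagged citizens outnumber correctly flagged ones by more than two to one.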
This raises the question of whether decisions by government about fraud can ever be left to algorithms alone. Maybe human intervention should be mandatory.
One of SyRI’s stated goals is to find “tax fraud.” Tax fraud, as a category, involves every stratum of income and society. But did the government only train SyRI on more marginal neighborhoods?
I like this argument. I guess the proponents of systems like SyRI would reply that existing non-algorithmic systems are also bad at predicting fraud (worse, in fact). So, you would need to put a premium on fairness over effectiveness in order to defend the position you had in the SyRI case.
Exactly. It’s not just about the rigor of the predictiveness of the system, but also the immediate and far-reaching consequences of error rates: a false positive means that people who need welfare benefits don’t receive them.
Also, hard to hypothesize likelihood of welfare fraud in rich neighborhoods lol. To be fair, SyRI’s scope was for both welfare and tax fraud, but I maintain that the context/scope/impact of tax fraud in poor neighborhoods are incomparable to affluent ones.
Good point. One could ask why SyRI was not used in the richer neighbourhoods. That might actually result in much more fraud being detected. Of course we didn’t make this argument in court. Rich people have a right to privacy too.