Were you at the AI & Justice workshop on 19/11? Share your notes and thoughts here!

Hi everyone,

Thank you for a meaningful exchange - it was a pleasure.

Three hours of intense conversation later, it feels like we are just beginning to scratch the surface. So we are going to do three things to help move this discussion forward:

  1. Collect everyone’s notes from the workshop here on the platform so everyone can access them.

  2. Use them to produce a report following the same methodology we used to produce this one (on a different topic): 263421647-Can-networked-communities-steward-public-assets-at-scale.pdf (37.1 MB)

  3. Organise a second event in Brussels soon, maybe even in December.

But we cannot do it without your support!

You can help by logging into your edgeryders.eu account and then posting one comment below answering this question:

What did you learn during the event, and what would you like to further explore with others?


@nvankemenade @Hans @rulassie @paultheyskens @simeon_de_brouwer_86 @mark_19 @irene_palomino_73 @olivier_29 @jt_74 @joris_29 @erin_anzelmo_19 @gennaro_55 @maria_baltag_6 @welf_21 @ekaterina_56 @J_Noga @sedyst @alberto @ilaria @inge @marina @Mirko_M @katejsim @markomanka @matthias @RobvanKranenburg @rtb @sarahs @dleufer @Neel


Thanks so much @nadia for organizing such an important event on such a crucial topic. For me the most important takeaways for now (as you said, we’ve just started scratching the surface of something so detrimental) are the following:

  1. we need to increase tech literacy about the systems in place: if we (individuals, governments, policymakers) don’t know how the black boxes work, how do we know that the results they give us are actually what they promised? How can we question outcomes if we aren’t informed about the manner in which the systems arrive at conclusions?
  2. we also need to understand the tipping point into action: when was the last time you really got upset and demanded change? The problem lies, again, in the fact that we often don’t even know about the injustices (from increased traffic to the inability to speak to people instead of machines). This relates to the first point: we can’t be angry to the point of taking action (protests, etc.) if we don’t even realize or understand where the injustice is coming from. Side note: we may even want to take it further: should we use certain systems if we don’t know how they work?
  3. If actors say they need certain systems, we need to ask them why. Why not do it by hand? The idea is that these systems will solve everything, but they are just accelerating broken systems.

Looking forward to hearing what the other people at the event found most interesting! And looking forward to seeing everyone again :slight_smile:


We were in different groups, @inge, but we arrived at the same conclusion:

In general, the group felt the need to spend quite some time getting a grip on a shared idea of what AI actually is, and how it works, before it could discuss its regulation. Occasionally, the discussion veered into being quite technical, and not everybody could follow it all the time, even though we had an over-representation of computer scientists and digital activists. This is in itself a result: if this crowd struggles to get it, democratic participation is going to be pretty difficult for the general public.

The whole thing is here:



This is a short text about an evening in Brussels on AI hosted and organized by Edgeryders. I am not going to focus on what was said but on the process of participation. For me this begins with the invitation: “We would love to invite you as a speaker. If you think it would make sense we are happy to also invite for example one of the coauthors you mentioned in the post on the topic (Workshop on Inequalities in the age of AI, what they are, how they work and what we can do about them - 19/11 - Brussels - #40 by anon82932460).” This is a clever invitation. It flatters the invitee a bit: I am invited as a speaker, and it is clear that some homework was done. I am being invited because of some content I co-wrote, so it also attempts to bring in significant others, as well as pointing me to the fact that I am invited not because of work I wrote alone but because of work I co-wrote, in participation. Very hard to resist. So I say yes and plan my trip to The Netherlands around it. Would I have done this if I had not been invited as a speaker? I don’t think so.

So when it afterwards turns out that there are no speakers at all, ‘just’ participants, I have to realize and reflect on the fact that I probably would have missed out on this evening if I had not been invited to speak. Being used to being invited as a speaker, I think back to all the meetings I have not joined because I was not invited as one, and I am left having to reflect on my own behavior and patterns.

As it turned out, everybody was to be a speaker. Two groups were formed. In the middle of each, five chairs were placed, with a larger circle around them. The organizers had invited four people in each group to sort of kick off the discussion. The fifth chair was for a participant. Interestingly, as a personal reflection, I was not even invited to ‘speak’ as one of the first four, which I found again quite well done and a good lesson in ego management. I actually wonder now whether everyone was invited as a speaker. Two rounds of an hour. The format sparks people bringing ideas and content, not so much hard personal opinions. I decide to listen. Recently I bring pen and paper to conferences and meetings like this, so I listen better. Otherwise mails keep popping up or I start to tweet. I hear a lot of good things. After the break, in the second round, @alberto taps me on the knee and tells me he is going to call me in. I whisper ‘No’ at first, as I am happy ‘just’ listening, but OK, I think, I am here and I have a story, so why not. I go to the chair in the middle and after a while I slip out again. Discussion. Nice. Then I make a quick remark from my outer chair. Two people grumble a bit. After all, the rule is: if you want to talk, you have to go to the middle chair. I am thinking I am not going to that chair just to say this one line. So I do it again. This time there is an angry reaction and a muffled shout: Go to the middle! I am not doing that, but I realize another very significant part of the process. A part that I always knew existed and even plan for, but now the obviousness is palpable, as is the tension.

I realize that for me the rule of speaking in the middle chair is quite arbitrary and has actually been conjured up with all of us in plain view. Yet even if that ‘rule’, that ‘format’, is arbitrary and exists only as long as the ‘group’ exists, some people – some intelligence with particular characteristics – take it so seriously that it has become the next normal and must be adhered to. That is a very logical and normal position. In my view, I am not necessarily right to immediately subvert an attempt at arbitrary format-making. But I am entitled to it, and I do it. And the right of another not to do that is just as logically sound and morally just. So I am living, very embodied, in this ‘process’, as there is real tension (for a while), a very important participatory design lesson: ‘design’ the format, and design the unraveling of the format, while taking into account the opposition against that, and make this realization part of the process for all.

This evening brought up very significant and important questions for my own practice, questions very relevant to the co-creation, participation and community management that I do daily. As such, the process reflects the content of what was discussed: how can we bring about a greater sense of co-design by citizens of the new technologies that seem to emerge quite distinct from other human needs and longings?


@RobvanKranenburg I took the liberty of moving your post onto this topic, which collects ex-post impressions from the event. If you don’t like this, just let me know and I will move it right back.

Here are a few notes from my side about content I found notable during the workshop.

Workshop notes

  • Value-based software architecture. Scuttlebutt deprioritized making their software available for multiple devices based on their values of “this being software that is made for people who only have one device”. That’s a major architectural decision, which might not be possible to adjust later without rewriting the whole software. So they really poured their values into their software. In comparison, politics is not yet good at getting its values implemented into technological developments. So we need a better process that implements our values into our technology. It’s about a process strong in accountability.

  • What is human? If an actor (any party / organization) says they are human centric, they often do not even define what “human centric” means in their case. For example, “human” in “European human centric Internet” is left undefined. This generates conflict potential, as it stays so general.

  • The economics behind AI. There’s an interesting study on “the cost of developing universal chips after the end of Moore’s law”. It means that now that we’re at the end of Moore’s law, the money sunk into developing faster general-purpose chips is unlikely to come back. So instead the industry took a risk by developing specialized chips. There are two main types of such chips: AI and blockchain. That’s the only reason why AI became a hype and we’re talking about it: it is pushed on us because industry needed a new profitable outlet for investments, and high levels of capital investment are already backing AI. “If we are not buying it, it’s going to go down. If we are not buying it, we are going to go down.” We are still in the process of making the choice whether we (also: the Commission) want to invest money into AI.

  • Good and bad AI architecture. Let’s differentiate between “AI for research and solutions” and “AI for the production of services”. The first type is benign research aimed at solving intricate problems, for example done by universities. The second type is commercial SaaS software that scoops up data out of the profit interest of the company. Maybe Google Maps will in the future adapt your routing so that you see adverts from parties that paid Google for an audience for those adverts.

    This means the problem is about the economics of who runs the data centers: Amazon, Microsoft and Google built “clouds”, data centers for people to run their applications on. Due to economies of scale, they provide the cheapest solution, but they are also able to monitor and keep the data going through them. This is an undemocratic process for plain economic reasons, and it’s a hard problem to crack.

  • The structure defines the function. The type of governance structure defines how a new technology gets used. So it may be that we have allowed the wrong governance structures to emerge, which will lead to the wrong outcomes of AI technology. In addition, Google and Facebook have been advertising companies but are not anymore – trying to rein them in as advertisers through regulation is already no longer applicable; instead we should rein in their new structure as “AI first” companies.

  • Is AI anti-human by definition? What AI (as rebranded Bayesian statistics) does is to put the individual differences between human beings (everyone’s “spark of the divine”) into the epsilon, the error term at the end of the equation. That makes AI non-human-centric by design. Because the definition of human for an individual is “that which cannot be predicted by AI, which is not part of the ‘normal’”.

  • On tech interfering with relationships. Intermediating the patient-doctor relationship with data collecting and analyzing systems has degraded the value of that relationship. Because medicine is not about diagnosis, but prognosis: improving a patient’s future condition, and that is a negotiation with the individual, and that individual might be very much non-average, refusing certain treatments etc., and should have and keep a right to that individualness. That still allows for tech systems that could benefit relationships – it’s just that the tech systems we have currently in medicine do not do that.
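The “epsilon” point above can be made concrete with a toy sketch. This is my own construction, not something shown at the workshop, and all numbers are hypothetical: model everyone as the group average, and whatever is individual about a person ends up in the residual that the model writes off as error.

```python
# Toy illustration (hypothetical data): a model of "the average person"
# predicts the group mean for everyone; everything individual about a
# person lands in epsilon, the residual the model treats as noise.

measurements = [5.0, 5.1, 4.9, 5.0, 9.0]  # one very "non-average" individual
mean = sum(measurements) / len(measurements)  # the model's single prediction

# Epsilon per person: what the average-based model cannot explain.
epsilons = [m - mean for m in measurements]
most_individual = max(epsilons, key=abs)  # the least "normal" person

print(f"model predicts {mean:.1f} for everyone")
print(f"largest epsilon (the 'individual'): {most_individual:.1f}")
# → model predicts 5.8 for everyone
# → largest epsilon (the 'individual'): 3.2
```

The more a person deviates from the “normal”, the larger their epsilon, which is exactly the part of them the model discards.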

Personal reflections

I want to clearly distinguish between systems of exploitation and the technology itself. Any system of exploitation, including capitalism, will find and use any new technological option in exploitative ways. The history of digital innovation, and media innovation more generally, is quite rich in that: before AI for behavior prediction there was targeted advertising, tracking, mass surveillance programs by governments, weaponized drones, propaganda micro-targeting, mass media use for state propaganda, etc.

That’s purely a social problem, not a technological one. Adding one more technology cannot make it much worse … nukes are already around, so what technology could offer significantly worse potential outcomes? So to deal with it, you don’t have to make legislation for AI, but against capitalism.

AI in itself is a nice tool in the toolkit, and easily allows for beneficial use. Personal example: my open source coffee sorter project uses deep-learning-based image classifiers to relieve small-scale farmers from sorting their coffee by hand.
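This is not the sorter project’s actual code, but to illustrate the kind of classify-then-route decision such a tool automates, here is a toy stand-in that replaces the deep learning classifier with a nearest-centroid rule on a single hypothetical brightness feature (all names and numbers are mine):

```python
# Toy sketch, NOT the real coffee sorter: a nearest-centroid classifier on
# one made-up feature (bean brightness), plus the sorting loop around it.
# In the real project this role is played by a deep learning image classifier.

GOOD_MEAN = 0.8  # assumed average brightness of good beans (hypothetical)
BAD_MEAN = 0.3   # assumed average brightness of defective beans (hypothetical)

def classify(brightness: float) -> str:
    """Label a bean by whichever class centroid its feature is closer to."""
    if abs(brightness - GOOD_MEAN) <= abs(brightness - BAD_MEAN):
        return "good"
    return "defect"

def sort_beans(brightnesses: list[float]) -> dict[str, list[float]]:
    """Route each bean into a bin according to its predicted label."""
    bins: dict[str, list[float]] = {"good": [], "defect": []}
    for b in brightnesses:
        bins[classify(b)].append(b)
    return bins

print(sort_beans([0.9, 0.25, 0.7, 0.4]))
# → {'good': [0.9, 0.7], 'defect': [0.25, 0.4]}
```

The point of the example is the structure, not the model: the classifier is a replaceable component, and the surrounding loop is what turns a prediction into a labor-saving action.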


Thank you again for this amazing workshop last week. It was so amazing in my eyes. And I commit to seeing my friends Alberto & Noemi at the Reef more often :slight_smile:

Here are my notes; sorry if they seem not so structured
(btw zoliver@inmano = olivier.dierickx@inmano)

AI used to enhance the quality of life, not as a tool to reduce my capacity for life
Reduce the risks of an illness; don’t target me as part of a group to raise my insurance

How to consider Just or Unjust….? What is good or bad?

AI and rape cases: categorizing people impoverishes their true experience… :o

Participatory art/culture – nice to break the boxes and free the space, where all the people are free to ‘fit in’ because it’s larger than the boxes that society designs for us

Unbias yourself, uncategorize yourself on Facebook, so it’s harder for the algorithm to classify me

Collective intelligence

What do I want to make known, or to share, to the algorithm? Can I choose, based on my behavior on Facebook?

Add noise to the browser history – Add-on
Norms from the West – AMERICA – are transferred all around the world. Instagram skin colors
Collective intervention: what about the people who have to review, at dirt-cheap rates, the content to ban?

Ubdi.com – Universal Income – mydata.org

Decode Barcelona

MyData Operator > the firewall for data for consumers.

Algorithms are feature-based. But there are human characteristics that cannot be fitted into features

Second part:

Human centric

Use the anger to organize and respond in an intelligent way

Optimisation algorithms, blocking streets for Ways cars (Ways? Waymo?)

Facebook optimizes not on your interests but on the money you bring with ads

How to upset an AI? :o

Disrupt the distributed system


Are we really? @markomanka was very much doubting that, and remarked that, in general, this account of the AI hype does not seem correct to him.