Notes from the AI & Justice fishbowls. Next: edit into an article, and get consent from the people mentioned to post on their behalf.

Summary Notes

Fishbowl 1: Featuring Alberto Cottica, Marco Manca, Seda Gurses (with input from Rob), facilitated by Justin Nogarede

Verbatim notes from Ilaria and Alberto

The EU’s new Commission is looking to approve a new AI directive, and of course democratic participation is important to ensure a good law. What do you feel is necessary from the EU to enable participation?

Alberto: I’m a bit skeptical that legislation is the right instrument for this kind of thing. It’s a blunt instrument, because as of now there is no good understanding of the process leading from ethical values (fairly easy to encode in a directive) to the actual technical choices of the scientists and engineers building AI systems. For example, we like the New Zealand group developing Scuttlebutt: when users asked for cross-device accounts, the core developer replied “I’m not doing that, because we want to serve underprivileged people, and these people do not have multiple devices”. What he was doing was justifying his technical choices in terms of his values. As a socio-technical culture, we are not good at that.

Legislation is not an efficient tool?

Not yet.

Marco: A human-centered internet is not defined; that’s why you can’t find a solution. “Human” is literally anyone.

Alberto: It’s more about the process and accountability.

If you look at insurance tech, they’re good at …

Seda: Why are we pushing for AI? The end of Moore’s law. The money sunk in is unlikely to come back. Industry took a risk on specialised chips, AI- and blockchain-specific; now they need to create demand for those chips. We should be aware of the vendor-driven nature of this stuff, and think about how to regulate these companies.

Part: there are a lot of people involved in AI now; we are past the moment of choice, because a lot of people are already working on it. Could you give us a definition of what human-centered AI is? What kind of tech could bring a real improvement in the social field?

Part: All companies are now working on AI. Some believe in it and go for it no matter what; others are more aware of the dark sides and raise other issues. In the medical field AI is useful for diagnosing common diseases. When you get down to it, AI is just statistical models.

Moderator: could you better define what “investment in AI” means? What should the EU do practically?

Seda: The business model of Big Tech now is based on keeping the code, providing cloud services and collecting data to profile and ultimately influence users. They cross-subsidize their cloud services with their marketing revenue streams. If you want to fight that, you should have public cloud infrastructure, which is a trillion-EUR affair. That’s your investment in human-centric AI infrastructure.

Companies like Google use AI to optimise the profiling of their customers. The difference is whether it’s a data-centered AI or not. … ? Where do we bring the democratic process into new models?

Part: informed consent is tricky for a company to manage.

Part: In the health care sector, public data have simply been entrusted to large companies: the NHS in the UK chose DeepMind, the SSN in Italy chose IBM Watson.

Part: The People’s Republic of Walmart: how does data governance enable them to use …? The structure comes from biology, the human brain: do we actually need AI? I think so. Human-centric should mean human-empowering, not exploiting.
Big tech should be redefined as AI companies, and no longer Internet companies, and regulated accordingly. Also, they claim to be “human centric”, because they build sophisticated models of human behavior; but to me they are not, because they do not empower humans. I agree with the point made before about accountability. The GDPR also enshrines it.

Part: Ecological impact of AI: simply training AI models has a huge impact in terms of CO2, so it might not be a win-win situation but a tradeoff. You solve some problems, but at the cost of making the climate worse. “Government is accountable by default” (mh). But people are generally unaware of using AI.

Alberto: climate + competition policy = nobody is allowed to grow too much. What is “empowering”? I suspect that AI (ML applied to big data) is inherently NOT human-centric, because its models (for example a recommendation algo) encode each human into a database, and then model you in terms of who you are similar to: for example, a woman between 35 and 45 who speaks Dutch and watches stand-up comedy on Netflix. Everything not standardizable, everything that makes you you, gets pushed to the error term of the model. That is hardly human-centric, because it leads to optimizing systems in terms of abstract “typical humans” or “target groups” or whatever.
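
To make the “error term” point concrete, here is a minimal sketch (my own illustration in Python/NumPy, with invented toy data, not anything built or cited in the discussion): a least-squares model can only keep what people share through a few standard features; everything idiosyncratic lands in the residuals.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "population": 1,000 people described by three standardized features
# (say: age bracket, language, watches-stand-up-comedy). These are the only
# dimensions the model is allowed to see.
X = rng.normal(size=(1000, 3))

# Each person's behaviour = a shared pattern + an idiosyncratic part that no
# combination of the three features can express.
shared = X @ np.array([0.8, -0.5, 0.3])
idiosyncratic = rng.normal(size=1000)
y = shared + idiosyncratic

# Ordinary least squares keeps only the "typical human" component.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
residuals = y - X @ coef

# Whatever made each person *them* now lives in the error term.
print(f"variance explained by the standard features: {1 - residuals.var() / y.var():.2f}")
print(f"variance pushed into the residual: {residuals.var() / y.var():.2f}")
```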

Marco: I agree with the antitrust principle. What we call AI is machine learning; this is the root of our problems: it’s just classical statistics. You create the illusion that value production has shifted to data generation…. For example, if the doctor establishes a relationship with the patient, the AI should not invade that relationship; but now the doctor has to take notes, and the patient sees less value in the encounter (you look somewhere else and attention is centralised…).

Airports before and after Ryanair: before, the architecture was associated with functions and with your experience of travel, with room for different functions. With low cost, the experience is considered less valuable: the airport is now a shopping mall, and the gate is just temporary information about the space.

Alberto: Are you saying that, in order to have standardization (and therefore data analysis), we have to “flatten” patients on the dimensions that are comparable across different patients?

Marco: medicine is not diagnosis, but prognosis-based. In medicine, nothing has value if I cannot improve your life. A patient can and should negotiate with the doctor where to go from here; he or she even has a right to die. This is a very poor fit for how ML-on-big-data works.
Moderator: Because we expect thousands of data points, we lose what is specific to the human?

Alberto:

Part: New technologies do not work on averages but on very personalized experiences, so it’s not statistics.

Marco: I just read a review of 37,000 studies of AI in medicine. Of these, only about 100 had enough information on training datasets to do a meta-analysis on. Of those, 24 claimed a prospective design (the algorithm had been trained without knowing the real data); of these, zero had actually done prospective design.
… I don’t know of any real medical problem that a prospective AI model can solve.

Seda: We optimize relationships. This is problematic. For example, we have marketing companies working with civil liberties organizations that tell their clients: we can tell you which of your donors care about which issues, so you can optimize for the issues where you get more money.

Optimisation has been the logic so far, but it can still go wrong. Traffic is a perfect example. An optimization system comes at a cost, and if you don’t internalise that cost it causes bigger damage. Apparently innocent choices, such as what to optimize for, turn out to have huge consequences. When you design traffic, do you optimize for driver safety? For pedestrian safety? Or to minimize time spent on the road?
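
A minimal sketch of that choice (hypothetical designs and scores I made up for illustration, in Python): the same weighted-sum optimizer returns a different “optimal” road design depending purely on the weights the designer puts on drivers, pedestrians, and travel time.

```python
# Hypothetical scores for three road designs on three criteria (higher = better).
designs = {
    "wide arterial":   {"driver_safety": 0.9, "pedestrian_safety": 0.3, "travel_time": 0.9},
    "traffic-calmed":  {"driver_safety": 0.6, "pedestrian_safety": 0.9, "travel_time": 0.4},
    "mixed boulevard": {"driver_safety": 0.8, "pedestrian_safety": 0.8, "travel_time": 0.6},
}

def best_design(weights: dict[str, float]) -> str:
    # Plain weighted-sum optimization: the objective function IS the value choice.
    return max(designs, key=lambda d: sum(w * designs[d][c] for c, w in weights.items()))

# Same data, three different value systems, three different "optimal" answers.
print(best_design({"driver_safety": 1.0, "pedestrian_safety": 0.1, "travel_time": 1.0}))  # wide arterial
print(best_design({"driver_safety": 0.1, "pedestrian_safety": 1.0, "travel_time": 0.1}))  # traffic-calmed
print(best_design({"driver_safety": 0.4, "pedestrian_safety": 0.4, "travel_time": 0.4}))  # mixed boulevard
```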

Seda, on feedback: I know the GDPR; it is regulated anyway. There are serious problems with how the GDPR focuses on the personal data of individuals.
We talk about automated decisions, but there is a set of criteria … the optimization is not about the single person, but about the person as part of a certain group for which certain choices are made in general. So, you expect to empower people, but actually you don’t. Paul-Olivier Dehaye is trying to pool data from different workers to see if people are being discriminated against. This is called a “data trust”. There is also the DECODE project, which is doing something they call a “data commons”.
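
As a rough illustration of the data-pooling idea (a hypothetical sketch, not Dehaye’s or DECODE’s actual tooling; all numbers invented): one worker’s record proves nothing, but pooled records make a group-level disparity measurable.

```python
from statistics import mean

# Hypothetical records contributed by gig workers to a data trust.
# Each record: (group, jobs offered per week). One record alone shows nothing.
records = [
    ("group_a", 42), ("group_a", 39), ("group_a", 45), ("group_a", 41),
    ("group_b", 28), ("group_b", 31), ("group_b", 26), ("group_b", 30),
]

def mean_by_group(rows):
    groups: dict[str, list[int]] = {}
    for group, jobs in rows:
        groups.setdefault(group, []).append(jobs)
    return {g: mean(v) for g, v in groups.items()}

averages = mean_by_group(records)
gap = averages["group_a"] - averages["group_b"]
print(averages)  # the pattern only becomes visible once the data is pooled
print(f"average gap of {gap:.1f} jobs/week between groups")
```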

We are now in a mess with the GDPR. Uber, for example, uses it to refuse to give drivers their data, saying this impacts the rights of the riders. This (says Seda) is the consequence of focusing on personal rights when the data are used to optimize over populations.

DATA Trust +

Part: I am in the MyData community. We really like the GDPR per se, but we take issue with how it is enforced. In the US there is a project called Data Transfer that is setting the standards for personal data in a way that will entrench dominant positions. Pragmatically, the EU should set its own standards for this.

Part (Rob): In Belgium there is a company called It’sMe, which does online identity. They are pushing their data onto Azure, making it GDPR- and eIDAS-compliant. That changed my thinking, because it showed me how even these sterling-silver pieces of European thinking end up in the Silicon Valley cloud.
We need to break the number-person relationship: when you are born, you receive a number from the state, and the game’s up. I dream of disposable identities that we set up with the purpose of entering into a relationship, like for example receiving a service. A good analogy is one-time email addresses: you get an email address, sign up to some online service with it, use it ONCE to receive the email confirming you control that address, and then it self-destructs.
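
A minimal sketch of the disposable-identity idea (hypothetical, Python standard library only; nothing here reflects It’sMe’s or eIDAS’s actual mechanics): an identifier minted for one relationship, valid once, then gone, exactly like the one-time email address.

```python
import secrets
import time

class DisposableIdentity:
    """A single-purpose identifier: created for one relationship, then destroyed."""

    def __init__(self, purpose: str, ttl_seconds: int = 300):
        self.purpose = purpose
        self.token = secrets.token_urlsafe(16)   # unlinkable to any state-issued number
        self.expires_at = time.time() + ttl_seconds
        self.used = False

    def redeem(self, purpose: str) -> bool:
        # Valid only once, only for its stated purpose, only before expiry.
        ok = (not self.used) and purpose == self.purpose and time.time() < self.expires_at
        self.used = True  # self-destructs after the first attempt, like a one-time email
        return ok

identity = DisposableIdentity(purpose="confirm-signup")
print(identity.redeem("confirm-signup"))  # True: the relationship is established
print(identity.redeem("confirm-signup"))  # False: the identity no longer exists
```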

Governance instead of government. Here’s my take: the kind of society you are looking at is anarcho-communist. The communist part is that infrastructure is fully centralized, like in the trente glorieuses. Anything that runs on the infrastructure is fully permissionless: that’s the anarchist part. And finally, the data stays with the people.
Justin sees the Internet as an accelerator of neoliberalism; Rob sees it as a still-young technology that is fundamentally liberating.
Identity is a process, so an identity “broken up” across different objects is not a problem.

Can we think of counterproductive regulation on the Internet? Stuff like the copyright directive?

Justin: let me reframe that: how did all these things that used to be public goods end up being someone’s property?
CAPITALISM values, and measures, commercial transactions, so there is a fundamental, systemic incentive to bring things into the commercial sphere.

Rob: the internet changes the balance of power. We still behave as if we were in the pre-internet era, because the internet is a new technology and it takes us longer to change our behaviours.

Part: In the MyData community we see data being commodified, but we think this is a mistake. Data are relational: my birthdate is also the date when my mother had her first child. A mistake in regulation would be to create a kind of freehold right on personal data (again the parallel with IPR holds).

The error is to interpret the GDPR as being about individual private data. What is data? It’s a relational thing.

Structure defines content. It has been premeditated that

Alberto: in Helsinki everybody agreed that you can’t sell your data.
Rob thinks that a radical change is coming, whether we want it or not, and that the ONLY thing that can secure a good outcome is very multi-faceted: innovative procurement, plus hardcoded GDPR, plus disposable identities, plus permissionless-friendly data standards for IoT.

Part: In Europe people value privacy. My friends in Malaysia don’t think twice about forgoing privacy rights to get access to some online service. China is also much more top-down than Europe.

Part: coders and programmers are a new élite who understand how these technologies affect society.

Kenya + AI?

Justin: new EU laws on AI will come out next year. What are the best things that we could do? What would a progressive outcome of this process look like?

Part: teach data skills and data literacy, early in life?

Part: could we maybe stand in the middle, where data can be treated both as an economic resource and as a human right?

Part: in NYC there is a situation with an algorithm that makes decisions, but it is proprietary, so no one knows exactly how it makes the decisions it does. A good idea is to mandate auditability of algorithms. AlgorithmWatch is an NGO that does this.

Part: is it possible to audit models? It is, though very expensive. So why does it not get done? Because the algos are private property.
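
As one illustration of what a black-box audit check can look like (a hypothetical disparate-impact sketch with invented log data; not AlgorithmWatch’s actual methodology): compare the model’s positive-outcome rates across groups without ever opening the proprietary code.

```python
def disparate_impact_ratio(decisions: list[tuple[str, bool]]) -> float:
    """Black-box audit: ratio of positive-outcome rates between two groups.
    A common rule of thumb flags ratios below 0.8 (the 'four-fifths rule')."""
    rates = {}
    for group in ("group_a", "group_b"):
        outcomes = [ok for g, ok in decisions if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return min(rates.values()) / max(rates.values())

# Hypothetical logged decisions from a proprietary model: (group, approved?).
log = [("group_a", True)] * 80 + [("group_a", False)] * 20 \
    + [("group_b", True)] * 50 + [("group_b", False)] * 50

ratio = disparate_impact_ratio(log)
print(f"disparate impact ratio: {ratio:.2f}")  # 0.62 here, below the 0.8 threshold
```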

Fishbowl 2: Featuring Fabrizio Barca, Kate Sim, Hugi Asgeirsson, Liliana Carillo - facilitated by Nadia E.N

Notes from Fabrizio Barca

Some ideas from good exchanges in a good evening – November 11, 2019

(Fabrizio Barca)

In order to move ahead and make the most of all the ideas and human resources working on AI these days, we need both to understand where potential mobilization is taking place or might take place, and to have a conceptual framework to move ahead.

  • Since AI can produce either bad effects (harm) or good effects … the appropriate conceptual framework should offer a definition of bad/good so as to

  • assess current uses of AI,

  • mobilize people both “against” bad uses and “for” good uses

  • design, experiment and revise alternative “good” uses

  • Sen’s concept of Substantial Freedom (The Idea of Justice) is a choice: the capability approach, where we consider “good” all moves/actions that improve the capacity/capability of every person to do what he or she considers valuable (not futile capacities, but ones central to one’s life: dignity, learning, health, freedom to move, human relations, …)

  • This approach does not encapsulate what is “bad” or “harmful” in a rigid box, but calls for a participatory/societal assessment of how the general principle should be implemented, an assessment that can be continuously updated through public debate

  • Not so new: it was used by lawyers in the corporate assessment of “just”/“unjust” fiduciary duties and of the business judgement rule in the US

  • Which reminds us that the participatory process/platforms can be of different kinds:

  • In the judiciary

  • In working councils

  • In town councils a la Barcelona

  • Mixing dialogical/analogical debate

Which are the potential ACTORS of mobilization?

  • Organised labour: automation, bad jobs, gig economy, low wages
  • Women: AI machismo
  • Kids: sexual abuse, pornography
  • Citizens: mobility
  • Citizens: health … a dimension of life where people are becoming aware of bad uses (insurance, use of DNA)

Therefore, one should move forward according to the following sequence:

  • Identify, country by country or even place by place, where people’s concern about bad uses (or about forgone good uses) is high and is being organised: i.e. where there is a potential demand for “competence support”, at both the conceptual and the technological level
  • Concentrate on these contexts and provide them with the conceptual framework
  • Turn the “pars destruens” into a “pars construens”
  • Build at EU level a network that allows horizontal comparison, on the same issues, of threats, actions and results
  • Ensure at EU level the availability of a competence centre that deals with (not necessarily solves, but at least identifies and tackles) the meta-obstacles preventing the implementation, or curtailing the survival, of “good” uses.

Complementary activities that emerged from debate:

  • Following the “substantial freedom” framework, a code of behaviour should be established and monitored, by which all the categories playing a role in developing algorithms - scientists developing algorithms, politicians supporting/calling for their use, public and private administrators using them - should ask themselves: what is the effect on the capabilities of the people affected by the algorithms?
  • Analogic/dialogic …. online/offline: experimenting with platforms combining the two is a priority. In particular, it should be standard practice for public administrations and politicians to create dialogic platforms where the above code of behaviour is checked.
  • Education/awareness-raising on the implementation of the above conceptual framework, in order to identify good/bad uses, is a priority. It should first of all engage kids from a very early age: a new saga of cautionary tales … would help.

Verbatim notes from Inge and Marina

We all started in the same situation that we are trying to control

AI may be a fuzzy concept

Fabrizio Barca:

Public administration, research; now working with public administrations

If we want to shape common sense five or ten years from now, what will be considered relevant?

Technology leads to bad or good results: Principle of social justice

Amartya Sen’s principle of substantial freedom, a sustainable substantial freedom. We need to have capacity/capability to live the life we want

As a worker, I want AI to help me avoid discrimination in subjective interviews, but not to be used to discriminate against me because of the previous behavior of “my” group

My freedom should be enhanced

Putting on the table a simple methodology that allows us to understand the just and the unjust

This kind of principle is not one we take off the shelf; it’s a methodology, an open-ended principle: it can be filled with content, case by case, only through participatory mechanisms.

Different subjective values in different contexts can lead to conflict; working case by case through participatory processes is the solution, not regulation alone

We slowly develop what we consider to be just

Nadia E.: one entry point is principles; we need general principles about what we consider good or bad. How they will look or work in practice is a big open question. Kate Sim has been looking at the tools - what is your experience?

Kate Sim: I look at the automation and documentation of violence reporting: do universities offer a support system through a trauma-informed interface? Your claim goes through it, is run through all the other cases, and is then matched against all the other cases…

I have problems explaining the problems these systems have. What these systems are doing is trying to categorize our experience of misconduct into binary options.

You have to have a clear definition of rape, otherwise it doesn’t fit in the system.

People are shoe-horned into particular categories, but this doesn’t speak to the richness of the experience you have had.

With criminal justice we see how people are categorized. Wider categories of human improvement are not classified.

This is where the question of civil society is very important: it’s difficult to understand how AI makes decisions and categorizes us, and we can’t anticipate it. We don’t even know where the data comes from or who it is being sold to.

People don’t have the tech literacy to question this. Should citizens have this knowledge?

What harm is and what freedom can be?

Nadia: the conversation started around the time of the Swedish elections; we saw the clear threat from microtargeting, external influences… We started having conversations with political parties; that was a disappointment. They wanted to use this for their campaigns.

Drew from the experience of mobilizing people. Wanted to dig into your experiences…

Hugi: I’m more interested in how we can mobilize people once we know what is wrong; right now we are still asking what the thing is that we are mobilizing about.

A lot of the time people are faced with systematic categorization or binary boxing-in. For example: employment models aren’t working; you can’t break out of them alone, and you have to work with others on new ideas to make things work.

You can only break through these models and narratives by working together and creating new narratives.

When we know what we want to do and where to go, we can create a new future, perhaps something completely different from what the AI is predicting, and thus create a space that is more free.

Participatory art: art that throws away the separation between the artwork and the spectator. Because of consumerist society… the point is not to create “great art” but a great transformative experience together with other people. Take the example of the Borderland: people come into this world, the mindset changes, lots of traditional concepts start to break. Creating a completely new reality. Tackling the imposed machine. There is something there, but we don’t know what yet.

Liana: Individual needs vs individual capabilities, and collective needs and capacities. One person may be fooled by a system, like Cambridge Analytica.

Elections, Cambridge Analytica, Facebook

Algorithms like Facebook’s pull us into a certain bias, gathering data and classifying a person based on a lot of data around them. If I move from one reality to another, the categorization of me is probably not the same, even within a given culture. Behaviour and constant interaction are influenced by culture.

If I am less biased myself, the algorithms are less able to classify me.

What is something you are struggling with? What is out there to make our own biases less visible to the algorithms?

How conscious am I of my own bias?

Kate: it is really easy to respond to this: oh, maybe if I only share part, then that could be a meaningful intervention. Sometimes it can be. Mozilla, for example, has add-ons. We can do that individually.

But maybe the scope of our change should go beyond individual actions.

On content moderation, see the books by Mary Gray or Sarah Roberts, tracking how tech companies move their content moderation work to India, for example: the workers are paid at a low rate and required to impose Western standards on what behaviour on social media should look like. The norms are being translated by the economic models of the platforms.

The scale is so much bigger than us.

What are some of the lessons from what civil society has done in the past, looking at the ways we can translate this?

Hugi: it’s even worse: it’s American. But the EU is not a weak player; nothing requires the American economy to be stronger than the EU economy.

Participant A: concept of social justice & mobilization

Technological transition is making society more anonymous. It pushes individuals to be more lonely and therefore more vulnerable, and it discriminates against specific groups. So we are more vulnerable. But I am also sure there are examples of digitization helping mobilization and doing good, as an empowering tool for citizens to matter, like with Edgeryders. It can generate communities.

Paul:

Belgian citizen, working for My Data Control (?) based in Finland, looking at the NGI technology

People become the owners of and agents over their own data. The Great Hack. Fair use, data economy. Ethical data operations on a global scale, with a decentralized system, a fair European data economy; we’re currently colonized: Europe only has a 3% market share.

In the data economy there are 4 models. Lanlard (?): your money is worth up to 20,000 euros (??). There are three other models, like in Barcelona, which NGI calls DECODE: they consider data to be a common good of a city or community. There are 4 different types of these data economy models.

Hugi: something as vague as “my data” became very big; most people understand there is something about data we should care about. But the data is only needed once, and what is learned from it is then governed by something different: the algorithm becomes irrevocable. How do you challenge that? Even if you own your data, people will sell it.

Paul: Mymobility, own your data. People are not really aware that they can manage their own data. Energy and transportation have the least sensitive data.

Inform civil society and create new business models

Nadia: how can we get emotionally engaged in this AI discussion, like Extinction Rebellion has done with the environment? What is the spark, what was the trigger? How can we get that propelling force behind this question of AI? Where are there currently very active pockets of people pushing on topics, not in the mainstream, picking very interesting fights?

Kate: a few examples:

  • a group in LA, Stop LAPD Surveillance, automating inequality, ranging from criminal justice to homelessness… working to uncover the ways in which algorithms are used for surveillance. People got together to demand that the police release their surveillance policies. They were able to uncover the full scope, pushing back against the LAPD. Intervention takes a long time and needs a lot of work.

Nadia: The question is what do you want to push against? Where are the injustices coming out of this?

Fabrizio: this example, like many US examples, concerns use in the judiciary and in social welfare, like deciding who gets a house in housing projects based on an algorithm. People need to have someone recognize them as human beings, but we aren’t recognized as human beings anymore; discrimination grows because the human-to-human contact is no longer there. And the problem is that there are so many more things happening that are not explicit. The way labor was exploited in capitalism brought enormous change; now this is being copied again, but it isn’t causing an enormous reaction. The unions are weak.

Household credit goes through banks, using information about our behaviour; families are charged a high rate: a reduction of social justice. Where does the right-wing voter come from? There is a big job to do in just making clear what is happening. What happened in the States was evident, but here we lack information; children should understand.

We need to:

  1. Educate: show how it can be used for bad and for good
  2. The judiciary needs to be brought into the game
  3. We need to bring back people’s wellbeing into it.

It isn’t necessarily about tech, it’s about people. The solution isn’t marketization.

Hugi: in Lebanon and Morocco there were riots over something related to AI: in one case the taxing, or banning, of WhatsApp. Because of income? Maybe. But WhatsApp had also introduced the Signal protocol. What it took for people to go to the streets and protest against the government was for rights to be removed that they didn’t even have a few years back.

Nadia: What new principles for rights have emerged? Flaws in the tools people are using; the relationship to data and economic exploitation; pushing back against infringements on their digital life. People push back when something happens. Where would you react? When was the last time you got mad about something that happened in your digital life?

Kate: comparison between the EU and the US is very important.

PART 2:

Roberto: I don’t know much about tech and AI, although I’m really interested. Trying to think of experiences, personal ones. We looked at the relationship between social struggles, justice movements and technology from some specific angles, but maybe there are others interesting to explore. The WhatsApp example touches upon our digital freedom, but there are also examples of movements that used tech, and even social networks, to structure and expand their capacity to mobilize people for a cause. One of the most debated issues in Europe now is migration, and in Brussels we have an interesting example: a citizens’ platform to support refugees and undocumented migrants. In the space of a few years, what began as just people who felt it was unacceptable to leave others sleeping outside without any support or institutional mechanism or police grew into a movement. People felt they could do something and started mobilizing, using Facebook as the main way to connect. It grew very fast, with various pages of the movement. Most of these people are actually engaged beyond being active on Facebook: they use it to organize activities in real life. Now an NGO has been created.

Nadia: an interesting point of entry, in terms of mobilization and justice, and the perception of the actions and reactions of justice. We’ve seen the same thing playing out in different places for different reasons. Obvious injustice: the link between the situation on the ground and the AI world where this is scaled up. What happens when it scales up?

Fabrizio: labor, women, kids, citizens, consumers. People can react to a machine, but it is also a reaction to social issues: national bargaining in the negotiation of algorithms. People are coming together against the gig economy. The platform is private, it’s proprietary, and they are angry. They know there is misuse.

In the US, if you read the amount of literature by women on gender, there’s an abundance, but within the EU this is not the case.

With regard to kids’ privacy, there are flashes of anger, but no movements.

With users of mobility services, there is some action from citizens. For example in Milan they are now trying to launch a platform for mobility.

On anger, you always have to have a structure. Otherwise it may be used by the wrong people or die out.

Social justice: is this algorithm being used to enhance or to reduce the capabilities of humans? Across all the different fields.

Putting structure on the anger: you need to turn it into a demand.

Nadia: In NL, Deliveroo drivers won a court case, but the company is ignoring it. Even enforcement is a challenge.

Participant B: For people to react… in most cases people are blind; they don’t know it, they find it normal. They are not aware. And people have the right to know about the data. What is the next step? We should perhaps be made aware of the algorithm. This is probably the next step in data protection.

Nadia: Talking about algorithms and making them visible: the companies will say “this is our intellectual property”. Are there examples where the state has gone in and made similar claims?

Participant B: It is always coming back to different interests.

The national security question: for health and safety issues

Hugi: maybe five years ago FB opened its graph, and it opened up FB’s capabilities. I was having a blast with it; it was a party trick for me. This is what Cambridge Analytica used. We were all able to become our own CIAs.

It’s not only that it gives us more capability, but what do we use these capabilities for? Maybe some have too much responsibility with it?

Kate: Why are we putting the wellbeing of people behind proprietary systems? There’s an example of care in the States where, due to an algorithm, people’s care was reduced from 8 to 4 hours. But they didn’t want to explain how the system worked. They were violating the rights of these people.

Another state in the US: unemployment benefits. Combining two different data sets, the system wasn’t able to sync them together, so people weren’t able to receive unemployment benefits. Companies will always hide behind trade secrecy, but why does the state consider this to stand above human rights? Why do they say they need an algorithm?

Two questions:

  1. if actors say they need algorithms: why?
  2. What are the reasons people are unable to act upon algorithmic issues in society?

Participant B: They should at least disclose something.

Roberto: If we trust private companies with those responsibilities, why should we not give people those responsibilities?

Hugi: the problem here is that all of this data exists and all of a sudden it was opened. It is important to recognize that this was just a side effect.

Liana: the problem isn’t the algorithm. We have a tool: like a knife, it can be used for cooking in the kitchen or for killing someone. Same with algorithms. Algorithms and engineers cost a lot of money.

Participant C: It’s not only the engineers, it’s the data. It comes back to what you want to do with the data. To have mass-scale data you need to have the right systems. I used to work in a data department for marketing purposes. I know how difficult it is to use the data, because it is seldom up to date. There is an assumption that the information in the system is right, but it is not, and this is worse: you get even more chances that decisions are made wrongly. With mass-scale data in the US, a lot of people didn’t realize that their data was used by FB, and then they realized it wasn’t correct data: outdated, or just wrong. And there is no reflection of reality; things change. And people don’t want to be categorized, so they don’t put in the right information. For all types of reasons, you’ll have the wrong data.

What struck me in Belgium recently: there was a discussion about participatory leadership, where people from the public were selected, and the people were ready to have their data given out for public health, for the public good, but not for other reasons. So the question should be: what do we want to use our data for? For the public good? In exchange, as a citizen you have the responsibility to give correct data.

Fabrizio: if you perceive that the data is being used out of your control, then you boycott it. These are the results of systematic mistakes: decisions have been taken by private corporations, and when the public sector uses them it does so for its own purposes. The problem is not in the algorithm, but our pressure is taken out of context; the common discourse has taken the public sector out. There is even the illusion of the internet as something outside the control of the state. The state has been wrongly used. Even in education: teachers in Italy weren’t given explanations of why they were being placed somewhere. But this is not because of the data; the algorithm was badly used.

Participant C: What you are saying is that people aren’t taking responsibility. It’s a question of governance and responsibility.

Participant D: The question is how do we use artificial intelligence, how do we trust the results, how do we use the results?

Hugi: AI gives advice to a human, and the human pulls the trigger. But just because we put a human in the middle, the decision won’t be better. We are terrified of being held responsible for wrong decisions. So, great, there is this computer system that is helping us make them. Wonderful.

Nadia: we come to the topic of agency: trying to figure out what the space of agency, the available playing field, is for the individual… There are points at which the human decides and points at which decisions are made for humans. Where is the space in which humans can have agency? How should we think about it?

Seda: We are trying to understand these new systems as optimization of their functions. What happens when the system causes externalities? Optimization systems make population-wide decisions from a bunch of signals. Take the example of Uber (feedback loops to maximize their profits), or the app sensing cars and routing you out of traffic: it caused problems in smaller streets, the app maker wouldn’t change the algorithm, and the municipality couldn’t do anything, so citizens installed roadblocks as a solution. Can we build systems to help people against optimization systems? POTs (protective optimization technologies). Facebook matches people to advertisers: people are subjected to data getting to them regardless of how they interact on the platform. Agency is very limited, because it’s about the whole infrastructure.
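
A toy illustration of the externality point (hypothetical numbers, my own sketch, not any real routing app): a router that minimizes only driver time sends everyone down the small street; once the neighbourhood’s cost is internalised in the objective, the route changes.

```python
# Two routes, hypothetical per-car costs. "neighborhood_cost" is the externality:
# noise, danger to kids playing, wear on a street not built for through-traffic.
routes = {
    "arterial":     {"driver_minutes": 12, "neighborhood_cost": 0},
    "small_street": {"driver_minutes": 9,  "neighborhood_cost": 6},
}

def route_choice(externality_weight: float) -> str:
    # The optimizer minimizes a single scalar; whether the externality counts
    # at all is decided by whoever writes this objective.
    return min(routes, key=lambda r: routes[r]["driver_minutes"]
                                     + externality_weight * routes[r]["neighborhood_cost"])

print(route_choice(0.0))  # 'small_street': cost fully externalised onto residents
print(route_choice(1.0))  # 'arterial': externality internalised, the route changes
```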

Hugi: my friend always jokes about how a teenage obstructor would behave; like an anarchist cookbook for it.

Seda: We are not into gaming the system, but more into protecting. Like with Pokémon: it only used to show up in whiter neighborhoods, because of the Ingress game it was founded on, which was only used by early adopters. People gamed it by spoofing their GPS etc., but that shouldn’t be the solution. Things are more effective if they belong to you, and now they don’t.

I read about this game where you run a zoo, and everybody tried to mod the game, but they always had a few pigs in their zoo they couldn’t sell.

Nadia: if you were to go through the question, what is the set of questions to be asked?

Seda: We can think together. Two constraints, or examples, that are worrying: one is SoftBank: even if you complain a lot, there is a huge investor; can you, even as a bigger union, get enough leverage against the power of capital? The other: a company wasn’t able to compete with the big actors in relation to a residential area (?).

Hugi: are we not being radical enough? What if, whenever we put a sensor anywhere out in a public place, including on a car, that data MUST be completely public? If we make that kind of legislation…

Seda: we can maybe make a local app that can sort of compete, but it won’t survive; it doesn’t have the capital the other has.

Hugi: we have a lot of things, like roads, which are public; you can’t take a public road and make it private. But for some reason we don’t have the same idea about data. Why not?

Nadia: child pornography, security, terrorism? These are like nuclear bombs when you drop them into the conversation. In all of these situations there is some element that has implications for, or facilitates something for, the counterpart, or could end up with child porn being passed around. The default of parents sharing photos of their children: why is that not seen as potentially facilitating child trafficking etc.? The rationale of the conversation changes. What changes how we look at this stuff? What are the potential bad scenarios, the horrific scenarios that can come up? There is a limit to our imagination of what can happen.

Liana: we have image recognition, we can create images, so if an algorithm is able to shape — it is possible, and that is scary for me as a parent.

Kate: we already have that, we already have these systems. But we need to ask different questions, we need to ask a fundamental question: why are we buying into these systems, and who holds the power? Individual agency is extremely limited, so we need to look collectively at taking action.

Fabrizio: we are not talking enough about the consequences. I personally do not feel threatened by automation, but many people are. I don’t believe any of the numbers that go around. What is really happening is a change in the quality of jobs, with repercussions on people’s lives and relations, driven by the capacity of algorithms… people are being told from one day to the next whether they will work. This is killing their lives. The unions, representing labor, have a reason to mobilize people. If you are looking for a bombshell I would study them more, and the role of people like us. The unions are not equipped these days; we were told they were not necessary anymore, and they are sometimes conservative. I would invest in building the unions’ capacity.

Participant D: it’s not unions, because people don’t recognize themselves in them.

Hugi: this was always the case: people don’t recognize themselves in them, start something new, and it starts all over again.

Fabrizio: it’s like the Amazon example: they weren’t unionized, but they protested, made demands and got them, and unionized afterwards. But I am stressing that labor is a large area where things come together.

Nadia: pushing it forward: we have a sense of the scale at which things can go badly, and a sense of what that may mean for the physical ability to survive. What does it mean that families are threatened on a Sunday about whether they will work?

Hugi: people have capabilities, and people are upset; we have networked systems. If people were to map out all the Uber drivers and ping them, they would do something.

Seda: workers at Amazon wanted to strike, but Amazon knew and got 200 extra people so they could continue. After that they created another warehouse, so it couldn’t be disrupted.

There are going to be local wins for a bit, but it will take a bit for a larger win.

Fabrizio: The three steps: make them pay taxes; mobility of capital

We are discussing this within these constraints, because we can’t change the whole world; but if there’s some solidarity between the unions — organised labor — and if we can give them higher capacity, then they won’t let people be abused; it is doable. They are very weak technologically speaking; there is no negotiation anymore.

Hugi: we are from Sweden: no minimum wage, negotiation between unions; but at some point legislation was put in place that allowed their power to be lowered. We have to make sure that at the EU level these companies can’t act in this way. We need to find out what those things are.

Nadia: to bring it together, a central point of action is organised labor, and what it needs to be equipped with, not only to articulate demands but also to coordinate, in a decentralized manner.

Hugi: we can take labor out and look at it more generally as collectives.

Seda: Much of the work in AI is about migration and race. We shouldn’t go only the labor way. The migrants are getting organized, but they don’t have labor unions.

Kate: There is not going to be a single collective that will resolve this.

Nadia: It is an engineering thing as well: making visible how the discrimination, or whatever is affecting that group, is invisible. The exclusion and marginalization are not necessarily visible. There are no special mechanisms protecting certain groups.

Hugi: gender has become more visible in the last ten years because the perception changed. The oppression that comes with capital is something that became visible with the social movement. Things can be made visible.

Participant E: Doing the job of uncovering the “evil” is particularly relevant for mobilizing people sensitive to certain topics. Because things have started to be uncovered, there are small movements against the mainstream, but they cannot reach big mobilization and become mainstream. That you can only achieve through a positive message. We should feel we have a common fight. The issue has to be about values that we want to see and subscribe to. We need to touch upon the feelings that make us uneasy, but always accompanied by a sense of where we want to go, empathy and a feeling of sharing; otherwise the mobilization won’t happen.

Nadia: not sure I agree. Hong Kong as an example: a fight to the end. A driving force… I’m going to leave it at “I don’t know”. Organized labor and organizing labor: the point that popped up.

Roberto: Social and environmental conflicts are touching our lives; this is not directly labor.

Nadia: To sum up how the conversation went: making things visible, being aware that there is an algorithm. This is the first step.

Getting money for snitching on others…

The bigger things are picking up on this. I would encourage us, before we leave, to take five minutes, pull out your notes, and think of questions. Tomorrow I’ll send you an e-mail and you can contribute your notes. Calls to action: we will be looking at the things that emerged.

@mariaeuler @inge so the next step here is to break this out into separate posts, and then reach out to the individuals for the same process as with the individual interviews. This will shorten the time needed for each one to get their input on the platform, and we fulfil the consent requirement.

Fabrizio has already said he cannot be on the platform, but he is OK with sharing his notes.


Would be good to have this before the weekend @inge, as we have a follow-up meeting with Justin at FEPS and the policy design process, where we can maybe actually get this stuff into proposals on the table of von der Leyen.