AI, Inequalities and Justice: An overview of all documented discussions before, during and after the workshop

Content of this wiki

This is where we aggregate the input that is spread over many different posts on the platform. We are in the process of structuring the content below so we can add a table of contents here. If you have time to help that would be great! Instructions here: Adding a table of contents

Purpose & Objective

The purpose of the workshop was to look into some of the main technologies being developed in the Next Generation Internet debate and to explore how AI and Internet infrastructure affect indicators of equality and justice.

Outputs

We want to produce two outputs that weave together and package the wisdom we build in a form that is easy to act on and share.

SMART SOUND BITES

for policymakers: a list of bullet points that condenses everything we have learned into seven one-sentence insights and recommendations.

and

A VISUAL REPORT

In the form of a deck of slides that answers these questions:

a. What is happening right now “near” you
b. What it is doing to you right now
c. How will this affect you in the future
d. What can be done about it (how you can influence/control what happens)
e. Who is working on the issues/solutions for better outcomes
f. How you can contribute: as citizen, as regional/local civil servant, as national policymaker, as EU policymaker, as technologist, researcher, as politician (political party), as organised civil society leader etc

Our methodology

Physical events: Group discussions in physical workshops that are organised by members of the community with facilitation and resource support from Edgeryders. We all take good notes and share them with the organisers.

Online discussions: Virtual discussions on our online community forums. This is where participants reconnect after the physical gatherings to meet others in the community, access the documentation, online courses, more events, job opportunities etc.

Living handbook: Our platform functions as a kind of expertise directory where participants share their personal and professional experiences around the topics and issues of new internet technologies (incl. AI), Justice or Wellbeing of individuals and communities. Our community journalists help participants to transform their contribution to the online conversation into editorialised content that we then use to build a larger audience for their ideas, connect them around common themes and potential collaborations etc.

Questions that were touched on in the workshop

  1. The EU’s new Commission is looking to approve a new AI directive, and of course democratic participation is important to ensure a good law. What do you feel is necessary from the EU to enable participation?

  2. What is “investment in AI”? What should the EU do practically?

  3. Can we think of counterproductive regulation on the Internet? Stuff like the copyright directive? How did things that used to be public goods end up being someone’s property?

  4. Do we actually need AI? Why are we pushing for AI?

  5. Where do we bring the democratic process into new models?

  6. If we want to shape the common sense five or ten years from now, what will be considered relevant?

  7. What will general principles about what we consider good or bad look like, and how will they work in practice?

  8. What is something you are struggling with? What is out there to make our own biases less visible to the algorithms?

  9. How can we get emotionally engaged in the AI discussion? What do you want to push against? What injustices are coming out of this?

  10. In terms of mobilization and justice, and the perception of the actions and reactions of justice: we have seen the same thing playing out in different places for different reasons. Obvious injustice - what is the link between the local situation and the AI world, where this is scaled up? What happens when it scales up?

  11. Why are we putting the wellbeing of people behind proprietary systems? We have a lot of things like roads which are public; you can’t build a public road and then make it private. But for some reason we don’t have the same idea about data - why not?

  12. What are the reasons people are unable to act upon algorithmic issues in society?

  13. Where is the space in which humans can have agency?


The Workshop Discussions


1. The EU’s new Commission is looking to approve a new AI directive, and of course democratic participation is important to ensure a good law. What do you feel is necessary from the EU to enable participation?


  • A human-centered internet is not defined; that’s why you can’t find a solution. “Human” is literally anyone.

  • Why are we pushing for AI? The end of Moore’s law. The money sunk into developing faster chips is unlikely to come back. Industry took a risk on specialised chips: AI-specific and blockchain-specific. So now they need to create demand for these chips. We should be aware of the vendor-driven nature of this stuff, and think about how to regulate these companies.

  • The business model of Big Tech now is based on keeping the code, providing cloud services and collecting data to profile and ultimately influence users. They cross-subsidize their cloud services with their marketing revenue streams. If you want to fight that, you should have public cloud infrastructure, which is a trillion-EUR affair. That’s your investment in human-centric AI infrastructure.

  • Companies like Google use AI to optimise the profiling of their customers. The difference is whether it is a data-centred AI or not. … ? Where do we bring the democratic process into new models?

  • Are we really at the end of Moore’s Law? This claim is not uncontested.

  • In Belgium there is a company called It’sMe, which does online identity. They are pushing their data onto Azure, making it GDPR and eIDAS compliant. That changed my thinking, because it showed me that even these sterling-silver pieces of European thinking end up in the Silicon Valley cloud.

  • We need to break the number-person relationship: when you are born, you receive a number from the state, and the game’s up. I dream of disposable identities that we set up for the purpose of entering into a relationship, for example receiving a service. A good analogy is one-time email addresses: you get an email address, you sign up to some online service with it, use it ONCE to receive the confirmation email proving you control that address, and then it self-destructs.

  • I’m a bit skeptical that legislation is the right instrument to do this kind of thing. It’s a blunt instrument, because as of now there is no good understanding of the process that leads from ethical values (fairly easy to encode in a directive) to the actual technical choices of the scientists and engineers building AI systems. For example, we like the New Zealand group developing Scuttlebutt: when users asked for cross-device accounts, the core developer replied “I’m not doing that, because we want to serve underprivileged people, and these people do not have multiple devices”. What he was doing was justifying his technical choices in terms of his values. As a socio-technical culture, we are not good at that.


2. What is “investment in AI”? What should the EU do practically?


  • Climate + competition policy = nobody is allowed to grow too much. What is “empowering”? I suspect that AI (ML applied to big data) is inherently NOT human-centric, because its models (for example a recommendation algo) encode each human into a database and then model you in terms of who you are similar to: for example, a woman between 35 and 45 who speaks Dutch and watches stand-up comedy on Netflix. Everything not standardizable, everything that makes you you, gets pushed to the error term of the model (see the sketch after this list). That is hardly human-centric, because it leads to optimizing systems in terms of abstract “typical humans” or “target groups” or whatever.

  • The business model of Big Tech now is based on keeping the code, providing cloud services and collecting data to profile and ultimately influence users. They cross-subsidize their cloud services with their marketing revenue streams. If you want to fight that, you should have public cloud infrastructure, which is a trillion-EUR affair. That’s your investment in human-centric AI infrastructure.

  • Companies like Google use AI to optimise the profiling of their customers. The difference is whether it is a data-centred AI or not. … ? Where do we bring the democratic process into new models?

  • Governance instead of government. Here’s my take: the kind of society you are looking at is anarcho-communist. The communist part is that infrastructure is fully centralized, like in the Trente Glorieuses. Anything that runs on the infrastructure is fully permissionless: that’s the anarchist part. And finally, the data stays with the people.

  • I agree with the antitrust principle. What we consider AI is machine learning; this is the root of our problems, and it’s just classical statistics. It creates the illusion that value production is shifted to the generation…. For example, if the doctor establishes a relationship with the patient, the AI should not invade that relationship; but now the doctor has to take notes, and the patient sees less value in it (you look somewhere else and you are centralised…).

  • Airports before and after Ryanair: before, the architecture was associated with functions and with your experience of travel, with room for different functions. Now, with low cost, the experience is considered less valuable: the airport is a shopping mall, and the gate is just temporary information about space.

  • Medicine is not diagnosis-based, but prognosis-based. In medicine, nothing has value if I cannot improve your life. A patient can and should negotiate with the doctor where to go from here; he or she even has a right to die. This is a very poor fit for how ML-on-big-data works.
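
A minimal numeric sketch of the point raised in the first bullet above, about the error term. All data and coefficients here are made up for illustration and are not from the workshop: a model that only knows coarse “target group” features predicts the same thing for everyone in a group, and whatever is individual about a person lands in the residual.

```python
# Toy illustration: "group features in, individuality pushed to the error term".
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Coarse "target group" features: age bracket, language, likes stand-up comedy.
age_35_45 = rng.integers(0, 2, n)
speaks_dutch = rng.integers(0, 2, n)
likes_standup = rng.integers(0, 2, n)
X = np.column_stack([np.ones(n), age_35_45, speaks_dutch, likes_standup])

# True behaviour = group effects + a purely individual component.
individual_quirk = rng.normal(0, 2.0, n)          # "what makes you you"
y = 1.0 + 0.5 * age_35_45 + 0.3 * speaks_dutch + 1.2 * likes_standup + individual_quirk

# Fit the model a recommender would use: least squares on group features only.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
residuals = y - X @ beta

# The prediction is identical for everyone in the same "target group";
# the individual component is unexplained and lands in the residual.
print("fitted group coefficients:", np.round(beta, 2))
print("share of variance pushed into the error term:",
      round(float(np.var(residuals) / np.var(y)), 2))
```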


3. Can we think of counterproductive regulation on the Internet? Stuff like the copyright directive? How did things that used to be public goods end up being someone’s property?


  • The internet changes the balance of power. We behave like we are in the pre-internet era, because the internet is a new technology and it takes longer for us to change behaviours.

  • Airports before and after Ryanair: before, the architecture was associated with functions and with your experience of travel, with room for different functions. Now, with low cost, the experience is considered less valuable: the airport is a shopping mall, and the gate is just temporary information about space.

  • In Helsinki everybody agreed that you can’t sell your data.

  • We optimize relationships. This is problematic. For example, we have marketing companies working with civil liberties organizations, that tell their clients: we can tell you which of your donors care about which issues, so you can optimize for issues where you get more money.

  • Optimisation has been the logic so far, but it can still go wrong. Traffic is a perfect example. If you look at an optimization system, it comes at a cost, and if you don’t internalise that cost it causes bigger damage. Apparently innocent choices about what to optimize for turn out to have huge consequences. When you design traffic, do you optimize for driver safety? For pedestrian safety? Or to minimize time spent on the road?

  • On feedback: I know the GDPR, it’s regulated anyway. There are serious problems with how the GDPR focuses on the personal data of individuals.

  • We talk about automated decisions, but there is a set of criteria … the optimization is not about the single person, but about people as part of a certain group for which certain choices are made in general. So you expect to empower people, but actually you don’t. Paul-Olivier Dehaye is trying to pool data from different workers to see if people are being discriminated against. This is called a “data trust”. There is also the DECODE project, which is doing something they call a “data commons”.

  • We are now in a mess with the GDPR. Uber, for example, uses it to refuse to give drivers their data, saying this impacts the rights of the riders. This (says Seda) is the consequence of focusing on personal rights when the data are used to optimize over populations.


4. Do we actually need AI? Why are we pushing for AI?


  • I just read a review of 37,000 studies of AI in medicine. Of these, only about 100 had enough information on training datasets to do a meta-analysis on. Of these, 24 claimed a prospective design (the algorithm had been trained without knowing the real data); of these, zero had actually done a prospective design.

  • … I don’t know of any real medical problem that a prospective AI model can solve.

  • Medicine is not diagnosis-based, but prognosis-based. In medicine, nothing has value if I cannot improve your life. A patient can and should negotiate with the doctor where to go from here; he or she even has a right to die. This is a very poor fit for how ML-on-big-data works.

  • Why are we pushing for AI? The end of Moore’s law. The money sunk into developing faster chips is unlikely to come back. Industry took a risk on specialised chips: AI-specific and blockchain-specific. So now they need to create demand for these chips. We should be aware of the vendor-driven nature of this stuff, and think about how to regulate these companies.

  • The business model of Big Tech now is based on keeping the code, providing cloud services and collecting data to profile and ultimately influence users. They cross-subsidize their cloud services with their marketing revenue streams. If you want to fight that, you should have public cloud infrastructure, which is a trillion-EUR affair. That’s your investment in human-centric AI infrastructure.

  • Companies like Google use AI to optimise the profiling of their customers. The difference is whether it is a data-centred AI or not. … ? Where do we bring the democratic process into new models?


5. Where do we bring the democratic process into new models?


6. If we want to shape the common sense five or ten years from now, what will be considered relevant? What will general principles about what we consider good or bad look like, and how will they work in practice?


  • I want to clearly distinguish between systems of exploitation and the technology itself. Any system of exploitation, including capitalism, will find and use any new technological option in exploitative ways. The history of digital innovation, and of media innovation more generally, is quite rich in that: before AI for behavior prediction it was targeted advertising, tracking, mass surveillance programs by governments, weaponized drones, propaganda micro-targeting, mass media used for state propaganda etc…

  • That’s purely a social problem, not a technological one. Adding one more technology cannot make it much worse … nukes are already around, so what technology could offer significantly worse potential outcomes? So to deal with it, you don’t have to make legislation for AI, but against capitalism.

  • AI in itself is a nice tool in the toolkit, and easily allows for beneficial use. Personal example: my open source coffee sorter project uses deep learning based image classifiers to relieve small-scale farmers from sorting their coffee by hand.

  • Technology leads to bad or good results: Principle of social justice

  • Amartya Sen’s principle of substantial freedom, a sustainable substantial freedom: we need to have the capacity/capability to live the life we want.

  • As a worker, I want AI to help me avoid discrimination in subjective interviews, but not to be used to discriminate against me because of the previous behavior of “my” group.

  • My freedom should be enhanced

  • Putting on the table a simple methodology that allows us to understand the just and the unjust.

  • This kind of principle is not a ready-made one; it’s a methodology, an open-ended principle - it can be filled with content, case by case, only through the participatory mechanism.

  • Different subjective values in different contexts can lead to conflict - proceeding case by case through participatory processes is the solution, not just regulation.

  • We slowly develop what we consider to be just

  • People become the owners of and agents over their own data. The Great Hack. Fair use, data economy. Ethical data operations on a global scale, with a decentralized system and a fair European data economy; we are currently colonized - Europe only has a 3% market share.

  • In the data economy there are 4 models. Lanlard: your money is worth up to 20,000 euros (??). Three other models, like in Barcelona; NGI calls it DECODE. They consider data to be a common good of a city or community. There are 4 different types of these data economy models.

  • MyMobility: own your data. People are not really aware that they can manage their own data. Energy and transportation have the most sensitive data.

  • Inform civil society and create new business models.

  • I look at automation and documentation in violence registration: do universities offer a support system with a trauma-informed interface? Your claim goes through it, is run through all the other cases, and is then matched with all the other cases…

  • I have problems explaining the problems these systems have. What these systems are doing is trying to categorize our experience of misconduct into binary options. You have to have a clear definition of rape, otherwise it doesn’t fit in the system. People are shoe-horned into particular categories, but this doesn’t speak to the richness of the experience you have had.

  • With criminal justice we see how people are categorized. Wider categories of human improvement are not classified.

  • This is where the question of civil society is very important: it’s difficult to understand how AI makes decisions and categorizes us, and we can’t anticipate it. We don’t even know where the data comes from and who it is being sold to.

  • People don’t have the tech literacy to question this. Should citizens have this knowledge?

  • What harm is and what freedom can be?

  • The individual needs vs individual capabilities, and the collective needs and capacities. One person may be fooled by a system, as with Cambridge Analytica.

  • I’m more interested in this: if we know what is wrong, how can we mobilize people? And right now we are asking: what is the thing that we are mobilizing about?

  • A lot of the time people are faced with systemic constraints or binary boxing-in. For example: employment models aren’t working, you can’t break out of them on your own, and you have to work with others on new ideas to make it work.

  • You can only break through these models and narratives by working together and creating new narratives.

  • When we know what we want to do and where to go, we can create a new future, something completely different from what the AI is predicting, perhaps. Thus we create a space that is more free.

  • Participatory art - art that throws away the separation between the art and the spectator. Because of the consumerist society… the point is not to create “great art” but a great transformative experience together with other people. Example of the Borderland: people come into this world, the mindset changes, and lots of the traditional concepts start to break. Creating a completely new reality. Tackling the imposed machine. There is something there, but we don’t know what yet.


7. What will general principles about what we consider good or bad look like, and how will they work in practice?


  • Following the “substantial freedom” framework, a code of behaviour should be established and monitored, by which all categories playing a role in developing algorithms - scientists developing algorithms, politicians supporting/calling for their use, public and private administrators using them - should ask themselves: what is the effect on the capabilities of the people affected by the algorithms?

  • Analogic/dialogic …. online/offline: experimenting with platforms combining the two is a priority. In particular, it should be standard practice for public administrations and politicians to create dialogic platforms where the above code of behaviour is checked.

  • Education and awareness-raising on the implementation of the above conceptual framework, in order to identify good/bad uses, is a priority. It should first of all engage kids from a very early age: a new saga of cautionary tales … would help.

  • Since AI can produce either bad effects (harm) or good effects … the appropriate conceptual framework should offer a definition of bad/good so as to

    • assess current uses of AI,
    • mobilize people both “against” bad uses and “for” good uses,
    • design, experiment and revise alternative “good” uses
  • Sen’s concept of Substantial Freedom (A Theory of Justice) is a choice: the capability approach, where we consider “good” all moves/actions that improve the capacity/capability of every person to do what they consider valuable (not futile capacities, but ones central to one’s life: dignity, learning, health, freedom to move, human relations, …)

  • This approach does not encapsulate what is “bad” or “harmful” in a rigid box, but it calls for a participatory/societal assessment of how the general principle should be implemented, an assessment that can be continuously updated through public debate.

  • Not so new: it was used by lawyers in the corporate assessment of “just”/“unjust” fiduciary duties and of the business judgement rule in the US.

  • Which reminds us that the participatory process/platforms can be of different kinds:

  • In the judiciary

  • In working councils

  • In town councils a la Barcelona

  • Mixing dialogical/analogical debate

  • I look at automation and documentation in violence registration: do universities offer a support system with a trauma-informed interface? Your claim goes through it, is run through all the other cases, and is then matched with all the other cases…

  • I have problems explaining the problems these systems have. What these systems are doing is trying to categorize our experience of misconduct into binary options.

  • You have to have a clear definition of rape, otherwise it doesn’t fit in the system.

  • People are shoe-horned into particular categories, but this doesn’t speak to the richness of the experience you have had.

  • With criminal justice we see how people are categorized. Wider categories of human improvement are not classified.

  • This is where the question of civil society is very important: it’s difficult to understand how AI makes decisions and categorizes us, and we can’t anticipate it. We don’t even know where the data comes from and who it is being sold to.

  • People don’t have the tech literacy to question this. Should citizens have this knowledge?

  • What harm is and what freedom can be?

  • The problem here is that all of this data exists and all of a sudden it was opened up. It is important to recognize that this was just a side effect.

  • Maybe five years ago Facebook opened its graph, and that opened up Facebook’s capabilities. I was having a blast with it; it was a party trick for me. This is what Cambridge Analytica used. We were able to become our own CIAs.

  • It’s not only that it gives us more capability, but what do we use these capabilities for? Maybe some have too much responsibility with it?


8. What is something you are struggling with? What is out there to make our own biases less visible to the algorithms?


  • The individual needs vs individual capabilities, and the collective needs and capacities. One person may be fooled by a system, as with Cambridge Analytica.

  • Elections, Cambridge Analytica, Facebook.

  • Algorithms like Facebook’s are pulling us into a certain bias: gathering data and classifying a person based on a lot of data around them. If I am moved from one reality to another, the categorization of me is probably not the same. Even within a certain culture, behaviour and constant interaction are influenced by culture.

  • If I myself am less biased, the algorithms are less likely to classify me.

  • How conscious I am myself about my own bias.

  • It is really easy to respond to this with: oh, maybe if I only share part of my data, then that could be a meaningful intervention. Sometimes it can be. Mozilla, for example, has add-ons. We can do that individually.

  • But maybe the scope of our change should go beyond individual actions.

  • Content moderation, in books by Mary Gray or Sarah Roberts: tracking how tech companies move their content moderation work to India, for example, where the workers are paid a low rate and required to impose Western standards on what behaviour on social media should look like. The norms are being translated by the economic models of the platforms.

  • The scale is so much bigger than us.

  • What are some of the lessons civil society has learned in the past, looking at the ways we can translate this?


9. How can we get emotionally engaged in the AI discussion? What do you want to push against? What injustices are coming out of this?


  • A group in LA - Stop LAPD Surveillance - works on automating inequality, ranging from criminal justice to homelessness… to uncover the ways in which algorithms are used for surveillance. People got together to demand that the police release its surveillance policies. They were able to uncover the full scope, pushing back against the LAPD. Intervention takes a long time and needs a lot of work.

  • The problem isn’t the algorithm. We have a tool: like a knife, it can be used for cooking in the kitchen or for killing someone. Same with algorithms. Algorithms and the engineers cost a lot of money.

  • We have image recognition, and we can create images, so if an algorithm is able to shape — it is possible, and that is scary for me as a parent.

  • Use of AI in judicial and social welfare systems, like housing projects where who gets a house is decided by an algorithm. People need to have someone recognize them as human, but we aren’t recognized as human beings anymore; discrimination grows because the human-to-human contact is no longer there. And the problem is that there are so many more things happening that remain implicit. When labor was exploited under capitalism, an enormous change happened. Now this is being copied again, but it isn’t causing an enormous reaction. The unions are weak.

  • Credit for a household going to banks: information about our behaviour. Families are charged a high rate; the reduction of social justice. Where does the right-wing voter come from? There is a big job in just making clear what is happening. What happened in the States was evident, but here we lack information; children should understand.

  • We need to:

  • Educate: show how it can be used for bad and for good.

  • The judiciary needs to be brought into the game.

  • We need to bring people’s wellbeing back into it.

  • It isn’t necessarily about tech, it’s about people. The solution isn’t marketization.


10. In terms of mobilization and justice, and the perception of the actions and reactions of justice: we have seen the same thing playing out in different places for different reasons. Obvious injustice - what is the link between the local situation and the AI world, where this is scaled up? What happens when it scales up?


  • Labor, women, kids, citizens, consumers. The reaction can be to a machine, but it is also a reaction to social issues; national bargaining in the negotiation of algorithms. People are coming together against the gig economy. The platform is private, it’s proprietary, and they are angry. They know there is misuse.

  • In the US, if you read the amount of literature by women on gender, there’s an abundance, but within the EU this is not the case.

  • With regards to kids’ privacy, there are flashes of anger, but no movements.

  • With users of mobility services, there is some action from citizens. For example, in Milan they are now trying to launch a platform for mobility.

  • On anger, you always have to have a structure. Otherwise it may be used by the wrong people or die out.

  • Social justice: is this algorithm being used to enhance or to reduce the capabilities of humans? Across all the different fields.

  • Putting the structure on the anger. You need to turn it into a request.

  • We’re not talking enough about the consequences. I personally do not feel threatened by automation, but many people are. I don’t believe in any of the numbers that go around. What is really happening is about the quality of jobs, with repercussions on people’s lives and relations, and the capacity of algorithms… people are being told from one day to the next whether they will work. This is killing their lives. The unions, representing labor, have a reason to mobilize people. If you are looking for a bombshell I would study them more, and the role of people like us. The unions are not equipped these days; we were told they are not necessary anymore, and they are sometimes conservative. I would invest in building the capacity of the unions.

  • It’s like the Amazon example: they weren’t unionized, but they protested, made demands and got them, and unionized afterwards. But I am stressing that labor is a large area where things come along.

  • We are trying to understand these new systems and the optimization of their functions. What happens when the system causes externalities? Optimization systems make population-wide decisions based on a bunch of signals. Example of Uber: feedback loops to maximize their profits. The app sensing cars and routing you out of the traffic caused problems in smaller streets; the app wasn’t able to change the algorithm, the municipality can’t do anything, and the citizens install roadblocks as a solution. Can we build systems to help people against optimization systems? POTs (protective optimization technologies). Facebook matching people to advertisers: people are subjected to data getting to them regardless of how they interact on the platform. Agency is very limited, because it’s about the whole infrastructure.

  • Workers at Amazon wanted to strike, but Amazon knew and brought in 200 extra people so they could continue. After that they created another warehouse, so operations couldn’t be disrupted.

  • There are going to be local wins for a while, but it will take time for a larger win.

  • Most of the work in AI is around migration and race. We shouldn’t go only the labor route. The migrants are getting organized, but they don’t have labor unions.

  • In Lebanon and Morocco there were riots over something related to AI: in one case the taxing, or banning, of WhatsApp. Because of income? Maybe. But WhatsApp had also introduced the Signal protocol. What it took for people to go to the streets and protest against the government was for their rights to be removed (rights they didn’t even have a few years back).

  • People have abilities, and people are upset. We have networked systems: if people mapped out all the Uber drivers and pinged them, they would do something.

  • There’s an example of care in the States where, due to an algorithm, people’s care was reduced from 8 to 4 hours. They didn’t want to explain how the system worked, but they were violating the rights of these people.

  • Another state in the US: unemployment benefits. Combining two different data sets, the system wasn’t able to sync them, so people weren’t able to receive unemployment benefits. Companies will always hide behind trade secrecy, but why does the state consider this to stand above human rights?

  • Justin had a few examples we should add here


11. Why are we putting the wellbeing of people behind proprietary systems? We have a lot of things like roads which are public; you can’t build a public road and then make it private. But for some reason we don’t have the same idea about data - why not?


  • If you perceive that the data is being used out of control, then you boycott it. The results of systematic mistakes: decisions have been taken by private corporations, and when the public sector uses them it does so for its own purposes. The problem is not in the algorithm, but our pressure is taken out of context; the common discourse has taken the public sector out. There is even the illusion of the internet as something outside the control of the state. The state has been wrongly used. Even in education: the teachers in Italy weren’t given explanations of why they were being placed somewhere. But this is not because of the data; the algorithm was badly used.

  • Are we not being radical enough? If we put a sensor anywhere out in a public place, including on a car, then that data MUST be completely public. What if we made that kind of legislation?

  • We are from Sweden: no minimum wage, negotiation between unions and employers, but at some point legislation was put in place which allowed companies to lower that power. We have to make sure that at the EU level these companies cannot act in this way. We need to find out what those things are.

  • Gender has become more visible in the last ten years because the perception changed. The oppression that comes with capital is something that became visible through social movements. Things can be made visible.

  • There’s an example of care in the States where, due to an algorithm, people’s care was reduced from 8 to 4 hours. They didn’t want to explain how the system worked, but they were violating the rights of these people.

  • Another state in the US: unemployment benefits. Combining two different data sets, the system wasn’t able to sync them, so people weren’t able to receive unemployment benefits. Companies will always hide behind trade secrecy, but why does the state consider this to stand above human rights?


12. What are the reasons people are unable to act upon algorithmic issues in society?


  • Something as vague as “my data” became very big; most people understand there is something about data we should care about. We only need this data once, but later it becomes governed by something different. The algorithm becomes irrevocable. How do you challenge that? Even if you own your data, people will sell it.

13. Where is the space in which humans can have agency over these issues?


  • The three steps: let them pay taxes; mobility of capital

  • We are discussing this within these constraints because we can’t change the whole world, but if there is some solidarity between the unions (organised labor), and if we can give them higher capacity, then they won’t let their members be abused; it is doable. They are very weak technologically speaking; there is no negotiation anymore.

  • In order to move ahead and to make the most of all the ideas and human resources working on AI these days, we need both to understand where potential mobilization is taking place or might take place, and to have a conceptual framework to move ahead.

    • Since AI can produce either bad effects (harm) or good effects … the appropriate conceptual framework should offer a definition of bad/good so as to
    • assess current uses of AI,
    • mobilize people both “against” bad uses and “for” good uses,
    • design, experiment and revise alternative “good” uses
  • Sen’s concept of Substantial Freedom (A Theory of Justice) is a choice: the capability approach, where we consider “good” all moves/actions that improve the capacity/capability of every person to do what they consider valuable (not futile capacities, but ones central to one’s life: dignity, learning, health, freedom to move, human relations, …)

  • This approach does not encapsulate what is “bad” or “harmful” in a rigid box, but it calls for a participatory/societal assessment of how the general principle should be implemented, an assessment that can be continuously updated through public debate.

  • Not so new: it was used by lawyers in the corporate assessment of “just”/“unjust” fiduciary duties and of the business judgement rule in the US.

  • Which reminds us that the participatory process/platforms can be of different kinds:

  • In the judiciary

  • In working councils

  • In town councils a la Barcelona

  • Mixing dialogical/analogical debate

  • Which are the potential ACTORS of mobilization?

    • Organised Labour: automation, bad jobs, gig economy, low wages
    • Women: AI machism
    • Kids: sexual abuse, pornography
    • Citizens: mobility
    • Citizens: health … a dimension of life where people are becoming aware of bad uses (insurance, use of DNA)
  • Therefore, one should move forward according to the following sequence:

    • Identify, country by country or even place by place, where people’s concern about bad uses (or about forgone good uses) is high and where it is being organised: i.e. where there is a potential demand for “competence support”, at both the conceptual and the technological level
    • Concentrate on these contexts and provide them with the conceptual framework
    • Turn the “pars destruens” into a “pars construens”
    • Build at EU level a network that allows horizontal comparison of the same issues, both of threats, actions and results
    • Ensure at EU level the availability of a competence centre that deals with (not necessarily solves, but at least identifies and tackles) the meta-obstacles preventing the implementation or curtailing the survival of “good” uses.
  • Complementary activities that emerged from debate:

    • Following the “substantial freedom” framework, a code of behaviour should be established and monitored, by which all categories playing a role in developing algorithms - scientists developing algorithms, politicians supporting/calling for their use, public and private administrators using them - should ask themselves: what is the effect on the capabilities of the people affected by the algorithms?
    • Analogic/dialogic …. online/offline: experimenting with platforms combining the two is a priority. In particular, it should be standard practice for public administrations and politicians to create dialogic platforms where the above code of behaviour is checked.
    • Education and awareness-raising on the implementation of the above conceptual framework, in order to identify good/bad uses, is a priority. It should first of all engage kids from a very early age: a new saga of cautionary tales … would help.
  • We are trying to understand these new systems and the optimization of their functions. What happens when the system causes externalities? Optimization systems make population-wide decisions based on a bunch of signals. Example of Uber: feedback loops to maximize their profits. The app sensing cars and routing you out of the traffic caused problems in smaller streets; the app wasn’t able to change the algorithm, the municipality can’t do anything, and the citizens install roadblocks as a solution. Can we build systems to help people against optimization systems? POTs (protective optimization technologies). Facebook matching people to advertisers: people are subjected to data getting to them regardless of how they interact on the platform. Agency is very limited, because it’s about the whole infrastructure.

  • Workers at Amazon wanted to strike, but Amazon knew and brought in 200 extra people so they could continue. After that they created another warehouse, so operations couldn’t be disrupted.

  • There are going to be local wins for a while, but it will take time for a larger win.

  • Most of the work in AI is around migration and race. We shouldn’t go only the labor route. The migrants are getting organized, but they don’t have labor unions.

  • If actors say they need algorithms, we need to ask them why.

  • We need to ask different questions; we need to ask a fundamental question: why are we buying into these systems, and who holds the power? Individual agency is extremely limited, so we need to look collectively at taking action.

  • There is not going to be a single collective that will resolve this.

  • I’m more interested in this: if we know what is wrong, how can we mobilize people? And right now we are asking: what is the thing that we are mobilizing about?

  • A lot of the time people are faced with systemic constraints or binary boxing-in. For example: employment models aren’t working, you can’t break out of them on your own, and you have to work with others on new ideas to make it work.

  • You can only break through these models and narratives by working together and creating new narratives.

  • When we know what we want to do and where to go, we can create a new future, something completely different from what the AI is predicting, perhaps. Thus we create a space that is more free.

  • Participatory art - art that throws away the separation between the art and the spectator. Because of the consumerist society… the point is not to create “great art” but a great transformative experience together with other people. Example of the Borderland: people come into this world, the mindset changes, and lots of the traditional concepts start to break. Creating a completely new reality. Tackling the imposed machine. There is something there, but we don’t know what yet.

  • Something as vague as “my data” became very big; most people understand there is something about data we should care about. We only need this data once, but later it becomes governed by something different. The algorithm becomes irrevocable. How do you challenge that? Even if you own your data, people will sell it.


Post Workshop Participant reflections


Matthias

Original Post

  • Value-based software architecture. Scuttlebutt deprioritized making their software available for multiple devices based on their values of “this being software that is made for people who only have one device”. That’s a major architectural decision, which might not be possible to adjust later without rewriting the whole software. So they really poured their values into their software. In comparison, politics is not yet good at getting its values implemented into technological developments. So we need a better process that implements our values into our technology. It’s about a process strong in accountability.
  • What is human? If an actor (any party / organization) says they are human centric, they often do not even define what “human centric” means in their case. For example, “human” in “European human centric Internet” is left undefined. This generates conflict potential, as it stays so general.
  • The economics behind AI. There’s an interesting study of “the cost of developing universal chips after the end of Moore’s law”. It means that now that we’re at the end of Moore’s law, the money sunk into developing faster chips is unlikely to come back. So instead the industry took a risk by developing specialized chips. There are two main types of such chips: AI and blockchain. That’s the only reason why AI became a hype and we’re talking about it: it is pushed on us, because industry needed a new profitable outlet for investments, and high levels of capital investment are backing AI already. “If we are not buying it, it’s going to go down. If we are not buying it, we are going to go down.” We are still in the process of making that choice of whether we (also: the Commission) want to invest money into AI.
  • Good and bad AI architecture. Let’s differentiate between “AI for research and solutions” and “AI for the production of services”. The first type is benign research aimed at solving intricate problems, for example done by universities. The second type is commercial SaaS software that scoops in data out of the profit interest of the company. Maybe Google Maps might in the future adapt your routing so that you see adverts of parties that paid Google for an audience for these adverts. This means the problem is about the economics of who runs the datacenters: Amazon and Microsoft and Google built “clouds”, data centers for people to run their applications. Due to economies of scale, they provide the cheapest solution, but are also able to monitor and keep the data going through them. This is an undemocratic process for plain economic reasons, and it’s a hard problem to crack.
  • The structure defines the function. The type of governance structure defines how a new technology gets used. So it may be that we have allowed the wrong governance structures to happen, which will lead to the wrong outcomes of AI technology. In addition, Google and Facebook have been advertising companies but are not anymore – trying to rein them in as advertisers with regulation is already no longer applicable; instead we should rein in their new structure of “AI first” companies.
  • Is AI anti-human by definition? What AI (as rebranded Bayesian statistics) does is to put the individual differences between human beings (everyone’s “spark of the divine”) into the epsilon, the error term at the end of the equation. That makes AI non-human-centric by design. Because the definition of human for an individual is “that which cannot be predicted by AI, which is not part of the ‘normal’”.
  • On tech interfering with relationships. Intermediating the patient-doctor relationship with data collecting and analyzing systems has degraded the value of that relationship. Because medicine is not about diagnosis, but prognosis: improving a patient’s future condition, and that is a negotiation with the individual, and that individual might be very much non-average, refusing certain treatments etc., and should have and keep a right to that individualness. That still allows for tech systems that could benefit relationships – it’s just that the tech systems we have currently in medicine do not do that.

Personal reflections

I want to clearly distinguish between systems of exploitation and the technology itself. Any system of exploitation, including capitalism, will find and use any new technological option in exploitative ways. The history of digital innovation, and of media innovation more generally, is quite rich in that: before AI for behavior prediction it was targeted advertising, tracking, mass surveillance programs by governments, weaponized drones, propaganda micro-targeting, mass media used for state propaganda etc…

That’s purely a social problem, not a technological one. Adding one more technology cannot make it much worse … nukes are already around, so what technology could offer significantly worse potential outcomes? So to deal with it, you don’t have to make legislation for AI, but against capitalism.

AI in itself is a nice tool in the toolkit, and easily allows for beneficial use. Personal example: my open source coffee sorter project uses deep learning based image classifiers to relieve small-scale farmers from sorting their coffee by hand.
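
As an illustration of what such a classifier involves, here is a minimal, hypothetical sketch of a “good bean vs. defective bean” image classifier built on a pretrained network. The class names, folder layout and hyperparameters are assumptions made for the example, not taken from the actual project.

```python
# Hypothetical sketch of a two-class bean image classifier (PyTorch/torchvision).
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Standard ImageNet-style preprocessing for a pretrained backbone.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Assumed folder layout: beans/train/good/*.jpg and beans/train/defect/*.jpg
train_set = datasets.ImageFolder("beans/train", transform=preprocess)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

# Reuse a pretrained ResNet and replace its last layer with a 2-class head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):                      # a few epochs, as a sketch
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```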

Inge

Original Post.

Thanks so much @nadia for organizing such an important event on such a crucial topic. For me the most important takeaways for now (as you said, we’ve just started scratching the surface of something so detrimental) are the following:

  1. we need to increase tech literacy about the systems in place: if we (individuals, governments, policymakers) don’t know how the black boxes work, how do we know that what they promise us is actually what they deliver? How can we question outcomes if we aren’t informed about the manner in which the systems arrive at conclusions?
  2. we also need to understand the tipping point into creating action: when was the last time you really got upset and demanded change? The problem lies again with the fact that we often don’t even know about the injustices (from increased traffic to the inability to speak to people instead of machines). This topic, I think, relates to the first point. We can’t be angry to the point of taking action (protests, etc.) if we don’t even realize/understand where the injustice is coming from. Side note: we may even want to take it further: should we use certain systems if we don’t know how they work?
  3. If actors say they need certain systems, we need to ask them why. Why not do it by hand? The idea is that these systems will solve everything, but they are just accelerating broken systems.

Alberto

Original Post

In general, the group felt the need to spend quite some time getting a grip on a shared idea of what AI actually is, and how it works, before it could discuss its regulation. Occasionally, the discussion veered into being quite technical, and not everybody could follow it all the time, even if we had an over-representation of computer scientists and digital activists. This is in itself a result: if this crowd struggles to get it, democratic participation is going to be pretty difficult for the general public.

We used the Chatham House rule, so in what follows I do not attribute statements to anyone in particular.

We kicked off by reminding ourselves that the new von der Leyen Commission has promised to tackle AI within the first 100 days of taking office. Brussels is mostly happy with how the GDPR thing played out, that is by recognizing the EU as the de facto “regulatory superpower”. The AI regulation in the pipeline is expected to have a similar effect to that of GDPR.

Participants then expressed some concern around the challenges of regulating AI. For example:

  • A directive may be the wrong instrument. The law is good at enshrining principles (“human-centric AI”), but in the tech industry we are seeing everyone, including FAANG, claiming to adhere to the same principles. What we seem to be missing is an accountable process to translate principles into technical choices. For example, elsewhere I have told the story of how the developers of Scuttlebutt justified their refusal to give their users multiple-device accounts in terms of values: “we want to serve the underconnected, and those guys do not own multiple devices. Multiple-device accounts are a first-world problem, and should be deprioritized”.
  • The AI story is strongly vendor-driven: a solution looking for a problem. Lawmaking as a process is naturally open to the contribution of industry, and this openness risks giving even more market power to the incumbents.
  • AI uses big data and lots of computing power, so it tends to live in the cloud as infrastructure. But the cloud is itself super-concentrated; it is infrastructure in a few private hands. The rise of AI brings even more momentum to the concentration process. This brings us back to an intuition that has been circulating in this forum, namely that you need antitrust policy to do tech policy, at least in the current situation.
  • The term “AI” has come to mean “machine learning on big data”. The governance of the data themselves is an unsolved problem, with major consequences for AI. In the health sector, for example, clinical data tend to be simply entrusted to large companies: the British NHS gave them to DeepMind; a similar operation between the Italian government and IBM Watson was attempted, but failed, because data governance in Italy is with the regions, and they refused to release the data. We learned much about the state of the art of the reflection on data governance at MyData2019: to our surprise there appears to be a consensus among scholars on how to go about data governance, but it is not being translated into law. That work is very unfinished, and it should be finished before opening the AI can of worms.
  • AI has a large carbon footprint. Even when it does improve on actual human problems, it does so at the cost of worsening the climate, not in a win-win scenario.
  • The Internet should be “human-centric”. But machine learning is basically statistical analysis: high-dimensional multivariate statistical models. When it is done on humans, its models (for example a recommendation algo) encode each human into a database, and then model you in terms of who you are similar to: for example, a woman between 35 and 45 who speaks Dutch and watches stand-up comedy on Netflix. Everything not standardizable, everything that makes you you, gets pushed to the error term of the model. That is hardly human-centric, because it leads to optimizing systems in terms of abstract “typical humans” or “target groups” or whatever.

As a result of this situation, the group was not even in agreement that AI is worth the trouble and the money that it costs. Two participants argued the opposite sides, both, interestingly, using examples from medicine. The AI-enthusiast noted that AI is getting good at diagnosing medical conditions. The AI-sceptic noted that medicine is not diagnosis-centric, but prognosis-centric; it has no value if it does not improve human life. And the prognosis must always be negotiated with the patient. IT in medicine has historically cheapened the relationship between patient and healer, with the latter “classifying” the former in terms of a standard data structure for entry into a database.

Somebody quoted recent studies on the use of AI in medicine. The state of the art is:

  • Diagnostic AI does not perform significantly better than human pathologists. (Lancet, arXiv)
  • Few studies do any external validation of results. Additionally, deep learning models are poorly reported. (Lancet)
  • Incorrect models bias (and therefore deteriorate) the work of human pathologists. (Nature, arXiv)
  • There are risks that AI will be used to erode the doctor-patient relationship. (Nature)

Based on this, the participant argued that at the moment there is no use case for AI in medicine.

Image credit: XKCD

We agreed that not just AI, but all optimization tools are problematic, because they have to make the choice of what, exactly, gets optimized. What tends to get optimized is what the entity deploying the model wants. Traffic is a good example: apparently innocent choices about what to optimize for turn out to have huge consequences. When you design traffic, do you optimize for driver safety? For pedestrian safety? Or to minimize time spent on the road? Airport layout is designed to maximize pre-flight spending: after you clear security, the only way to the gate goes through a very large duty free shop. This is “optimal”, but not necessarily optimal for you.
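
A toy calculation can make this concrete. All numbers below are invented for illustration (they are not from the workshop): the same candidate signal timings are ranked very differently depending on whether we optimize for driver delay, pedestrian delay, or a crude safety proxy.

```python
# Toy illustration: the "optimal" traffic design depends entirely on the objective.
import numpy as np

green_share_for_cars = np.linspace(0.1, 0.9, 81)   # fraction of the cycle given to cars

driver_delay = 1.0 / green_share_for_cars                 # cars wait less with more green
pedestrian_delay = 1.0 / (1.0 - green_share_for_cars)     # pedestrians then wait more
# Crude safety proxy: long pedestrian waits encourage risky crossings.
risky_crossings = np.maximum(pedestrian_delay - 2.0, 0.0) ** 2

objectives = {
    "minimize driver delay": driver_delay,
    "minimize pedestrian delay": pedestrian_delay,
    "minimize risky crossings": risky_crossings + 0.1 * driver_delay,
}

for name, cost in objectives.items():
    best = green_share_for_cars[np.argmin(cost)]
    print(f"{name}: give cars {best:.0%} of the cycle")
```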

We next moved to data governance. Data are, of course, AI’s raw material, and only those who have access to data can deploy AI models. A researcher called Paul-Olivier Dehaye wants to model the discrimination of certain workers. To do this, he needs to pool the personal data of different individuals into what is called a “data trust”. Data trusts are one of several models for data governance being floated about; the DECODE project’s data commons are another.

In this discussion, even the GDPR’s success appears to have some cracks. For example, Uber is using it to refuse to give drivers their data, claiming that would impact the riders’ privacy. A participant claimed that the GDPR has a blind spot, in that it has nothing to say about standards for data portability. U.S. tech companies have a project called Data Transfer, where they are dreaming up those standards, and doing so in a way that will benefit them most (again). Pragmatically, she thought the EU should set its own standards for this.

We ended with some constructive suggestions. One concerned data governance itself. As noted above, the MyData community and other actors in the tech policy space have made substantial intellectual progress on data governance. Were this progress to be enshrined into EU legislation and standards setting, this would maybe help mitigate the potential of the AI industry to worsen inequalities. For example, saying “everyone has a right to own their data” is not precise enough. It makes a huge difference whether personal data are considered to be a freehold commodity or an unalienable human right. In the former case, people can sell their data to whomever they want: data would thus be like material possessions. In this case, market forces are likely to concentrate their ownership in few hands, because data are much more valuable when aggregated in humongous piles of big data. But in the latter case, data are like the right to freedom. I have it, but I am not allowed to sell myself into slavery. In this scenario, data ownership does not concentrate.

Another constructive suggestion concerned enabling a next-generation eIDAS, to allow for “disposable online identities”. These are pairs of cryptographic keys that you would use for the purpose of accessing a service: instead of showing your ID to the supermarket cashier when you buy alcoholic drinks, you would show them a statement digitally signed by the registrar that says more or less “the owner of this key pair is over 18”, and then sign it with your private key. This way, the supermarket knows you are over 18, but does not know who you are. It does have your public key, but you never use that key pair again – that’s what makes it disposable.
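
A minimal sketch of this flow, purely illustrative and not an eIDAS specification: a registrar signs a statement binding the attribute “over 18” to a one-off key pair, and the verifier (the supermarket) checks the registrar’s signature plus a proof that the customer controls the key, without ever learning who the customer is. The message formats and the challenge value are invented for the example.

```python
# Illustrative sketch of a "disposable identity" attribute attestation (Ed25519).
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives import serialization

# The registrar has a long-term key; its public part is known to verifiers.
registrar_key = Ed25519PrivateKey.generate()
registrar_pub = registrar_key.public_key()

# The customer generates a fresh, disposable key pair for this one purchase.
customer_key = Ed25519PrivateKey.generate()
customer_pub_bytes = customer_key.public_key().public_bytes(
    encoding=serialization.Encoding.Raw, format=serialization.PublicFormat.Raw)

# The registrar attests: "the owner of this key pair is over 18".
attestation = b"over-18:" + customer_pub_bytes
registrar_signature = registrar_key.sign(attestation)

# At the till, the customer presents the attestation, the registrar's
# signature, and a signature over a shop-chosen challenge proving key control.
challenge = b"purchase-7f3a"                      # hypothetical nonce from the shop
customer_signature = customer_key.sign(challenge)

# The shop verifies both signatures; verify() raises InvalidSignature on failure.
registrar_pub.verify(registrar_signature, attestation)
customer_key.public_key().verify(customer_signature, challenge)
print("age attestation verified; customer identity never revealed")

# Afterwards the customer discards the key pair; that is what makes it disposable.
```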

Further suggestions included legislating on mandatory auditability of algorithms (there is even a NGO doing work on this, AlgorithmWatch), investments in early data literacy in education, and designing for cultural differences: Europeans care more about privacy, whereas many people in Asia are relatively uninterested in it.

Fabrizio

In order to move ahead and make the most of all the ideas and human resources currently working on AI, we need both to understand where potential mobilization is taking place or might take place, and to have a conceptual framework to guide it.
• Since AI can produce either bad effects (harm) or good effects … the appropriate conceptual framework should offer a definition of bad/good so as to
• assess current uses of AI,
• mobilize people both “against” bad uses and “for” good uses
• design, experiment and revise alternative “good” uses
• Sen’s concept of Substantial Freedom is one possible choice: the capability approach, where we consider “good” all moves/actions that improve the capacity/capability of every person to do what they consider valuable (not futile capacities, but ones central to one’s life: dignity, learning, health, freedom to move, human relations, …)
• This approach does not encapsulate what is “bad” or “harmful” in a rigid box, but calls for a participatory/societal assessment of how the general principle should be implemented, an assessment that can be continuously updated through public debate
• Not so new: this approach was used by lawyers in the corporate assessment of “just”/“unjust” fiduciary duties and the business judgement rule in the US
• Which reminds us that the participatory process/platforms can be of different kinds:
• In the judiciary
• In working councils
• In town councils a la Barcelona
• Mixing dialogical/analogical debate

Who are the potential ACTORS of mobilization?
• Organised Labour: automation, bad jobs, gig economy, low wages
• Women: AI machism
• Kids: sexual abuse, pornography
• Citizens: mobility
• Citizens: health … a dimension of life where people are becoming aware of bad uses (insurance, use of DNA)

Therefore, one should move forward according to the following sequence:
• Identify, country by country or even place by place, where people’s concern about bad uses (or about forgone good uses) is high and is being organised: i.e. where there is a potential demand for “competence support”, at both the conceptual and the technological level
• Concentrate on these contexts and provide them with the conceptual framework
• Turn the “pars destruens” into a “pars construens”
• Build, at EU level, a network that allows horizontal comparison of threats, actions and results across the same issues
• Ensure, at EU level, the availability of a competence centre that deals with (not necessarily solves, but at least identifies and tackles) the meta-obstacles preventing the implementation, or curtailing the survival, of “good” uses.
Complementary activities that emerged from debate:
• Following the “substantial freedom” framework, a code of behaviour should be established and monitored, by which all categories playing a role in developing algorithms - scientists developing algorithms, politicians supporting or calling for their use, public and private administrators using them - should ask themselves: what is the effect on the capabilities of the people affected by the algorithms?
• Analogic/dialogic, online/offline: experimenting with platforms that combine the two is a priority. In particular, it should be standard practice for public administrations and politicians to create dialogic platforms where the above code of behaviour is checked.
• Education and awareness-raising on the implementation of the above conceptual framework, so as to identify good/bad uses, is a priority. It should first of all engage kids from a very early age: a new saga of cautionary tales … would help.


PRE-FESTIVAL INTERVIEWS

Overview of all 10 Stories: Generic audience

Experts working at the cutting edge of tech, policy and human rights spaces call for greater consideration to be given to the way that AI is altering the very fabric of society. Both academics and policy makers agree that current AI systems risk amplifying existing human biases that entrench inequality - not least because of the widespread, and misguided, perception that data is inherently neutral. Systems for the reporting of sexual assault, for example, are encoded with developers’ assumptions about sexual violence which do not grasp the complexity of victims’ experiences. Those working and researching in the field caution that AI systems are only as unbiased as the humans who build them, and that continuing to give AI undue weight will cause serious harm to the most vulnerable in society.

Across medicine, law, entrepreneurship and gender studies, concern is growing that the pressure to innovate for its own sake comes at great risk to privacy, protection and human rights. Experts caution that companies in both the public and private sectors are rushing to implement technology they don’t fully understand, and that it is crucial due care and consideration is taken when building these systems to ensure they are value-driven and accountable. Unchecked digitisation in the fields of medicine, social work and the reporting of sexual assault fails to recognise human needs which are more nuanced, individualised and unpredictable than AI can fathom. It is critical we ensure that AI is designed and used in such a way that it serves society’s needs – not the other way around.

Examples from each story

  • Marco: In predictive medicine, there are so many biological variables that scaling data up can decrease precision
  • Corinne: complex encryption systems built to protect an activist were found to be lacking when a police officer simply stole his phone from his hand.
  • Peter:
  • Seda:
  • Hugi:
  • Kate:
  • Justin: mentioned two examples in a comment somewhere…
  • Oliver:
  • Fabrizio:
  • Alberto:

Social media updates for media-friendly summary articles

  1. Self-reinforcing loop of data puts human rights at risk

  2. Unchecked digital expansion could be a force for democratisation – or further entrench inequality

  3. It is critical that AI serves people – not the other way around.

  4. Discourse must move from technology to societal impact

  5. Perception of data as inherently neutral is dangerous to society

  6. Putting human rights at the forefront of digital expansion is critical

  7. One-size-fits-all approach to AI in public services breeds division and intolerance

  8. Interoperability key to sustainable innovation

  9. Meaningful debate on AI must be prioritised above innovation.

  10. Digital space currently failing to protect women and victims of sexual violence

The rush to implement AI too widely and simplistically could have disastrous consequences

A conversation with Marco Manca

Marco Manca is an interdisciplinary researcher in mathematics and informational systems with an educational background in medicine. He founded the SCimPulse Foundation, which he still directs, and is also part of several scientific organisations and commissions, including NATO’s working group on human control over autonomous systems.

Marco feels there is a lot of excitement about AI and a push to accelerate its implementation widely - but that it is crucial we consider AI a “nifty tool” to use with awareness, rather than an impeccable “leader” that must not be questioned. AI systems are only as good as the data inputted and the questions asked of them by humans. This means that the conclusions returned are not free of human biases, but may in fact amplify them. Essentially, as they are used now, AI systems simply return the same results as humans would, just “faster and dumber.”

This is a concern because of the rush to implement AI, particularly in the field of medicine. In medicine there is an expectation of precision, but with so many biological variables, the more precise you try to be the more individual cases diverge, so large-scale information potentially becomes less valuable. For example, in the 1970s various tools were introduced to help doctors predict the likelihood of certain diseases, but attempts to refine these profiles over the years have hit a barrier. Just as you could play a lottery with 1/1000 odds every day for a thousand days and still not win, there is a crucial difference between “the destiny of the person in front of you right now” and “the destiny of every similar person.”
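
The lottery analogy can be made concrete with two lines of arithmetic: at odds of 1/1000 per draw, someone who plays every day for a thousand days still has better than a one-in-three chance of never winning, which is exactly the gap between population-level odds and the individual case.

```python
# Probability of never winning in 1000 independent draws at 1/1000 odds each.
p_no_win = (1 - 1 / 1000) ** 1000
print(round(p_no_win, 3))   # 0.368 -- the population-level rate says little about one person
```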

His argument is not that we should not be developing AI, but that we must consider how we develop and implement it and how we contextualize the information it gives us. If we simply scale up the information we work with now without being informed about the risks, we risk causing serious damage.

Participate in the conversation with Marco here: What does it take to build a successful movement for citizens to gain control over when, how and to what use automated systems are introduced and used in society?

AI is changing the fabric of society - it is crucial to ensure it serves people, not the other way around.

A conversation with Seda F. Gürses

With an undergrad degree in international relations and mathematics, Seda is now an academic at Delft University, focussing on how to do computer science differently - utilising an interdisciplinary approach to explore concepts of privacy and surveillance, while also looking at what communities need. They were led into this field of study by their fascination with the politics of mathematics and the biases contained within seemingly neutral numbers.

The technological landscape has changed enormously in the past few years: from static software saved on a disk and only updated every once in a while, to software and apps delivered as services and so constantly updated and optimised. A whole new host of privacy and security issues has arisen, and with it the need for a computer science that secures and protects the needs of its users.

The negative consequences of prioritising optimisation over user experience can be seen in Google Maps and Waze, which send users down service roads to avoid freeway traffic. They don’t care that this has an adverse impact on the environment and local communities, or even that it actually causes congestion on smaller roads. Further, Uber has optimised its system to outsource risk to its workers: instead of paying people for the time they work, Uber offers them a system that tells them when they are most likely to get customers, so that they can manage their individual risk.

When this kind of tech injustice is applied to public institutions such as borders and social welfare systems, the discrimination embedded in those very systems means we are changing the fabric of society without having the necessary discussions about whether that is something we want to do. We need to stop talking about data and algorithms and focus on the forms of governance we want these technologies to have. It is crucial that the computational infrastructure boosted by AI serves people, not the other way around.

Participate in the conversation with Seda here: coming soon

Tech is not the simple solution to complex social situations.

A conversation with Peter Bihr

Peter Bihr co-founded the ThingsCon community, which advocates for a responsible, human-centric approach to the Internet of Things. Smart Cities, where the digital and physical meet and where algorithms actively impact our daily lives, are an important focal point of his work.

He proposes reframing the Smart City discourse (currently dominated by vendors of Smart City tech) away from the technology and towards a focus on societal impact. What better urban metrics can we apply to cities increasingly governed or shaped by algorithms? Such an analytical framework would be the key to unlocking a real, meaningful debate. Smart City policies must be built around citizen/digital rights and human rights, with an emphasis on participatory processes, transparency and accountability.

At the most recent ThingsCon conference, Manon den Dunnen shared her experience of the unintended, horrific consequences of tech going wrong: when police officers save the phone numbers of both victims and suspects, Facebook’s algorithms then suggest them to one another as friends.

Further, several studies have found policing and justice-related algorithms to contain racist data points (including some deemed illegal by courts that nonetheless remained in the data sets). And the policing algorithm in NYC measures effectiveness with metrics so simplistic that they created an incentive for officers to report selectively (for example, the systematic intimidation of rape victims into changing their charge from rape to a more minor offence).

Participate in the conversation with Peter here: How can we put humans/citizens first in our smart city policies?

Unchecked digital expansion could entrench existing biases and power inequality.

A conversation with Corinne Cath-Speth

With a background in human rights and policy, Corinne Cath-Speth worked as a policy officer for a human rights NGO in London before coming to the Oxford Internet Institute and the Alan Turing Institute to pursue her PhD. Her research focuses on human rights advocacy efforts within Internet governance, with a broader interest in how human rights NGOs are responding to the new (and old) challenges raised by emerging technologies.

In working with human rights activists, CCS saw that digital technologies - like social media - can give the plight of activists more visibility, but that these same technologies often entrench existing power inequalities and biases. She became interested in studying what happens when activists try to change the infrastructure of the internet itself, rather than simply use it. A number of well-known human rights organizations, like the ACLU and EFF, actively do so by contributing to Internet governance fora. She found that these organizations are welcome and can operate in these spaces with relative ease, given their open and multistakeholder nature. At the same time, she also saw that while getting the tech “right” is an important part of the puzzle of human rights advocacy in the digital age, it is also a narrow frame through which to understand the broad spectrum of social concerns raised by networked technologies.

CCS’s work in Internet governance also led her to consider human rights advocacy in AI governance, as AI systems are raising a host of questions regarding privacy, safety, anti-discrimination and other human rights. One of the problems with developing AI advocacy programs is that many of these systems are developed by private companies, so it is difficult to gain access to their technology to examine and understand it. Many NGOs are therefore calling for the regulation of AI systems, but are facing pushback, with companies arguing that it hampers innovation. Yet, it is this same “innovation” that encourages many governments to deploy AI systems. A drive for “innovation” for innovation’s sake is particularly concerning when it encourages governments to step into technologies that they don’t fully understand or even need.

Obviously, a lot of human rights NGOs have been worried about these various dynamics for a while and are consistently raising their concerns, sometimes by bringing in academic work to demonstrate these issues. Human Rights Watch, for example, has a great program, as do Amnesty International, Privacy International and Article 19. Several of the largest human rights NGOs are focusing on issues of AI systems and bias. But they’re also forced to play whack-a-mole as the application of AI systems becomes more common. How to focus your resources? Which companies and applications are most concerning? Which solutions are most tractable and comprehensive? Do we need sectoral guidelines, or guidelines which focus on impact? Do we need self-regulatory ethics frameworks or hard data protection frameworks? All of the above? These are the questions I see a lot of NGOs grappling with, and questions I hope to discuss with you on this platform.

Participate in the conversation with Corinne here: What does the future of civil society advocacy look like, given the prevalence of these digital technologies and their impact on the work that civil society is currently doing?

Self-reinforcing loop of data could put human rights at risk.

A conversation with Justin Nogarede

Justin Nogarede works for the Foundation for European Progressive Studies and was previously at the European Commission, focussing on competition law and European regulation.

As a trainee in the law application unit at the European Commission, he became aware of the issues involved in ensuring member states comply with EU law, finding that the staff and resources needed to enforce directives are often not available - for example, the directive on data protection has existed since 1995, but was not widely enforced. Justin now focuses more on data governance, and is finding that as new digital infrastructures are rolled out, they are driven by narrow efficiency concerns and are not accountable. Looking into these new infrastructures is a great opportunity to make the system more participatory and accountable - but we have to take it.

Feeding existing data into AI systems can create problems. Predictive policing, for example, has been shown to drive more officers into wealthy areas, because the data show a higher rate of arrests in those areas; the data thus create a self-reinforcing loop. Further, digital systems often rely on a binary logic, which healthcare and social problems simply don’t fit. The key problem is that data is a simplification of the real world. Some AI systems may also support a conservative bias, such as when they are used to predict which offenders are most likely to reoffend.
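
As a rough illustration of that loop (the numbers, the two areas and the 70/30 allocation rule are invented for the sketch, not drawn from any real predictive-policing system), the simulation below starts from two areas with identical underlying crime rates and a single chance extra arrest; the allocation rule then manufactures a persistent gap in the recorded data.

```python
# Toy simulation (invented numbers, not any real policing system) of the
# self-reinforcing loop: the area with more recorded arrests last year gets
# the larger share of patrols this year, which in turn produces more records.
true_crime_rate = {"area_a": 0.10, "area_b": 0.10}  # identical underlying rates
recorded = {"area_a": 6, "area_b": 5}               # one extra arrest, by chance

for year in range(1, 6):
    hotter = max(recorded, key=recorded.get)
    patrols = {a: (70 if a == hotter else 30) for a in recorded}  # 70/30 split
    # Expected recorded arrests: you can only record what patrols are present to see.
    recorded = {a: round(patrols[a] * true_crime_rate[a], 1) for a in recorded}
    print(year, patrols, recorded)

# From year 1 onwards, area_a is recorded as having 7 arrests to area_b's 3 --
# a gap produced entirely by the allocation rule, not by the underlying crime.
```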

Regulation of digital infrastructure would be a step in the right direction, and the argument that it would stifle innovation is weak: technological advances must make sense and make lives better. It may not be possible to have 100% compliance, but more involvement of public authorities (even at the local level) would be a good step, as would more transparency over how these technologies function. Why is all this innovation not channelled into ways for people to live a better life?

Participate in the conversation with Justin here: Why is all this innovation not being channeled into ways for people to help them live a better life?

Distributed systems promise great possibilities - and challenges.

A conversation with Hugi Asgeirsson

Earlier this year, Hugi was in Berlin for the Data Terra Nemo conference, which focussed on decentralised web applications hosted without traditional servers, an approach that allows for a lot of interesting applications.

They were inspired by the human-centric community that has grown up around ‘gossip’ protocols like Scuttlebutt. It seems to be forming a playground where new and radical ideas can be tested and implemented. The original developer of Scuttlebutt, Dominic Tarr, describes his MO as: “not to build the next big thing, but rather to build the thing that inspires the next big thing, that way you don’t have to maintain it” and this seems to have set the tone for Scuttlebutt itself.

One of the core elements of Scuttlebutt is that users can host data for other people on the network without being directly connected to them. This has the positive effect that users in countries where internet usage is highly restricted can connect via other users - though on the other hand, this also means that users could unwittingly be hosting information they would rather not propagate. There have been instances of the Norwegian alt-right using Scuttlebutt to communicate. Scuttlebutt has been working to address this issue but solutions are imperfect so far.
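
A very rough sketch of why gossip replication has this side effect (a toy model, not the actual Secure Scuttlebutt protocol or its replication rules): when two peers sync, each copies the feeds the other is carrying, so content can end up on machines whose owners never chose to follow its author.

```python
# Very rough sketch (not the real Scuttlebutt protocol) of why gossip
# replication means you may end up hosting content from people you never
# chose to follow: peers copy each other's feeds when they sync.

follows = {                       # who each peer has chosen to follow
    "ana": {"bob"},
    "bob": {"carl"},
    "carl": set(),
}
feeds = {p: {p: [f"{p}'s posts"]} for p in follows}   # each peer stores its own feed

def sync(a, b):
    """When two peers meet, each copies every feed the other is carrying."""
    merged = {**feeds[a], **feeds[b]}
    feeds[a] = dict(merged)
    feeds[b] = dict(merged)

sync("bob", "carl")   # bob now carries carl's feed
sync("ana", "bob")    # ana now carries carl's feed too, without ever following carl
print(sorted(feeds["ana"]))   # ['ana', 'bob', 'carl']
```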

The bottom line is that distributed systems such as Scuttlebutt are both democratising and empowering and they come with a whole new set of possibilities and challenges.

Participate in the conversation with Hugi here: Data Terra Nemo: First report & Scuttlebutt

Open-source technology and interoperability are key to sustainable innovation

A conversation with Oliver Sauter

There is an accepted “law” of entrepreneurship: in order to build something valuable, you have to be ten times better than what already exists (as shown by Google+, which arguably had some better features than Facebook but failed to tempt people away). Why does this law exist and what can we do to change it?

BlackforestBoi believes it comes down to switching costs: the time and mental effort needed to switch outweigh the additional benefit on offer unless that benefit is roughly ten times greater. Such a growth requirement has produced some great leaps in innovation - but these come with downsides. How would the world look if we focused more on incremental innovations?

What is holding us back? The way companies make money, for one: in the second quarter of 2018, Facebook lost $120bn (billion!) in stock value within 48 hours, the biggest loss of any company in history. The reason: it posted its slowest growth since its founding, while still making $5bn in profit that same quarter and growing by 42% year on year. Secondly, data and social lock-ins create counter-incentives to the interoperable formats that would make it easier for users to migrate between services or integrate them.

Breaking this dynamic would require tackling the problem from multiple angles: allowing users to move freely between services, creating economic models that reward quality of service rather than simple growth, and ideally adopting open-source software to allow companies to build on one another’s work.

WorldBrain (dot) io is building open-source software in an attempt to enable incremental innovation, the foundation of which is Memex, an open-source privacy tool. Interoperability is baked deep into the core of Memex; it is fully open-source, and WorldBrain (dot) io has no stock value, so it is entirely focussed on building a sustainable service.

Participate in the conversation with Oliver here: Startups’ grand illusion: You have to be 10x better than whats there

Civic organisations are key to turning AI technology into a force for civil justice

A conversation with Fabrizio Barca

With a varied background in banking and the treasury, and more recently in inequalities and justice, Fabrizio previously worked at the EU and now works with a civic organisation, the Forum on Inequalities and Diversity.

The current crisis in Western countries is driven by inequality - in particular the paradox that we have the technology to create equality, yet instead it is producing an unprecedented concentration of knowledge and power in very few hands. This must be addressed by putting political pressure on the issues. State-owned enterprises and collective platforms, where people can pool data that everyone has access to, could turn these technologies into forces for social justice.

A one-size-fits-all approach to AI in public administration grows resentment. It deprives people of the most important thing - human connection and a sense of being recognised - which breeds intolerance, division and a loss of trust in democracy. Fabrizio is just coming to understand how effective civic organisations can be, not just in advocacy work but in taking action to shape services in local areas. He hopes to gain even more understanding by discussing these topics with a mix of people from different backgrounds.

Learn more: Conversation with Fabrizio Barca, Founder, Forum on Inequalities and Diversity | former Director General, Italian Ministry of Economy & Finance

Digital spaces currently failing to protect women and victims of sexual violence

A conversation with Kate J Sim

A PhD researcher at the Oxford Internet Institute, katejsim studies the intersection of gender-based violence and emerging technologies. Her work focuses on issues of trust, gender and sexual politics, and the double-edged role of technology in facilitating connections but also targeted harassment. While organising against campus violence she personally experienced cyberharassment and a lack of support from law enforcement. More resources have become available since then, but we need to change how we conceptualise these issues and fundamentally change the design of the platforms.

She helped to form a cross-campus network that grew into a nonprofit organisation, Know Your IX. The space needs better structures in place to support mental health and protect against cyberharassment, in order to reduce burnout. Research shows again and again that women, especially women of colour, tend to self-censor and reduce their visibility in order to survive - it is crucial we put more safeguarding in place to protect them.

Digital systems designed to facilitate disclosures, collect evidence and automate reporting of sexual assault are attractive to institutions because of their efficiency - and to some extent to victims as they are perceived to be objective and neutral. However, these systems have bias encoded in them. The designers are working with their own understanding of sexual violence, which may not match victims’ experiences. Some victims don’t have the data literacy or English level to work the systems, which could compound their trauma. Further, the pressure to report is encoded into the design of these systems, but this is a misguided emphasis on a single optimal solution, which is not appropriate for all victims. De-emphasising reporting and focussing on “small data” driven by relationship building can create a structured conversation which is rich, insightful and telling.

Rather than asking how tech can be fixed for the better, the more urgent and important question is: who and what are we overlooking when we turn to tech solutions? How can we support practitioners in anti-violence space, like social workers, jurors and judges, and advocates, with data and tech literacy, so that they have control over how they interpret and act on data?

Participate in the conversation with Kate here: Can tech design for survivors? How sex, violence, and power are encoded into the design and implementation of data/AI-driven sexual misconduct reporting systems

Focus on scalar indicators is driven by need to describe reality, not change it.

A conversation with Alberto Cottica

Alberto had been hoping that ethnography-at-scale via SSNA could complement, if not replace, the indicator paradigm, but after trying to get people to assess their own willingness to pay for, say, avoiding the extinction of some type of frog in Madagascar, or lowering the PM10 content by 10%, he found convincing evidence that we could never trust our results.

He refers to James Scott’s convincing argument that scalar indicators are propelled by the modernist ideology that underpins the coalescing of modern states. This works for states, but not so much for people. Modern states have created a demand for scalar indicators, but this has more to do with their thirst for administrative order than with a drive to understand what is really going on.

Participate in the conversation with Alberto here: On assessing impact, and what Edgeryders could do in that department


NB: We need to have content tagged up with these tags to enable us to see clusters of discussions where they pop up
AI
Social justice principles
Data governance
Markets & Data
ethics
gender
Inequality technology
wellbeing digital
artificial Intelligence internet
platform
internet-of-things
welfare
machine-learning
fairness
digital-democracy
blockchain
justice
artificial-intelligence
internet-infrastructure
platform-economy
gig-economy precarity
europe
algorithms
bias

Hi @amelia, @ccs and @alberto, figured you might find this useful.