If we want to shape the common sense five or ten years from now, what will be considered relevant?
Technology leads to bad or good results: Principle of social justice
Amartya Sen’s principle of substantial freedom, a sustainable substantial freedom: we need to have the capacity/capability to live the life we want
As a worker, I want AI to help me avoid discrimination in subjective interviews, but not to be used to have AI discriminate against me because of the previous behavior of “my” group
My freedom should be enhanced
Putting on the table a simple methodology that allows us to understand the just and the unjust
This kind of principle is not a ready-made one; it is a methodology, an open-ended principle: it can be filled with content, case by case, only through participatory mechanisms.
Different subjective values in different contexts can lead to conflict; working case by case through participatory processes is the solution, not just regulation
We slowly develop what we consider to be just
What will general principles about what we consider good or bad look like, and how will they work in practice?
What is something you are struggling with? What is out there to make our own biases less visible to the algorithms?
How can we get emotionally engaged in the AI discussion? What do you want to push against? Where are the injustices coming out of this?
Use of AI in judicial and social welfare settings, like deciding who gets a house in housing projects, based on an algorithm. But people need to have someone recognize them as human beings, and we aren’t recognized as human beings anymore; discrimination grows because the human-to-human contact is no longer there. But the problem is that there are so many more things happening that are not implicit. When labor was exploited in capitalism, enormous change happened. But now this is being copied again, and it isn’t causing an enormous reaction. The unions are weak.
Credit for a household going to the banks; information about our behaviour. Families are charged a high rate: a reduction of social justice. Where does the right-wing voter come from? There is a big job just in making clear what is happening. What happened in the States was evident, but here we lack information; children should understand.
We need to:
Educate people on how it can be used for good and for bad
Judiciary needs to be brought into the game
We need to bring back people’s wellbeing into it.
It isn’t necessarily about tech, it’s about people. The solution isn’t marketization.
In terms of mobilization and justice: the perception of the actions and reactions of justice. We’ve seen the same thing playing out in different places for different reasons. Obvious injustice, and the link between that situation and the AI land where this is scaled up. What happens when it scales up?
Labor, women, kids, citizens, consumers. It can be a reaction to a machine, but it is also a reaction to social issues: national bargaining in the negotiation of algorithms. People are coming together against the gig economy. The platform is private, it’s proprietary, and they are angry. They know there is misuse.
In the US, if you look at the literature by women on gender, there’s an abundance, but within the EU this is not the case.
With regards to kids’ privacy, there are flashes of anger, but no movements.
With users of mobility services, there is some action from citizens. For example in Milan they are now trying to launch a platform for mobility.
On anger, you always have to have a structure. Otherwise it may be used by the wrong people or die out.
Social justice: is this algorithm being used to enhance or to reduce the capabilities of humans, across all the different fields?
Putting structure on the anger: you need to turn it into a request.
We’re not talking enough about the consequences. I personally do not feel threatened by automation, but many people are. I don’t believe in any of the numbers that go around. What is really happening concerns the quality of jobs, with repercussions on people’s lives, relations, and the capacity of algorithms… people are being told from one day to the next whether they will work. This is killing their lives. The unions, representing labor, have a reason to mobilize people. If you are looking for a bombshell I would study them more, and the role of people like us. The unions are not equipped these days; we were told they are not necessary anymore, and they are sometimes conservative. I would invest in building the capacity of the unions.
It’s like the Amazon example: they weren’t unionized, but they protested, made demands and got them, and unionized afterwards. But I am stressing that labor is a large area where things come along.
Why are we putting the wellbeing of people behind proprietary systems?
If actors say they need algorithms: why?
What are the reasons people are unable to act upon algorithmic issues in society?
Where is the space in which humans can have agency?
The three steps: let them pay taxes; mobility of capital
We are discussing this within these constraints, because we can’t change the whole world, but if there’s some solidarity between the unions, organised labor, and if we can give them higher capacity, then they won’t let people be abused; it is doable. They are very weak technologically speaking; there is no negotiation anymore.
We have a lot of things, like roads, which are public; you can’t take a road that is public and make it private. But for some reason we don’t have the same idea about data. Why not?
If you perceive that the data is being used out of control, then you boycott it. These are the results of systematic mistakes: decisions have been taken by the private corporations, and when the public sector uses them it does so for its own purposes. We talk about a problem that is not in the algorithm, but our pressure is taken out of context; the common discourse has taken the public sector out. There is even the illusion of the internet as something out of the control of the state. The state has been wrongly used. Even in education: the teachers in Italy weren’t given explanations of why they were being placed somewhere. But this is not because of the data; the algorithm was badly used.
Why are we buying into these systems, and who holds the power?
His own notes from the event:
In order to move ahead and to make the most of all ideas and human resources working these days on AI, we need both to understand where potential mobilization is taking place or it might take place, and to have a conceptual framework to move ahead.
- Since AI can produce either bad effects (harm) or good effects … the appropriate conceptual framework should offer a definition of bad/good so as to
- assess current uses of AI,
- mobilize people both “against” bad uses and “for” good uses
- design, experiment and revise alternative “good” uses
- Sen’s concept of Substantial Freedom (from his capability-based theory of justice) is a choice: the capability approach, where we consider “good” all moves/actions that improve the capacity/capability of every person to do what they consider valuable (not futile capacities, but ones central to one’s life: dignity, learning, health, freedom to move, human relations, …)
- This approach does not encapsulate what is “bad” or “harmful” in a rigid box, but it calls for a participatory/societal assessment of how the general principle should be implemented, an assessment that can be continuously updated through public debate
- Not so new: it was used by lawyers in the corporate assessment of “just”/“unjust” fiduciary duties and the business judgement rule in the US
- Which reminds us that the participatory process/platforms can be of different kinds:
- In the judiciary
- In working councils
- In town councils a la Barcelona
- Mixing dialogical/analogical debate
Which are the potential ACTORS of mobilization?
- Organised Labour : automation, bad jobs, gig economy, low wages
- Women: AI machism
- Kids: sexual abuse, pornography
- Citizens: mobility
- Citizens: health … a dimension of life where people are becoming aware of bad uses (insurance, use of DNA)
Therefore, one should move forward according to the following sequence:
- Identify, country by country or even place by place, where people’s concern about bad uses (or about forgone good uses) is high and where it is being organised: i.e. where there is a potential demand for “competence support”, both at the conceptual and the technological level
- Concentrate on these contexts and provide them with the conceptual framework
- Turn the “pars destruens” into a “pars construens”
- Build at EU level a network that allows horizontal comparison, for the same issues, of threats, actions and results
- Ensure at EU level the availability of a competence centre that deals with (not necessarily solves, but at least identifies and tackles) the meta-obstacles preventing the implementation or curtailing the survival of “good” uses.
Complementary activities that emerged from debate:
- Following the “substantial freedom” framework, a code of behaviour should be established and monitored, by which all categories playing a role in developing algorithms - scientists developing algorithms, politicians supporting/calling for their use, public and private administrators using them - should ask themselves: what is the effect on the capabilities of the people affected by the algorithms?
- Analogic/dialogic, online/offline: experimenting with platforms combining the two is a priority. In particular, it should be a practice for public administrations and politicians to create dialogic platforms where the above code of behaviour is checked.
- Education/awareness-raising on the implementation of the above conceptual framework, in order to identify good/bad uses, is a priority. It should first of all engage kids from a very early age: a new saga of cautionary tales … would help.