If we want to shape common sense five or ten years from now, what will be considered relevant?
What will general principles about what we consider good or bad look like, and how will they work in practice?
I look at the automation and documentation of violence registration: do universities offer support systems with a trauma-informed interface? Your claim will go through the system, be run through all the other cases, and then be matched against them…
I have trouble even explaining the problems with these systems. What they are doing is trying to categorize our experience of misconduct into binary options.
You have to have a clear definition of rape, otherwise it doesn't fit in the system.
People are shoe-horned into particular categories, but this doesn't speak to the richness of the experience they have had.
With criminal justice we see how people are categorized; wider categories of human improvement are not classified.
This is where the question of civil society is very important: it's difficult to understand how AI makes decisions and categorizes us, and we can't anticipate it. We don't even know where the data comes from or who it is being sold to.
People don’t have the tech literacy to question this. Should citizens have this knowledge?
What is harm, and what can freedom be?
What is something you are struggling with? What is out there to make our own biases less visible to the algorithms?
It is really easy to respond to this: oh, maybe if I only share part, then that could be a meaningful intervention. Sometimes it can be. Mozilla, for example, has add-ons. We can do that individually.
But maybe the scope of our change should go beyond individual actions.
Content moderation in books by Mary Gray or Sarah Roberts: tracking how tech companies move their content moderation work to India, for example, where the workers are paid low rates and required to impose Western standards on what behaviour on social media should look like. The norms are being translated by the economic models of the platforms.
The scale is so much bigger than us.
What are some of the lessons from what civil society has done in the past? What are the ways we can translate this?
How can we get emotionally engaged in the AI discussion? What do you want to push against? Where are the injustices coming out of this?
A group in LA, Stop LAPD Surveillance, works on issues ranging from criminal justice to homelessness, automating inequality… to uncover the ways in which algorithms are used for surveillance. People got together to demand that the police release their surveillance policies. They were able to uncover the full scope, pushing back against the LAPD. Intervention takes a long time and needs a lot of work.
In terms of mobilization and justice, and the perception of the actions and reactions of justice: we've seen the same thing playing out in different places for different reasons. There is an obvious injustice, and a link between that situation and the world of AI, where it is scaled up. What happens when it scales up?
Why are we putting the wellbeing of people behind proprietary systems?
There's an example of care in the States where, due to an algorithm, people's care was reduced from 8 to 4 hours. But the state didn't want to explain how the system worked, even though it was violating the rights of these people.
Another state in the US: unemployment benefits. Combining two different data sets, the system wasn't able to sync them together, so people weren't able to receive their unemployment benefits. Companies will always hide behind trade secrecy, but why does the state consider this to stand above human rights?
If actors say they need algorithms: why?
What are the reasons people are unable to act upon algorithmic issues in society?
Where is the space in which humans can have agency?
We need to ask different questions, and a fundamental one: why are we buying into these systems, and who holds the power? Individual agency is extremely limited, so we need to look at taking action collectively.
There is not going to be a single collective that will resolve this.
We have a lot of things, like roads, which are public; you can't make a private road and then declare it public. But for some reason, we don't have the same idea about data. Why not?