Summaries of 10 stories on topics relevant to AI & Justice workshop

This wiki contains summaries of 10 in-depth interviews and conversations on inequalities in the age of AI: what they are, how they work and what we can do about them. We are adding new ones on a regular basis. If you wish to be included or to discuss their contents, you can reach out to me via a private message here or by writing to nadia (at) edgeryders (dot) com

Overview of all 10 Interviews

Experts working at the cutting edge of tech, policy and human rights spaces call for greater consideration to be given to the way that AI is altering the very fabric of society. Both academics and policy makers agree that current AI systems risk amplifying existing human biases that entrench inequality - not least because of the widespread, and misguided, perception that data is inherently neutral. Systems for the reporting of sexual assault, for example, are encoded with developers’ assumptions about sexual violence which do not grasp the complexity of victims’ experiences. Those working and researching in the field caution that AI systems are only as unbiased as the humans who build them, and that continuing to give AI undue weight will cause serious harm to the most vulnerable in society.

Across medicine, law, entrepreneurship and gender studies, concern is growing that the pressure to innovate for its own sake comes at great risk to privacy, protection and human rights. Experts caution that organisations in both the public and private sectors are rushing to implement technology they don’t fully understand, and that due care and consideration must be taken when building these systems to ensure they are value-driven and accountable. Unchecked digitisation in the fields of medicine, social work and the reporting of sexual assault fails to recognise human needs, which are more nuanced, individualised and unpredictable than AI can fathom. It is critical we ensure that AI is designed and used in such a way that it serves society’s needs – not the other way around.

Social media updates for media-friendly summary articles

Self-reinforcing loop of data puts human rights at risk

Unchecked digital expansion could be a force for democratisation – or further entrench inequality

It is critical that AI serves people – not the other way around.

Discourse must move from technology to societal impact

Perception of data as inherently neutral is dangerous to society

Putting human rights at the forefront of digital expansion is critical

One-size-fits-all approach to AI in public services breeds division and intolerance

Interoperability key to sustainable innovation

Meaningful debate on AI must be prioritised above innovation.


The rush to implement AI too widely and simplistically could have disastrous consequences

A conversation with Marco Manca

Marco Manca is an interdisciplinary researcher in mathematics and information systems with an educational background in medicine. He founded the SCimPulse Foundation, which he still directs, and is also part of several scientific organisations and commissions, including the NATO working group on human control over autonomous systems.

Marco feels there is a lot of excitement about AI and a push to accelerate its implementation widely - but that it is crucial we consider AI a “nifty tool” to use with awareness, rather than an impeccable “leader” that must not be questioned. AI systems are only as good as the data put into them and the questions humans ask of them. This means that the conclusions they return are not free of human biases, but potentially amplify them. Essentially, as they are used now, AI systems simply return the same results as humans would, just “faster and dumber.”

This is a concern because of the rush to implement AI, particularly in the field of medicine. In medicine there is an expectation of precision, but with so many biological variables, the more precise you try to be the more individual cases diverge, so large-scale information potentially becomes less valuable. For example, in the 1970s, various tools were introduced to help doctors predict the likelihood of certain diseases, but attempts to refine these profiles over the years have hit a barrier. Just as you could play a lottery with 1/1000 odds every day for a thousand days and still not win, there is a crucial difference between “the destiny of the person in front of you right now” and “the destiny of every similar person.”
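To put a number on that lottery intuition, here is a quick back-of-the-envelope calculation (my own illustration, not taken from the interview): with 1-in-1000 odds per day, a thousand days of playing still leaves roughly a one-in-three chance of never winning - exactly the gap between a population-level statistic and an individual destiny.

```python
# Illustration of the lottery point above (my own sketch, not from the interview):
# an average of 1 win per 1000 plays says little about whether this particular
# player ever wins at all.
p_daily = 1 / 1000                     # odds of winning on any given day
p_never = (1 - p_daily) ** 1000        # probability of never winning across 1000 days
print(f"Chance of never winning in 1000 days: {p_never:.1%}")  # ~36.8%
```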

His argument is not that we should not be developing AI, but that we must consider how we develop and implement it and how we contextualize the information it gives us. If we simply scale up the information we work with now without being informed about the risks, we could cause serious damage.

Participate in the conversation with Marco here: What does it take to build a successful movement for citizens to gain control over when, how and to what use automated systems are introduced and used in society?



AI is changing the fabric of society - it is crucial to ensure it serves people, not the other way around.

A conversation with Seda F. Gürses

With an undergraduate degree in international relations and mathematics, Seda is now an academic at Delft University, focussing on how to do computer science differently - utilising an interdisciplinary approach to explore concepts of privacy and surveillance while also looking at what communities need. They were led into this field of study by their fascination with the politics of mathematics and the biases contained within seemingly neutral numbers.

The technological landscape has changed enormously in the past few years, from static software saved on a disk and updated only every once in a while, to software and apps delivered as services that are constantly updated and optimised. A whole host of new privacy and security issues has arisen, and with it the need for a computer science which secures and protects the needs of its users.

The negative consequences of prioritising optimisation over user experience can be seen in Google Maps and Waze, which send users down service roads to avoid freeway traffic. These services don’t account for the adverse impact on the environment and local communities, or even for the congestion this actually causes on smaller roads. Further, Uber has optimised its system to outsource risk to its workers: instead of paying people for the time they work, Uber offers them a system that tells them when they are most likely to get customers so that they can manage their individual risk.

When this kind of tech injustice is applied to public institutions such as borders and social welfare systems, the discrimination embedded in the very systems means we are changing the fabric of society without having the necessary discussions as to whether that’s something we want to do. We need to stop talking about data and algorithms and focus on the forms of governance we want these technologies to have. It is crucial that the computational infrastructure boosted by AI serves people, not the other way around.

Participate in the conversation with Seda here: coming soon


Tech is not the simple solution to complex social situations.

A conversation with Peter Bihr

Peter Bihr co-founded the ThingsCon community, which advocates for a responsible, human-centric approach to the Internet of Things. Smart Cities, where the digital and physical meet and where algorithms actively impact our daily lives, are an important focal point of his work.

He proposes reframing the Smart City discourse (currently dominated by vendors of Smart City tech) away from the technology and towards a focus on societal impact. What better urban metrics can we apply to cities increasingly governed or shaped by algorithms? Such an analytical framework would be the key to unlocking a real, meaningful debate. Smart City policies must be built around citizens’ digital and human rights, with an emphasis on participatory processes, transparency and accountability.

At the most recent ThingsCon conference, Manon den Dunnen shared her experience of the unintended, horrific consequences of tech going wrong: when police officers store the phone numbers of both victims and suspects, Facebook’s algorithms then suggest them to one another as friends.

Further, several studies have found that policing and justice-related algorithms rely on racist data points (including some deemed illegal by courts that nevertheless remained in the data sets). And the policing algorithm in NYC measures effectiveness by metrics so simplistic that they created incentives for officers to report selectively (for example, the systematic intimidation of rape victims into changing their charge from rape to a more minor offence).

Participate in the conversation with Peter here: How can we put humans/citizens first in our smart city policies?


Unchecked digital expansion could entrench existing biases and power inequality

A conversation with Corinne Cath-Speth

With a background in human rights and policy, Corinne Cath-Speth worked as a policy officer for a human rights NGO in London before coming to the Oxford Internet Institute and the Alan Turing Institute to pursue her PhD. Her research focuses on human rights advocacy efforts within Internet governance, with a broader interest in how human rights NGOs are responding to the new (and old) challenges raised by emerging technologies.

In working with human rights activists, Corinne saw that digital technologies - like social media - can give the plight of activists more visibility, but that often these same technologies entrench existing power inequalities and biases. She became interested in studying what happens when activists try to change the infrastructure of the internet itself, rather than simply use it. A number of well-known human rights organizations, like the ACLU and EFF, actively do so by contributing to Internet governance fora. She found that these organizations are welcome and can operate in these spaces with relative ease, given their open and multistakeholder nature. At the same time, she also saw that while getting the tech “right” is an important part of the puzzle of human rights advocacy in the digital age, it is also a narrow frame through which to understand the broad spectrum of social concerns raised by networked technologies.

Corinne’s work in Internet governance also led her to consider human rights advocacy in AI governance, as AI systems are raising a host of questions regarding privacy, safety, anti-discrimination and other human rights. One of the problems with developing AI advocacy programs is that many of these systems are developed by private companies, so it is difficult to gain access to their technology to examine and understand it. Many NGOs are therefore calling for the regulation of AI systems, but are facing pushback, with companies arguing that it hampers innovation. Yet it is this same “innovation” that encourages many governments to deploy AI systems. A drive for innovation for innovation’s sake is particularly concerning when it encourages governments to step into technologies that they don’t fully understand or even need.

Obviously, a lot of human rights NGOs have been worried about these various dynamics for a while and are consistently raising their concerns, sometimes by bringing in academic work to show some of these issues. Human Rights Watch, for example, has a great program, as do Amnesty International, Privacy International and Article 19. Several of the largest human rights NGOs are focusing on issues of AI systems and bias. But they’re also forced to play whack-a-mole as the application of AI systems becomes more common. How should they focus their resources? Which companies and applications are most concerning? Which solutions are most tractable and comprehensive? Do we need sectoral guidelines, or guidelines which focus on impact? Do we need self-regulatory ethics frameworks or hard data protection frameworks? All of the above? These are the issues she sees many NGOs grappling with, and questions she hopes to discuss on this platform.

Participate in the conversation with Corinne here: What does the future of civil society advocacy look like, given the prevalence of these digital technologies and their impact on the work that civil society is currently doing?


Self-reinforcing loop of data could put human rights at risk

A conversation with Justin Nogarede

Justin Nogarede works for the Foundation for European Progressive Studies and was previously at the European Commission, where he focussed on competition law and European regulation.

As a trainee in the application law unit at the European Commission, he became aware of the issues involved in ensuring member states comply with EU law, finding that the staff and resources needed to enforce directives are often lacking - the directive on data protection, for example, has existed since 1995 but was not widely enforced. Justin now focuses on data governance, and is finding that as new digital infrastructures are rolled out, they are driven by narrow efficiency concerns and are not accountable. Looking into these new infrastructures is a great opportunity to make the system more participatory and accountable - but we have to take it.

Feeding existing data into AI systems can create problems - for example, predictive policing has been shown to drive more officers into wealthy areas, because the data shows a higher rate of arrests in those areas. Data therefore creates a self-reinforcing loop. Further, digital systems often rely on a binary logic which healthcare and social problems simply don’t fit. The key problem is that data is a simplification of the real world. Some AI systems may also support a conservative bias, such as when they are used to predict which offenders are most likely to reoffend.
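To make that self-reinforcing loop concrete, here is a minimal, purely illustrative simulation (my own sketch, not drawn from the interview): two districts share the same underlying offence rate, patrols follow past arrest counts, and arrests can only be recorded where patrols are present, so a nearly meaningless gap in the historical data grows and never self-corrects.

```python
# Minimal sketch of a data feedback loop (illustrative only, not a real policing model):
# patrols are dispatched to the district with more recorded arrests, and arrests are
# only recorded where patrols are present, so the initial gap in the data compounds.
import random

random.seed(1)
arrests = [11, 10]   # near-identical historical records for districts A and B
true_rate = 0.3      # identical underlying offence rate in both districts

for week in range(52):
    patrols = [20, 20]
    patrols[0 if arrests[0] >= arrests[1] else 1] = 80   # follow the data
    for d in (0, 1):
        arrests[d] += sum(random.random() < true_rate for _ in range(patrols[d]))

print(arrests)  # district A's record races ahead even though the districts are identical
```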

Regulation of digital infrastructure would be a step in the right direction, and the argument that it would stifle innovation is weak - technological advances must make sense and make lives better. It may not be possible to have 100% compliance, but more involvement of public authorities (even at local level) would be a good step, as would more transparency over how these technologies function. Why is all this innovation not channelled into ways for people to live a better life?

Participate in the conversation with Justin here: Why is all this innovation not being channeled into ways for people to help them live a better life?


Distributed systems promise great possibilities - and challenges.

A conversation with Hugi Asgeirsson

Earlier this year, Hugi was in Berlin for the Data Terra Nemo conference, which focusses on decentralised web applications that run without traditional servers, opening up a lot of interesting possibilities.

They were inspired by the human-centric community that has grown up around ‘gossip’ protocols like Scuttlebutt. It seems to be forming a playground where new and radical ideas can be tested and implemented. The original developer of Scuttlebutt, Dominic Tarr, describes his MO as: “not to build the next big thing, but rather to build the thing that inspires the next big thing, that way you don’t have to maintain it” and this seems to have set the tone for Scuttlebutt itself.

One of the core elements of Scuttlebutt is that users can host data for other people on the network without being directly connected to them. This has the positive effect that users in countries where internet usage is highly restricted can connect via other users - though on the other hand, this also means that users could unwittingly be hosting information they would rather not propagate. There have been instances of the Norwegian alt-right using Scuttlebutt to communicate. Scuttlebutt has been working to address this issue but solutions are imperfect so far.
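As a rough illustration of how that transitive hosting works (my own sketch, not Scuttlebutt’s actual implementation), imagine each peer replicating the feeds of the people it follows and of the people they follow in turn: content reaches you through intermediaries who never chose to follow the author, and those intermediaries end up storing it.

```python
# Rough sketch of gossip-style transitive replication (illustrative only, not
# Scuttlebutt's actual code): each peer hosts the feeds of its follows and of its
# follows' follows, so Carol's posts reach Alice via Bob, and Bob ends up hosting
# data he never explicitly chose to propagate.

follows = {"alice": {"bob"}, "bob": {"carol"}, "carol": set()}
feeds = {"carol": ["carol: post 1"]}        # only Carol has published so far
stores = {peer: {} for peer in follows}     # what each peer hosts locally

def wants(peer, hops=2):
    """Feeds a peer will host: its follows, and its follows' follows."""
    wanted, frontier = set(), {peer}
    for _ in range(hops):
        frontier = set().union(*(follows[p] for p in frontier))
        wanted |= frontier
    return wanted

def sync(a, b):
    """When two peers connect, each copies any feed the other hosts and it wants."""
    for peer, other_store in ((a, dict(stores[b])), (b, dict(stores[a]))):
        for author, log in other_store.items():
            if author in wants(peer):
                stores[peer][author] = log

stores["bob"]["carol"] = feeds["carol"]     # Bob follows Carol and gets her feed directly
sync("alice", "bob")                        # Alice receives Carol's feed via Bob
print(stores["alice"])                      # {'carol': ['carol: post 1']}
```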

The bottom line is that distributed systems such as Scuttlebutt are both democratising and empowering and they come with a whole new set of possibilities and challenges.

Participate in the conversation with Hugi here: Data Terra Nemo: First report & Scuttlebutt


Open-source technology and interoperability are key to sustainable innovation

A conversation with Oliver Sauter

There is an accepted “law” of entrepreneurship: in order to build something valuable, you have to be ten times better than what already exists (as illustrated by Google+, which arguably had some better features than Facebook but failed to tempt people away). Why does this law exist and what can we do to change it?

Oliver believes it comes down to switching costs: users only move when the additional benefit on offer outweighs the cost of switching (in time and mental effort) by a factor of roughly ten. Such a growth requirement has produced some great leaps in innovation - but these come with downsides. How would the world look if we focused more on incremental innovations?

What is holding us back? The way companies make money, for one: in the second quarter of 2018, Facebook lost $120 billion in stock value within 48 hours, the biggest loss of any company in history. The reason: it posted its slowest growth since its founding, while still making $5 billion in profit that quarter and growing by 42% year on year. Secondly, data and social lock-ins create counter-incentives to the interoperable formats which would make it easier for users to migrate between services or integrate them.

Breaking this dynamic would require tackling the problem from multiple angles: allowing users to move freely between services, creating economic models that reward quality of service rather than simple growth, and ideally adopting open-source software to allow companies to build on one another’s work.

WorldBrain (dot) io is building open-source software in an attempt to enable incremental innovation, the foundation of which is Memex, an open-source privacy tool. Interoperability is baked deep into the core of Memex; it is fully open source, and WorldBrain (dot) io has no stock value, so it is entirely focussed on building a sustainable service.

Participate in the conversation with Oliver here: Startups’ grand illusion: You have to be 10x better than whats there



Civic organisations are key to turning AI technology into a force for civil justice

Fabrizio Barca

With a varied background in banking and treasury, and more recently in inequalities and justice, Fabrizio previously worked at the EU and now works with a civic organisation, the Forum on Inequalities and Diversity.

The current crisis in Western countries is driven by inequality - in particular the paradox that we have the technology to create equality, yet it is instead producing an unprecedented concentration of knowledge and power in very few hands. This must be addressed by putting political pressure on the issue. State-owned enterprises and collective platforms, where people can pool data that everyone has access to, could turn these technologies into forces for social justice.

A one-size-fits-all approach to AI in public administration fuels resentment. It deprives people of the most important thing - human connection and a sense of being recognised - which breeds intolerance, division and a loss of trust in democracy. Fabrizio is just coming to understand how effective civic organisations can be, not just in advocacy work but in taking action to shape services in local areas. He hopes to gain even more understanding by discussing these topics with a mix of people from different backgrounds.

Learn more: Conversation with Fabrizio Barca, Founder of the Forum on Inequalities and Diversity and former Director General of the Italian Ministry of Economy & Finance


Kate J Sim

A PhD researcher at the Oxford Internet Institute, Kate J Sim studies the intersection of gender-based violence and emerging technologies. Her work focuses on issues of trust, gender and sexual politics, and the double-edged role of technology in facilitating connection but also targeted harassment. While organising against campus violence, she personally experienced cyberharassment and a lack of support from law enforcement. More resources have become available since then, but we need to change how we conceptualise these issues and fundamentally change the design of the platforms.

She helped to form a cross-campus network that grew into a nonprofit organisation, Know Your IX. The space needs better structures to support mental health and protect against cyberharassment in order to reduce burnout. Research shows again and again that women, especially women of colour, tend to self-censor and reduce their visibility in order to survive - it is crucial we put more safeguarding in place to protect them.

Digital systems designed to facilitate disclosures, collect evidence and automate the reporting of sexual assault are attractive to institutions because of their efficiency - and to some extent to victims, as they are perceived to be objective and neutral. However, these systems have bias encoded in them. The designers work from their own understanding of sexual violence, which may not match victims’ experiences. Some victims don’t have the data literacy or English proficiency to use the systems, which could compound their trauma. Further, the pressure to report is encoded into the design of these systems - a misguided emphasis on a single optimal outcome which is not appropriate for all victims. De-emphasising reporting and focussing on “small data” driven by relationship building can create a structured conversation which is rich, insightful and telling.

Rather than asking how tech can be fixed for the better, the more urgent and important question is: who and what are we overlooking when we turn to tech solutions? How can we support practitioners in the anti-violence space, such as social workers, jurors, judges and advocates, with data and tech literacy, so that they have control over how they interpret and act on data?

Participate in the conversation with Kate here: Can tech design for survivors? How sex, violence, and power are encoded into the design and implementation of data/AI-driven sexual misconduct reporting systems



Focus on scalar indicators is driven by the need to describe reality, not change it

Alberto Cottica

Alberto had been hoping that ethnography-at-scale via SSNA could integrate, if not replace, the indicator paradigm. But after trying to get people to assess their own willingness to pay for, say, avoiding the extinction of some type of frog in Madagascar, or lowering PM10 content by 10%, he found convincing evidence that we could never trust our results.

He refers to James Scott’s convincing argument that scalar indicators are propelled by the modernist ideology that underpins the coalescing of modern states. This works for states, but not so much for people. Modern states have created a demand for scalar indicators, but this has more to do with their thirst for administrative order than with a drive to understand what is really going on.

Participate in the conversation with Alberto here: On assessing impact, and what Edgeryders could do in that department


@MariaEuler the idea here is not to ping people who have already contributed a lot of time. This is for the outreach team to draw people in to the content already there…

Understood. If you are referring to the discussion about posting parts of the longer interviews, I am fully aware that those have different goals. This is outreach. The other one is about potentially making engagement on the platform easier :slight_smile:

Let’s put it this way: we have a social contract not to take up too much of people’s time. Let me handle this, if you don’t mind?

@inge can I ask you to create a series of separate posts, each with one summary from the list above copy-pasted into it, pinging the person who is featured in that post?


Morning @inge, did you ever get around to doing :point_up_2: btw? Not a big deal if not, just want to avoid duplicating work

Yeah, I did:
