Who Should Be Responsible When a Moral or Political Failure Occurs Because of Algorithms?

A Conversation with Dr. Annette Zimmermann

A while ago @Leonie interviewed Dr. Annette Zimmermann, an analytic political philosopher and ethicist at Princeton's University Center for Human Values (UCHV) and Center for Information Technology Policy (CITP). Annette's current work explores the ethics and politics of algorithmic decision making, machine learning, and artificial intelligence. This post presents the highlights from their exchange.

For a while now, Annette's main focus has been writing academic papers on algorithmic injustice and its scope. She feels it's important to first understand exactly what algorithmic injustice entails; only then is it possible to find a morally and politically sound solution. She also wants to research a lesser-explored question: how algorithmic bias develops over the lifetime of a system.

The highlight among the plethora of things she's working on, though, has to be her upcoming book, The Algorithmic Is Political. It's a short book about the political and moral decisions that are deeply intertwined with technology, with extended discussion of who should be responsible when a moral or political failure occurs because of that entanglement. If algorithmic bias is a problem, who should be responsible for fixing it? Is it better to leave decisions that AI would automate in the hands of a person instead? And what happens if that person's judgment is clouded by their own biases? The book also takes up the idea of democratizing AI, and asks whether democratization would actually change algorithmic injustice.

Dr. Zimmermann's interest in connecting algorithms with social issues grew out of her broader interest in democracy, politics, and moral dilemmas. She has been pondering how to determine who will bear the brunt of wrong algorithmic decisions in a society that values equality and fairness.

In general, the tech industry has become incredibly conscious of the idea of ethics when it comes to AI. Annette has homed in not on what the industry's ethics should be, but on what should serve as the basis for determining them. Some people fall prey to the idea that values are completely subjective, so that no one can construct an objective value system; others assume certain values are always right simply because that's how most people see things. Instead, the values a domain has to follow should be determined on a case-by-case basis.

People also need to be open to the idea that making wrong choices is inevitable, but that constantly questioning those choices can keep the damage to a minimum. According to Annette, the central figures AI ethics should protect are those negatively affected by gender or racial bias in an algorithm, bias that often traces back to the personal biases of the people who built it.

Zimmermann sides with neither the pessimists nor the optimists of the tech world. She doesn't think that, if an AI-based society is inevitable, people should simply resign themselves to its judgments, and she isn't of the opinion that an AI will always be right. Nor does she subscribe to the opposite extreme, that the existence of AI will always turn out badly. Instead, she believes in a critical approach, one that constantly questions an AI system's purpose and examines whether it actually serves the better society we want.

The FATML community (Fairness, Accountability, and Transparency in Machine Learning) has made progress in improving algorithms from a mathematical notion of fairness. Yet Annette thinks that in a world with a history of injustice, equal treatment can't be achieved from a purely technological standpoint. Social and political implications have to be considered no matter what.
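To make the "mathematical notion of fairness" concrete, here is a minimal, purely illustrative sketch of one formal criterion often discussed in the FATML literature, demographic parity, which asks whether a model hands out positive decisions at similar rates across groups. The loan scenario, decisions, and group labels below are hypothetical and not taken from the interview:

    # Illustrative only: demographic parity, one formal fairness criterion
    # from the FATML literature. All data below is made up.

    def positive_rate(decisions, groups, group):
        """Share of people in `group` who received a positive (1) decision."""
        outcomes = [d for d, g in zip(decisions, groups) if g == group]
        return sum(outcomes) / len(outcomes)

    # Hypothetical model outputs: 1 = loan approved, 0 = denied.
    decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
    groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

    gap = abs(positive_rate(decisions, groups, "a")
              - positive_rate(decisions, groups, "b"))
    print(f"Demographic parity gap: {gap:.2f}")  # 0.00 would count as 'fair'

Note that even a gap of zero would say nothing about whether the underlying data, or the decision being automated, is just in the first place; that is exactly the limit of the purely technological standpoint Annette points to.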

She's deeply aware of the conscious choices that go into designing these algorithms in the first place. As a result of those choices, an algorithm can end up heavily biased, failing to account for the context of the people it judges. Choices like these will always come with moral dilemmas, and the solution is to face those problems rather than ignore them; otherwise, the algorithm will never be able to represent the real world.

Lastly, she concludes that AIs aren't the independent creatures of dystopian movies that the general public seems to imagine. Instead, they're domain-specific tools reliant on humans, and so accountability falls on the humans who make the decisions, including the developers and the governments that often deploy these systems. She has previously claimed that some AI systems should never have been deployed, and she elaborates here: in a society already biased against certain protected groups, contact with those AI systems will only deepen the oppression.

Her main argument is that people in the tech industry need to start recognizing that they are making social and political choices when they create these algorithms, whether they want to or not. They have to be conscious of the ethical decisions made before an AI system is deployed, and open to revising those decisions once it's revealed they run into roadblocks no one foresaw. Ethical critique should be regular business rather than a one-time gig.

Join the conversation

This post is part of our preparations ahead of our November 29 event on the role of Internet Technologies in times of crisis. You're welcome to join the event; the format is quite different from a conference (think a card game you can play at the pub). Info & Registration here.