What does the future of civil society advocacy look like, given the prevalence of new internet technologies and their impact on the work that civil society is currently doing?

With a background in human rights and policy, Corinne Cath-Speth worked as a policy officer for a human rights NGO in London before coming to the Oxford Internet Institute and the Alan Turing Institute to pursue her PhD.

Her research focuses on human rights advocacy efforts within Internet governance, with a broader interest in how human rights NGOs are responding to the new (and old) challenges raised by emerging technologies.

While working with human rights activists, @CCS saw that digital technologies, like social media, can give the plight of activists more visibility, but that these same technologies often entrench existing power inequalities and biases.

She became interested in studying what happens when activists try to change the infrastructure of the internet itself, rather than simply use it.

A number of well-known human rights organizations, like the ACLU and the EFF, actively do so by contributing to Internet governance fora. She found that these organizations are welcome and can operate in these spaces with relative ease, given the fora's open and multistakeholder nature.

At the same time, she saw that while getting the tech "right" is an important part of the puzzle of human rights advocacy in the digital age, it is also a narrow frame through which to understand the broad spectrum of social concerns raised by networked technologies.

CCS's work in Internet governance also led her to consider human rights advocacy in AI governance, as AI systems are raising a host of questions regarding privacy, safety, anti-discrimination, and other human rights.

One of the problems with developing AI advocacy programs is that many of these systems are built by private companies, so it is difficult to gain access to the technology in order to examine and understand it. Many NGOs are therefore calling for the regulation of AI systems, but are facing pushback from companies arguing that regulation hampers innovation. Yet it is this same "innovation" that encourages many governments to deploy AI systems.

A drive for "innovation" for innovation's sake is particularly concerning when it encourages governments to adopt technologies that they don't fully understand or even need.

Obviously, a lot of human rights NGOs have been worried about these various dynamics for a while and are consistently raising their concerns, sometimes by bringing in academic work to demonstrate these issues. Human Rights Watch, for example, has a great program, as do Amnesty International, Privacy International, and Article 19. Several of the largest human rights NGOs are focusing on issues of AI systems and bias. But they're also forced to play whack-a-mole as the application of AI systems becomes more common.

"How do you focus your resources? Which companies and applications are most concerning? Which solutions are most tractable and comprehensive? Do we need sectoral guidelines, or do we need guidelines which focus on impact? Do we need self-regulatory ethics frameworks or hard data protection frameworks? All of the above? These are the issues I see a lot of NGOs grapple with, and they are questions I hope to discuss with you on this platform."

Participate in the conversation with Corinne here: What does the future of civil society advocacy look like, given the prevalence of these digital technologies and their impact on the work that civil society is currently doing?

Join our workshop on inequalities in the age of AI: what they are, how they work, and what we can do about them. Registration: https://register.edgeryders.eu

More info: https://edgeryders.eu/t/workshop-on-inequalities-in-the-age-of-ai-what-they-are-how-they-work-and-what-we-can-do-about-them-19-11-brussels/10326/60
