What does the future of civil society advocacy look like, given the prevalence of these digital technologies and their impact on the work that civil society is currently doing?


Hello world

I'm a PhD student at the Oxford Internet Institute and the Alan Turing Institute, which is the UK's national institute for data science and artificial intelligence (AI). I'm a cultural anthropologist by training and my background is mostly in human rights and policy. Before coming to Oxford to pursue a PhD, I worked as a policy officer for a human rights NGO in London, where I was part of their digital team. We focused on understanding the intersection of Internet infrastructure and human rights; most of our work involved participating in Internet governance organisations and centred on the right to freedom of expression, access to information and privacy. Before that, I worked at the US House of Representatives as a policy advisor on telecommunications and European affairs for a Democratic congressman.

I am primarily interested in understanding how human rights NGOs are adapting to the increased importance of digital technologies, and how they are shifting their work now that much of it has a digital component. For example, NGOs that provide shelter and safe houses for survivors of domestic violence have to reckon with the increased ability of abusers to surveil and track down their partners and children using digital technologies. Or consider how police and immigration enforcement use AI systems that can lead to over-policing of communities of color, or otherwise to bias and discrimination against minorities. There are many concerns, in many directions, involving many different technologies.

The relationship between technological change and activist mobilisation

My PhD research focuses specifically on a group of NGOs that are trying to change internet standards and protocols to align them more with human rights, by directly participating alongside engineers of big tech companies. But more broadly speaking, I am interested in the question of how technology changes, or preserves, the work that human rights NGOs do, and how they adapt to the changes in their work brought about by digital technologies.

I've always been interested in the interaction between technology and society. When I was a BA student at the University of Utrecht, one of my main interests was the role social media play in how activists mobilise. This was at the height of a number of social uprisings. Social media and other digital technologies were often held up as revolutionary tools that would enable people to topple their governments and do all of these things. But what I found in my work was that this is simply not the case. Social media can certainly give the plight of activists more visibility. But at the end of the day, the power inequities that exist are not reversed by these technologies. I actually found that instead of having a liberating potential, digital technologies can also entrench existing differences, even if people feel they have more agency. And that combination is not necessarily a good thing.

“Increasingly, human rights organizations are trying to target companies and code instead of governments and laws”

When I started doing more research, I became interested in the inverse of what it means for civil society to use social media. Instead of looking at what happens when activists use social media, I was interested in what happens when activists try to change how social media functions or try to change how the underlying internet is built. Instead of looking at use, I started looking at design, and the interventions in the internet infrastructure made by human rights activists.

An example of this is the work of the American Civil Liberties Union (ACLU), which regularly participates in an organization that sets Internet standards. Internet standards are something we use every day. For instance, when it says HTTPS in your web browser, that's a standard; it ensures that whatever you type into your browser is encrypted. If you are looking for a particular medicine, or for access to abortion, and you're doing that in a country where it is illegal, HTTPS protects you (somewhat) from being snooped on when typing search queries into a website. And so, one of the tactics of the ACLU is to go to the companies that build these things and say: "Hey, have you maybe considered encrypting Y? Or making sure X leaks less data?" They try to bring a wider perspective on the social impact of protocols, one that is not necessarily included in technical discussions led solely by industry.
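To make the search-query example concrete, here is a minimal sketch. The endpoint on example.com is a placeholder, not a real service: with plain HTTP the query string travels in cleartext, while with HTTPS it travels inside the encrypted TLS channel, so an on-path observer sees little more than the hostname.

```python
# Minimal sketch; example.com and the /search path are placeholders.
import requests

query = "name of a medicine"

# Plain HTTP: an on-path observer can read "GET /search?q=name+of+a+medicine".
requests.get("http://example.com/search", params={"q": query})

# HTTPS: the same request, but the path and query string are encrypted in
# transit; roughly only the hostname (via DNS/SNI) is visible to an observer.
requests.get("https://example.com/search", params={"q": query})
```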

Another example of an NGO intervention aimed at tech companies: a couple of days ago, almost 120 civil society organizations signed a letter urging Facebook not to give governments backdoor access to Facebook Messenger, and to ensure it has strong encryption. Increasingly, human rights organizations are trying to target companies and code instead of governments and laws.

Case in point: privacy and law enforcement access

This debate is complicated. When it comes to the fine line between privacy and law enforcement access, I'm not sure I would be doing a PhD if I had the perfect answer to how to resolve this stalemate. But I think there are a couple of things to disentangle.

First, there is a duality, I don't want to call it hypocrisy, in how some of these governments portray themselves and develop their positions. For example, the Dutch government: on the one hand, Internet freedom is a major part of Dutch foreign policy, and the Netherlands was a founder of the Freedom Online Coalition (FOC). And yet, at the same time, the government strategically pushes against strong encryption, prioritising access for law enforcement over privacy and encryption concerns. Depending on which debate it is, they take contradictory positions. It's easy to say: oh, countries in Europe take privacy seriously, they take freedom of expression seriously. And they do, but not consistently. It's important to be aware of these contradictions.

Second, regarding law enforcement concerns that they are "going dark": there was a report a couple of years ago from the Berkman Center for Internet & Society at Harvard University on the Going Dark debate. Going dark is one of the main arguments these governments, and especially law enforcement, often give. They say: we're going dark, we don't have access anymore, but we need that kind of access, and for that access we need you to weaken encryption. What the report actually shows is that law enforcement officers have more information than they have ever had before. They have more ways of combining different data sets, and more systems to go through them systematically. This means their argument is not always valid, and it's important to question the rhetoric upon which arguments in favor of weakening encryption are based.

And this is also something that the human rights NGOs I work with stress: you don't necessarily solve the problem of lack of access by weakening encryption. The argument they often make is that you don't protect your house from robbers by building a backdoor for the police, because that backdoor can subsequently be used by everyone. That's the thing: if you create a backdoor for the police, you create a backdoor for everyone. That's just how the technology functions. Those are some of the issues and values we need to consider when we try to assess this debate.

Civil Society Advocacy aimed at AI systems and Companies

There’s this running joke amongst academics that you say “I do AI” when you want to get funded, and you say “I do Machine Learning” when you want to be taken seriously academically.

In other words, there's definitional bloat in what people mean when they talk about AI or AI systems, and I think that leads to a lot of confusion. But this conceptual confusion does not make the impact of these systems, however defined, any less real. And this is also what a lot of the recent academic literature shows: that the application of AI systems can have real negative impacts on civil liberties and human rights.

A prime example is when LinkedIn got into a lot of trouble because their systems tended to show CEO positions consistently to white men rather than to women or people of color. The systems had learned from existing data, which obviously reflects the societal bias that white men are more likely to hold these kinds of positions. That existing bias in society obviously does not speak to the inherent suitability of (white) men for these positions. But that's the kind of nuance not captured by AI systems. So you end up in a situation where the application of these systems actually reinforces existing bias and discrimination in society, without necessarily generating better CEOs.
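As a minimal, hypothetical sketch of that mechanism (synthetic data, invented feature names, nothing to do with LinkedIn's actual system): a model trained on historically biased outcomes ends up scoring two equally qualified candidates differently, based only on group membership.

```python
# Synthetic illustration of bias learned from historical data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Protected attribute: 1 = member of the historically favoured group.
group = rng.integers(0, 2, size=n)
# Qualification is distributed identically across both groups.
qualification = rng.normal(size=n)

# Historical label ("got the senior role"): by construction, the favoured
# group was promoted more often at the SAME qualification level.
p_promoted = 1 / (1 + np.exp(-(qualification + 1.5 * group - 1.5)))
promoted = rng.random(n) < p_promoted

model = LogisticRegression().fit(np.column_stack([qualification, group]), promoted)

# Score two equally qualified candidates who differ only in group membership.
print(model.predict_proba([[0.0, 1]])[0, 1])  # favoured group: higher score
print(model.predict_proba([[0.0, 0]])[0, 1])  # disfavoured group: lower score
```

The model is not "wrong" about the historical data; it has simply learned to reproduce the pattern, which is exactly the problem described above.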

Obviously, a lot of human rights NGOs have been worried about this for a long time and have consistently tried to ring that bell — bringing in academic work to show some of these issues.

Human Rights Watch, for example, has a great programme, as do Amnesty International, Privacy International and Article 19. Several of the largest human rights NGOs are focusing on issues of AI systems and bias. But they're also forced to play whack-a-mole as the application of AI systems becomes more common. How do you focus your resources? Which companies and applications are most concerning? Which solutions are most tractable and comprehensive? Do we need sectoral guidelines, or guidelines that focus on impact? Do we need self-regulatory ethics frameworks or hard data protection frameworks? All of the above? These are the issues I see a lot of NGOs grappling with.

Sometimes it is easy to draw a position: for example, the campaign against killer robots (lethal autonomous weapons) has brought NGOs together to formulate a clear policy goal using international law. But not all issues are so clear-cut, and not in all cases are the developers of the technology approachable or accountable.

Very often, once a bit of research has been done, it is easy to show the effect of certain technologies. But part of the problem is that many of these AI systems are developed by private companies, which makes it hard to gain access to what their technology specifically does. A lot of these companies say they don't have to explain how their tech works: it's their commercial secret sauce, and sharing it would undermine their business model. In other cases, it is hard to pinpoint the "disparate impact" of a technology, even if people on the ground can feel its ramifications in terms of access to services and goods (like government support, housing, the legal system, etc.).

This concern, in theory, should not apply to government use of these technologies. But even then, when governments buy "AI solutions" from companies, they don't always fully understand what they're buying (into). They don't, or sometimes can't, know how these systems work. So there are a lot of issues around trying to get not just transparency, because transparency in and of itself isn't the solution, but a real sense of accountability for how the systems function.

Many NGOs are calling for regulation of AI systems. But the companies selling these systems are pushing back on regulation, under the guise of innovation. At the same time, regulators are struggling to articulate what regulation should be applied to: the AI systems, the data sets, their combined impact? Should we repurpose existing legislation or develop novel approaches? What about the gap between regulation development and implementation? We have the GDPR and yet I still get spam. This is not to say we shouldn’t pursue regulation; it is to say that the discussion needs to include effective implementation too. These are the often dull but necessary bits of work many NGOs are focused on.

And then there's the "minor" question of trusting regulators to be able to understand what is, and what isn't, good use of these technologies. Or even trusting them to have the best interests of people in mind in the first place. Not a given.

I know people have been talking about making sure that data sets are curated to be more representative of society, or to include more diversity. But that means we are still moving towards societies in which, as academics like Os Keyes, Kate Crawford and Alondra Nelson argue, we follow the questionable logic that more countable = good, or that being seen by the state = equality before the law. Multiple academics have argued that inclusion in a database does not mean equitable treatment by the state. Hence, there are plenty of examples suggesting that bias in AI systems cannot be resolved simply through more expansive inclusion of minorities in datasets.

It’s not just America

It is easy to dismiss these concerns as distinctly American. But let's keep in mind that Europe is not the be-all and end-all of good technology regulation. As much as we have good policies, we also have terrible policies, like the upload filters or the Google "right to be forgotten". Those are not good examples of what solid tech policy looks like, at least not to many of the NGOs I work with. So, instead of saying that we have a responsibility to take our EU regulatory blueprint and impose it on other places, as is often said, we should be humble. Because we, as Europeans, would be incredibly resistant to the inverse of that. Actually, we are incredibly resistant to it, as can be seen in how we respond to the American blueprint of freedom of expression and (lack of regulation of) hate speech being forced upon us by the ubiquity of American Internet companies.

As such, when it comes to exporting our approach, I would be more inclined to say: let's provide an alternative example. Let's just say "this is how we strike the balance between these different values." And what we've seen, for instance, with the GDPR is that it did kick off a bit of a trend across the world, with people modelling their data and privacy regulations on it. I think that sort of leading by example is much more effective. Anything short of that can be seen as imposing our way onto the world, which, considering Europe's colonial history, is not a strong look.

Regulation, Civil Society, and Limits

Some companies argue that regulation hampers innovation, while at the same time arguing that real innovation is not impacted by regulation. So like, bro… which way is it? Does it matter immensely, or does it not matter at all? Because it certainly can't be both at the same time. I am not trying to be facetious here, just trying to tease out the contradictions in Silicon Valley rhetoric about tech regulation. Doing so allows for a real discussion to take place. Okay, you believe in innovation. Great. But what does that term mean to you? If innovation to you means "move fast and break things" or "disrupt existing industries" (with little regard for the long-term consequences), then don't be surprised if people outside of industry are not going to be on board or impressed.

Case in point: Data & the Status quo

Databases are, by and large, systems reflective of our bias. That's also what the academic research shows. It suggests that whenever you are presented with AI, automation, machine learning, or whatever the term du jour is, you always need to look for the humans. We tend to think of these systems as being without humans, all automated. But often they are not. Academics like Sarah Roberts, Mary Gray and Siddharth Suri have shown the human face of AI systems. And the same applies to the algorithms that are applied to the data: humans make particular decisions about how to weigh the data, and which parts of it. Often, as Seda Gurses has shown, this is done to optimize towards metrics that are in the interest of the company rather than the consumer.
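As a minimal, hypothetical sketch of that last point (invented items, scores and field names, not any real platform's ranking): what a user is shown depends entirely on how humans chose to weight a business metric against a user-centred one.

```python
# Tilting human-chosen weights towards a business metric (predicted watch
# time) instead of a user-centred one (reported usefulness) changes the
# ranking. All values here are invented for illustration.
items = [
    {"title": "A", "predicted_watch_time": 0.9, "reported_usefulness": 0.2},
    {"title": "B", "predicted_watch_time": 0.3, "reported_usefulness": 0.9},
]

def rank(items, w_engagement, w_usefulness):
    score = lambda it: (w_engagement * it["predicted_watch_time"]
                        + w_usefulness * it["reported_usefulness"])
    return sorted(items, key=score, reverse=True)

# Company-centred weighting surfaces A; user-centred weighting surfaces B.
print([it["title"] for it in rank(items, w_engagement=1.0, w_usefulness=0.1)])
print([it["title"] for it in rank(items, w_engagement=0.1, w_usefulness=1.0)])
```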

People are fast to judge technology critics as Luddites. There are undoubtedly useful applications of AI systems. But those are only going to arise if the right regulatory and economic context is in place. I'm very wary of moving towards a world in which we say "these technologies are completely free of humans, completely free of bias, and hence more trustworthy than us." Because there is no such technology; technology is always the product of humans. And I think it is important not to lose sight of that.

How can we develop other ways of ensuring a better understanding of how these technologies work? There is a lot of interesting research by academics who do really good work specifically in this field; I mentioned some above. I think it shows that we should not buy into the hype that it is so incredibly complicated that we could never figure out how it works. AI is often a word for a system that learns recursively from large data sets using complex statistics. It is not, as academics like MC Elish and danah boyd have argued, magic. Let's just call it what it is, call a spade a spade. The problem is that there is not a lot of either political or economic interest in doing so.
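To illustrate the "statistics, not magic" point, a minimal sketch (purely illustrative, synthetic data): what gets called "learning" is typically an iterative statistical fit to data, here gradient descent recovering the slope of a noisy linear relationship.

```python
# A model "learning" is just an iterative fit to data.
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=1000)
y = 3.0 * x + rng.normal(scale=0.1, size=1000)  # data generated with slope 3

w = 0.0             # the model's single parameter
learning_rate = 0.1
for _ in range(100):
    prediction = w * x
    gradient = np.mean(2 * (prediction - y) * x)  # d(mean squared error)/dw
    w -= learning_rate * gradient                 # nudge w to reduce the error

print(w)  # converges to roughly 3: statistics recovering a pattern in data
```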

Likewise, let's be aware of what a drive for "innovation" does, because this is what concerns me. I've written about it with Roel Dobbe, who is a postdoctoral researcher at the AI Now Institute at NYU. Our government (the Netherlands) in particular, but others as well, have this sort of irrational fear of being seen as falling behind on technology and technological developments. So they're like, "Oh, we must apply the AI in the cloud using IoT." What does that even mean? When cities want to put things on a blockchain, they should ask themselves whether they need anything more than a spreadsheet. Why would you want an immutable spreadsheet spread out over multiple computers for whatever simple thing you're trying to do to attract tourists? I get it, it sounds cool. But that's also my tax money being used to work with technology that I don't think is useful, and that I'd rather see put into education, or health care, or anywhere else where we could improve rather than disrupt. In addition to the perverse economic incentives on the part of companies not to reveal how their systems work, there's also this innovation myth, which encourages governments to take on technologies that they don't fully understand or might not even need.

AI Systems are Dull and Humans are Inconsistent (luckily)

I also have a pet peeve about the dullness of these systems as they are commonly applied. The logic of these AI systems is that they give you recommendations based on your past behavior. But humans change. Please don't give me fashion advice based on what I would have liked at 16. I have evolved, thank God. Yesterday, my husband was sitting at the dining table, looking at Amazon, trying to buy something. And he was like, "this stupid website keeps recommending me socks." Because he bought socks once. "I don't need more socks!" he said. The logic behind it is, "Oh, you like this thing, and hence you must like more of this thing," right? But for some of these things you don't need more than one, such as an electric toothbrush: I don't need five of them in different colors. I need one. Right?
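That "more of the same" logic is easy to see in a minimal, hypothetical co-occurrence recommender (invented purchase histories, not any retailer's actual system): recommend whatever shows up most often alongside what the user already bought.

```python
# Item-based co-occurrence recommendation, in miniature.
from collections import Counter

purchase_histories = [
    ["socks", "toothbrush"],
    ["socks", "socks", "t-shirt"],
    ["socks", "phone charger"],
]

def recommend(past_purchases, histories, k=3):
    """Recommend the items that co-occur most often with what the user bought."""
    scores = Counter()
    for history in histories:
        if any(item in history for item in past_purchases):
            scores.update(history)
    return [item for item, _ in scores.most_common(k)]

# A user who bought socks once keeps getting... more socks.
print(recommend(["socks"], purchase_histories))
```

Nothing in the scoring distinguishes "bought once and done" from "wants more", so the top recommendation for a sock buyer is, predictably, more socks.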

Sometimes it does work a little bit better. I find it really useful with academic books: I look for a particular book, and it says "people who've bought this book also liked this book." I've actually found books that I had not yet come across. But music, for instance, is a good example of how these systems are simply too flat for human needs. Because as soon as you have a slightly eclectic taste in music — one day you listen to Rihanna, and the next day you listen to Mozart — with the recommendations there is no in-between. So you either get a lot of the same of one thing, or a lot of the same of another thing, but it doesn't actually capture the nuance of human taste.

And I think that is one of the problems with these systems in general: they don't capture the nuance and complexity of human life.

Local Knowledge first

What should we do? What is the role of academics and human rights activists in Europe, given the various complications I laid out? I've worked with a number of NGOs on security methods, together with their safety teams. When I was living in Brazil, I worked with an NGO that specifically focused on digital security trainings for human rights activists. On the one hand, they have limited capacity to engage with all of the latest and greatest technological developments. On the other hand, they have a lot of incredibly valuable local knowledge that we tend not to think about, because it doesn't come up in our context.

A great example is a friend who's a very well-known activist. He knows that the police are monitoring him, that he's under surveillance, and so are his family and friends. So one of the things we did was set up an elaborate encrypted email service for him, explain VPNs, and help him use Signal. And then one morning he came in and said: "You know, this is all incredibly great. I now have all of these encrypted services. But you know what happened this morning? I was sitting on my motorcycle, texting someone, and a police officer walked up to me and just grabbed the phone out of my hand." Which meant that it was unlocked, which meant that no matter the encrypted services, no matter the Signal, no matter the VPN, they had access to his phone. We hadn't discussed this scenario, because for most of the trainers it simply hadn't occurred to them that this might be how the police would act.

I have seen the tendency of European trainers to go abroad with really elaborate technologies for improving your safety, without actually addressing what a security risk looks like on the ground. So one of the things I would be really interested in is actually trying to figure out whether a training on, say, IoT is useful, and perhaps looking at physical safety as well. Working with a philosophy where the starting point is: you know a lot about the problems that you have, so explain to us what your problems are, and then, collaboratively, we can bring together a bunch of different resources to figure out what a solution should look like. Instead of sharing the technical knowledge that we have, pouring it out and then leaving. Not that everyone does that. But I do think it is an easy mistake to make, an easy trap to fall into.

A Human Centered Internet

Setting a good example in terms of legislation is a start. And not falling into easy finger-pointing towards obvious scapegoats (i.e. "well, but in country X everything is so much worse"). There are a lot of issues we could work on in Europe to improve how the internet is used, and the extent to which it enables or disables certain civil liberties.

The UK is a prime example of using internet infrastructure for things it wasn't built for. So I think one of the things we can do in Europe is be a little bit less on our high horse, because we don't always get it right. We should try to figure out: if we want to have an internet of humans, and if we want to think about a European framework, how can we make sure we do that, putting our values first?

What’s next: AI regulation?

I would love to say that it's enough to just regulate these companies, and to a certain extent you can do that. For example, Nazi content in Germany isn't accessible. That's a regulation. It's not about what Google stands for; it's simply what the country has said. So there is a regulatory component where you simply say: technically, we don't want this to be accessible. This happens all the time, and it's part of the friction you see now with the right to be forgotten.

There's been a lot of talk about companies developing voluntary ethical frameworks for AI governance. I think this is a good first step; I always think it's good for companies to be explicit about their values, what they stand for, and what they find acceptable and what they don't. That being said, a lot of the recent literature actually shows that these ethical frameworks are always going to be articulated within the bounds of the business logic, which means they are never going to be radical, never going to be anything that chips away at late-stage capitalism.

So then the question becomes: how can you make it hurt, financially? I honestly think, and again I might be too jaded here, that if companies don't hurt financially, they won't change. In addition, I think there's also the question of trust that can be leveraged in this particular case. AI technologies have run into the same problem we've seen with biotechnology: the public is worried about it, scared of it. The kind of narratives about our AI overlords are generally not something that I believe in or support, but they do mean that these companies need to work harder to get the public to buy their stuff. So we need to find a way to leverage those two things: a lack of trust in these products, and making sure that it financially hurts businesses not to comply with whatever regulations we set up, or whatever values we hold dear.

The question, however, becomes: what are the guidelines and what are the values? I think there has justifiably been a lot of pushback on, for instance, the guidelines developed by the EU's high-level expert group on AI. So the question is: where does corporate capture happen? Because when the outcome, as with those guidelines, is not half as effective as you would like it to be, corporate capture is a large part of why. That is why I do think corporate capture is a huge issue, and a meta-concern we need to think about. At the same time, while it is convenient to assume that Europeans have a shared set of values enshrined in national and European documents, the current rise of populism and xenophobia paints a much less convenient picture. How do you make sure, in the face of so much societal unrest, that some primary principles are upheld?

Burning Question: What does the future of civil society advocacy look like, given the prevalence of these digital technologies and their impact on the work that civil society is currently doing?

Post a thoughtful comment below ahead of the workshop on AI & Justice on November 19!

This will help us to prepare good case studies for exploring these topics in detail. It will also ensure participants are at least a bit familiar with one another’s background and experiences around the topics at hand.


Wonderful wide-ranging remarks. You have clearly thought deeply about these and many other related issues. I am this well-spoken in my dreams.


Indeed. I have heard there are even some governments opening embassies at tech centers like Silicon Valley (Denmark, for instance).


@CCS found this piece of news from Bruce Schneier that might be of interest to you:

" Dark Web Site Taken Down without Breaking Encryption

The US Department of Justice unraveled a dark web child-porn website, leading to the arrest of 337 people in at least 18 countries. This was all accomplished not through any backdoors in communications systems, but by analyzing the bitcoin transactions and following the money:"

https://www.schneier.com/blog/archives/2019/10/dark_web_site_t.html


If you have some follow-up questions, share them with me and I can ping him and ask.


@CCS and @johncoate, this is a great but long post, and kind of hard to take in, and especially to comment on, all at once. I was wondering what you think about testing posting a few of these headings as their own threads? They are rich enough to start a discussion on their own, and maybe easier to engage with?

I do not mean making summaries, which is already being done by some great copywriters that @nadia has taken care of. I mean taking, for example, this part:

And posting it by itself, without real changes, so that a concentrated discussion of the interesting points made just there can be had, and also so we can learn about how this form influences engagement :slight_smile:


@CCS, would you like to choose 1-3 and post them on their own?


I think it is ok for us to do this ourselves @MariaEuler - they are summaries and not the exact words, so it doesn't really make sense for the people interviewed to post them themselves…


OK, it's just that they are worded in the first person, and when people answer it would be good if @CCS could see that directly, so it would be good if she could post those :slight_smile:


We can ping Corinne so she sees it; this is in order to be mindful of everyone's time :slight_smile:


Hi @nadia and @MariaEuler,

Thanks for the pings. Just a quick question, to see if I understood correctly:

You would like me to take some sections from my interview, and post them as threads on the Internet of Humans forum for people to engage with as that would be easier to parse than reading the full interview?

If this is correct, I am happy to do so. But I do want to flag this convo for @amelia and @alberto, as me starting a number of threads based on my interview, while also being one of the three main ethnographic coders for the Internet of Humans project, does raise some interesting methodological challenges. Not insurmountable, but just something to keep track of :wink:

Kind regards,

CCS


@CCS, that is exactly what I meant. Good point about your double role, but maybe that can even help us to understand how format/length influences engagement :slight_smile:


No @MariaEuler - what I meant was that we have created 150-300 word articles summarising the contents of the interview with each person.

And that we (I) will be posting them on the platform and on social media.

So that people pressed for time can more easily engage in the conversation based on the main points we have drawn from the conversation with you.

The people who want to go much deeper will be pointed to the original post.


But I think the point of @CCS here is that direct physical security (or right of possession) makes a big difference: we can do encryption, set passwords, do data minimization, and all these "cyber-measures" (which are somehow immaterial), but at the end of the day what matters is who can enter your home, under what conditions, and whether you can protect yourself.

It somehow draws my mind to the only Human Rights Consideration incorporated in an IETF RFC to date: RFC 8492 - Secure Password Ciphersuites for Transport Layer Security (TLS)

And also to a different topic that is still insufficiently discussed, in human rights communities in particular: the State of Exception (masterfully covered, in my view, by Giorgio Agamben in his homonymous book of 2005).


You are spot on with this. The US involvement in the Vietnam War was essentially built around a state of exception stemming from the "Gulf of Tonkin" incident, which we now know was faked in order to win Congressional approval (which, importantly, was not a declaration of war but a blank check to the executive/military branch)… To me, the whole "greatest generation" thing about my parents' WWII generation has to have an asterisk, because they sent us to Vietnam, which I submit is the central tragedy of my now 68-year-old lifetime. And the US has never atoned for it; indeed, we are repeating much of it, this time in the desert.

Gee, that sounds a lot like another faked incident: pitching Saddam’s “weapons of mass destruction” as the basis for invading Iraq.

Also, 9/11 led directly to the "Patriot Act", which is by itself another state of exception, in that it perpetually renews a greatly empowered surveillance state and much-diminished accountability.


Crazy thought: a state of exception for the climate.

Speaking of WWII, I do marvel at the level of cooperation on both sides of that conflict, actually. Today there is no such unity, and right now there are effectively no world leaders who aren't making things worse. I make some exceptions for several EU leaders who are trying, more or less, to do the right thing. Contrast that with Trump, Putin, China, India, Brazil…
