Can tech design for survivors? How sex, violence, and power are encoded into the design and implementation of data/AI-driven sexual misconduct reporting systems

I’m a PhD researcher at the Oxford Internet Institute, studying the intersection of gender-based violence and emerging technologies. I’ve been working on gender-based violence response for a little over a decade now, focusing on issues of trust and gender, sexual politics, and the role of emerging technologies in all of that. I want to facilitate meaningful connections between stakeholders, anticipate emergent harms, and ensure these conversations center on survivors and practitioners.

My personal trajectory into this research topic goes way back. I owe much of my gender consciousness to growing up in a bilingual and bicultural household as a Korean American immigrant. My family moved to the US from South Korea when I was 11 years old, and it exposed me to two different systems of gender. Through that, I was able to develop my own kind of consciousness about how both gender and culture are not fixed.

In college, I became involved with the campus sexual assault/Title IX movement, organizing on my own campus. I spent about six months essentially doing qualitative research, talking to different student groups to figure out what happens when a student experiences sexual assault and the extent to which the university does or doesn’t take responsibility and provide resources. This coincided with a national movement that led to the Obama administration’s policy reform in 2014.

Much of the national organizing relied heavily on social media, specifically Facebook groups, to build coalitions and share best practices. This happened to coincide with growing cultural awareness of social media as a potential tool for movement and community building. The downside, which none of us really anticipated going in, was cyberharassment. It was the early days of Gamergate. And that is what made me interested in the role of technology: its double-edged role in facilitating connections but also targeted harassment. Fast forward five years, and I’m doing a research project on the role of technology in sexual misconduct reporting.

And that has been my path ever since: looking at the ways in which we can be imaginative in thinking about bodily autonomy, care, and intimacy; how to translate that into political solidarity; how that can be operationalized into actionable policy to hold institutions accountable; and the role of new technologies in that process.

Lessons from anti-rape organizing and cyberharassment

When I started organizing against campus sexual violence, it required me to spend a lot of time online to research, draft guidelines, connect with organizers on other campuses and in other organizations, and so on. Almost immediately, I realized there were many communities of misogynist men who would actively track the discourse on sexual violence and then directly target young activists who became national figureheads. They’d constantly barrage us, saying we were liars, or that we didn’t know what we were talking about.

It was deeply unsettling. I distinctly remember trying to report some cyberharassment I received. It felt different from other forms of harassment I had experienced before… I would open my phone or my laptop, and the red notification button would just glare at me. Because it was anonymous and online, it felt like I was constantly being watched. I felt completely at a loss as to how to even respond to it. I decided to report to the campus police, and I remember being told it was my fault for having accounts online: “Oh, you wrote something online? Well, we can’t really do anything.”

I knew that this was the common response to cyberharassment, just as it is to non-tech-facilitated harassment. Victims are always told that they should watch out, that they should be more vigilant and judicious. I knew nothing would be done, so I dropped my report. I know that there are some resources available today, but they remain individualized defensive strategies. We need to listen to the victims and activists confronting cyberharassment and tech-facilitated abuse to fundamentally change how we conceptualize these issues. It’s built into the design of the Internet we have today, and until we change the platforms, it’s only going to get worse.

It’s been about six or seven years since we first started organizing on different campuses across the US. We formed a network that has now become a nonprofit organization called Know Your IX. It started with a handful of us, and it’s been incredible to see this network of college students grow into a leading organization of young people combatting gender-based violence. Some are lawyers now, some are academics, and some are no longer in the space. Burnout is a real problem in this space, unfortunately. We lose our most vocal and critical people because we don’t have the structures in place for them to reflect and grow, all while they battle PTSD and other mental health challenges and cyberharassment keeps clogging their inboxes.

We know that this kind of environment has a chilling effect on who decides to speak, and where. Research shows again and again that women, especially women of color, tend to self-censor. But I think the issue extends beyond cyberharassment and hate speech. Youth organizing increasingly requires public visibility. We are asking young people to put themselves out on the Internet, to commodify themselves in the name of social justice, and to face a barrage of hateful speech in the process. I know that, for myself, I found the constant expectation of visibility extremely constraining and harmful. I made a conscious decision to shift to research because I find that level of scholarly distance to be a more meaningful way to converse with an audience I can challenge and trust. I realize that’s an extremely privileged option, and I really hope that we move beyond the visibility-oriented model of organizing. It asks too much of our organizers, with little or no safeguarding in place.

Can tech design for survivors?

My doctoral research takes an ethnographically informed approach to interrogate how assumptions about sex, violence, and power are encoded into the design and implementation of data/AI-driven sexual misconduct reporting systems.

In the case of campus sexual assault in the US, there has been an influx of digital systems designed to facilitate disclosures, collect evidence, and automate reporting. These systems are attractive to institutions because they seem to be an easy fix to a difficult problem: having an internal grievance procedure, making sure it’s a fair process, and so on. This is especially true right now, in a politically charged environment shaped by #metoo and the Trump administration. To victims, who have very reasonable grounds to distrust their institutions — maybe because of prior experiences of not being heard, or because they feel isolated, or because they fear retaliation — these technologies are perceived to be objective and neutral third parties.

Unfortunately, there are flaws in how these systems are designed and how they’re applied. They are designed by humans, which means biases are encoded into them, especially biases about what sexual harassment or violence looks like, or what a ‘real victim’ is. And when they’re applied within a politically charged and socially laden environment, they of course have discriminatory, exclusionary, and unjust implications.

I’ve conducted three years of ethnographically informed interviews with system designers, web users, and institutional adopters — both frontend and backend users. The recurring narrative is that because this is data- and automation-driven, it is objective, neutral, and secure. It very much feeds into the collective imagination of technologies as non-human and therefore trustworthy. Interestingly, there is actually some behavioral science suggesting that, because of this perception, victims are more likely to disclose and seek help earlier when they’re talking to a digital interface rather than a human.

Part of it stems from the erosion of trust in traditional institutions and law enforcement, especially when it comes to sexual violence. In the #metoo era especially, a lot of people, especially women, and especially women of color, have very legitimate reasons to doubt that their institutions have their backs. That’s a particularly serious issue when it comes to sexual violence, because the two most important factors in a survivor’s decision to report are:

  • You need to be able to name what you experienced as a wrong. A lot of survivors don’t, because they think their experience wasn’t ‘real’ or ‘bad’ enough.
  • You need to believe that you will be believed when you do decide to disclose. Many, especially those from communities with historically charged relationships with law enforcement, don’t.

People harbor serious distrust of their institutions, and for good reasons. Tech positions itself as the solution to this problem of institutional distrust. Big Tech has successfully presented itself as the reliable and ideal alternative to the failures of traditional institutions. Because many people harbor reasonable distrust towards traditional institutions and don’t necessarily understand tech systems, Big Tech can be really appealing. I’ve really struggled with this aspect, which comes up in my fieldwork over and over again. When you have been dismissed, interrogated, and vilified by traditional institutions of law enforcement all your life, why would you want to turn to them in your moment of need? Without absolving these institutions of their history of inequality and injustice, what does meaningfully distinguish them from Big Tech is that they are, by design, accountable to the public interest. The same can’t be said for Big Tech.

So, then, how do these systems fail to work? Consider how these reporting tools categorize sexual violence. In general, the reporting interface will ask a few descriptive questions about who, when, and where, along with an open response form asking you to ‘describe what happened.’ Around this point, forms generally provide the option to select a category of sexual misconduct that applies to you. It may appear as a drop-down menu or checkboxes. Some forms let you select more than one option, say, ‘racial profiling’ and ‘sexual harassment.’ Some only allow one category. Some may not even have a category that applies to you. This is often the case with experiences of non-physical/sexual violence, like stalking and partner abuse. This means that if your experience of sexual harassment is different from the system designer’s understanding of sexual harassment, you may not even be able to input your experience, as the sketch below illustrates. This bias actively leaves out certain people’s experiences, because the designers are working with a very particular narrative of what violence is.
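To make this concrete, here is a minimal, hypothetical sketch in Python of such a category step. It is not any vendor’s actual code; the category names and validation logic are my own assumptions, used only to illustrate how a fixed, single-select list silently excludes experiences the designers did not anticipate.

```python
# Hypothetical sketch (not any vendor's actual code): a reporting form whose
# single-select category list encodes the designers' assumptions about what
# "counts" as misconduct.

MISCONDUCT_CATEGORIES = [
    "sexual harassment",
    "sexual assault",
    "racial profiling",
]  # note: no option here for, say, stalking or partner abuse

def validate_report(selected_category, description):
    """Accept a report only if it fits one of the predefined categories."""
    if selected_category not in MISCONDUCT_CATEGORIES:
        # The survivor's experience is rejected at the input stage:
        # it never enters the system and never shows up in the data.
        raise ValueError("Please select a valid category.")
    return {"category": selected_category, "description": description}

# One small design change: allow multiple categories plus a free-text "other"
# field, so experiences the designers did not anticipate can still be recorded
# rather than silently dropped.
def validate_report_flexible(selected, other, description):
    categories = [c for c in selected if c in MISCONDUCT_CATEGORIES]
    if other.strip():
        categories.append("other: " + other.strip())
    return {"categories": categories, "description": description}
```

The first function rejects anything outside the designers’ list outright; the second shows how even a modest change (multi-select plus free text) at least lets an unanticipated experience enter the record.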

So, then, what could actually be a useful use of tech? There have been a few interesting efforts to create apps or automated systems that help people collect evidence of tech-facilitated harassment in a way that would be legally admissible. But with those, the victim still has to go through three years of chat history with their abuser. Instead, let’s figure out a method to automate that process. Not only is combing through that history a traumatizing experience, but a lot of victims also don’t have the data literacy to do so. When I was working at a domestic violence clinic in Los Angeles, for example, most of our clients were single, undocumented mothers who didn’t speak English and had very low tech literacy. They would bring screen grabs, they would have to sit and wade through years of material, and they would get extremely overwhelmed by the technology, for good reason. That is something tech could actually make easier; a rough sketch of the idea follows below.
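As an illustration of what that kind of automation might look like, here is a hedged sketch assuming a hypothetical JSON chat export. The field names, date format, and filtering logic are all assumptions rather than any existing tool’s behavior, and real legal admissibility would require much more (provenance, integrity checks, chain of custody); the point is only that the tedious, retraumatizing filtering step is exactly what software is good at.

```python
# Hypothetical sketch: automatically filtering years of exported chat history
# down to a chronological, timestamped record involving one contact, so the
# survivor does not have to wade through it by hand. The export format and
# field names ("sender", "recipient", "timestamp", "text") are assumptions.
import json
from datetime import datetime

def extract_evidence(export_path, contact, start, end):
    """Return messages to or from `contact` between `start` and `end` (ISO dates), oldest first."""
    with open(export_path, encoding="utf-8") as f:
        messages = json.load(f)  # assumed: a list of message dicts

    start_dt, end_dt = datetime.fromisoformat(start), datetime.fromisoformat(end)
    relevant = [
        m for m in messages
        if contact in (m.get("sender"), m.get("recipient"))
        and start_dt <= datetime.fromisoformat(m["timestamp"]) <= end_dt
    ]
    return sorted(relevant, key=lambda m: m["timestamp"])

# Example usage (hypothetical file and contact):
# evidence = extract_evidence("chat_export.json", "+1-555-0100", "2016-01-01", "2019-01-01")
# print(json.dumps(evidence, indent=2, ensure_ascii=False))
```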

A case against reporting

Part of the problem with the design and implementation of these reporting systems is that there is such a fixation on reporting. Data on sexual violence tends to privilege and reinforce reporting as the ideal outcome for survivors. The methodological reason is that data on sexual violence is very difficult to collect, as it relies on self-disclosure and poses ethical challenges, so researchers and policymakers look to reporting data. The social reason is that we see reporting as an active outcome and dismiss others as less legitimate.

But help-seeking is an extremely iterative and ongoing process. It usually takes victims six to eleven months even to disclose to anybody, let alone formally report. And formal reporting is not necessarily the best outcome for a lot of people: you might share kids with your assailant, you might be worried about their immigration status, or you might just want them to face consequences that don’t involve going away to jail. Yet thinking about help-seeking and recovery is often done in a very narrow way: “Have you reported it?” “Go to the police,” and that’s when you’ll get better.

This attitude gets encoded into the design of reporting systems. Constant notifications and nudges are baked into the flow of the interface to incentivize the user to consider formal reporting. The system vendors seem to assume that automated, data-driven tools will point the user to one optimal solution: reporting.

But this is incredibly misguided. Survivors are not a monolithic group: their experiences of violence, access to resources, recovery processes, and ideas of justice are extremely diverse. And this multiplicity should be appreciated, because it gives society a more robust and relational way of thinking about justice. But the way that reporting tools are designed does not allow for this.

In addition, a lot of these systems are technically for everyone, but they’re usually only adopted by, and used for, privileged, tech-savvy, English-speaking users. For example, workplace harassment reporting tools are supposed to be for everybody, which should include janitorial workers, temp workers, and contract workers. But they are written out of these systems, because they are rarely identified as potential users or because their employment contracts subject them to a different grievance procedure.

Doing more with ‘small data’

In my research, I make a case for de-emphasizing reporting. This posed a methodological challenge: how do I gather data about survivors’ help-seeking experiences without relying on reporting data? As I said, the data we already have tends to privilege people who succeeded in going through the legal system to report their incidents. This means that the available information on victim demographics, incidents, and legal recourse is already extremely partial. The reporting tools I study build their systems on that data, and the data we use reinforces the privileging of reporting as the ideal outcome for survivors. What would it mean for me, then, as a researcher, to rely on reporting data?

This is why I turned to qualitative research and ‘small data.’ Having a structured conversation with all the different people involved in the design and application process provides better data about reporting systems’ impact and implications. It might take me several emails, many referrals, and several weeks of relationship building, but the conversations I have with each participant are so rich, insightful, and telling, much more than what I could get from a survey.

De-glamorize tech

Even after discussing how and why tech solutions to sexual misconduct reporting fail us, the question I often get is: how can we improve the tech we have so that its ‘failures’ and ‘shortcomings’ can be overcome? To me, the belief that tech can be improved to perfection, to the point of replacing human actors, is fundamentally misguided.

At the end of the day, these reporting tools are essentially case management systems. The vendors market themselves as a revolutionary innovation to ‘combat sexual violence’ and ‘empower survivors.’ But combating violence and supporting survivors comes from sustained collective action. No new system can or will be able to do that.

Rather than asking how tech can be fixed for the better, the more urgent and important question is: who and what are we overlooking when we turn to tech solutions? Social workers, administrators, advocates and activists, and scholars spend years becoming experts in violence prevention and response so that they can serve survivors better. How much money, time, and energy is not going to them when we spend those resources on tech? These are the people who work directly with victims and are in the best position to inform policies and practices. We should trust what they have to say, because they have experience and have demonstrated that they have the public’s interest at heart. The same can’t be said for tech solutions.

Shiny new tech can seem to offer accuracy and objectivity. But it can’t, because it is built by people, based on data collected by people. From design to implementation, every point is shot through with bias. This is not to say that there is no place for data-driven tools. But when we de-glamorize tech, we can see it for what it is: a tool to help us connect, communicate, and make informed decisions. This is why we need to support practitioners in the anti-violence space, like social workers, jurors and judges, and advocates, with data and tech literacy, so that they have control over how they interpret and act on data.

Burning question: How can we support practitioners in the anti-violence space, like social workers, jurors and judges, and advocates, with data and tech literacy, so that they have control over how they interpret and act on data?

Post a thoughtful comment below ahead of the workshop on AI & Justice on November 19!

This will help us to prepare good case studies for exploring these topics in detail. It will also ensure participants are at least a bit familiar with one another’s background and experiences around the topics at hand.


Wow, @katejsim, what a post! I had managed to miss it. Very, very interesting work. As far as I can tell, there are two main threads here.

The first is that the Internet’s architecture facilitates collective action. This is true for the good action of the anti-harassment activists, but also for the bad action of cyberharassers.

The second thread is one I am more familiar with: understanding data. Like all data, reporting data are not a faithful reconstruction of the phenomenon underpinning them (harassment), but just one, very partial, proxy for that phenomenon. And, like all data, they carry the risk of shifting all the attention onto the proxy, and away from the thing itself. We make what we measure, and if we measure reporting, we will end up acting only on what is being reported.

Both these threads, I believe, are relevant to a discussion on a new Internet of humans.

For the first one, I have a curiosity: how did people in the anti-harassment movement, like yourself, respond to the discovery that the Internet was both empowering them and putting them under pressure? Were people calling for more policing of the Internet? Or dreaming to go all crypto, and take solace in anonymity? Or what?

For the second one, what I read in your post is a promising effect and a warning. The promising effect is that, if I understand correctly, people are more likely to report harassment episodes if they do it online. But even here, how can we be sure that the increased likelihood is not a function of the credibility of the follow-up downstream of reporting? After all, you yourself point out that many people are mistrustful of institutions; a highly structured Internet funnel might offer some more guarantees, as caseloads and queues of people processing reports are monitored. If a report lands in your inbox and is not processed in a reasonably short time, you will surely hear from your manager!

And the warning is, of course, that what you are looking at here is not reporting of harassment, but harassment itself. And plenty of it happens even in the absence of reporting: we need to make sure we do not lose track of that.

Am I reading you correctly? What are your thoughts on my questions?


@JollyOrc (Social Media is broken, let's do better!) this seems like a post you might be interested in, since you also explore how to carefully create safe online environments not just through pure tech solutions but also by cleverly including well-equipped moderators. managing communications - an attempted glossary

And @katejsim, what is your take on this approach, based on your personal experiences as well as your research? Sex Tech Conference in Berlin

Would also recommend @Leonie to have a look at this article here :slight_smile:


A most fascinating post. I have also picked up from the internet-of-things crowd that there is a problematic amount of bias built into products for the home and public spaces. @pbihr?


Wow, this is a super fascinating post. Thank you for sharing this, and for your work in this space.

You probably know these cases, but since some others in this thread seemed interested in related/relevant/analogous examples from (at least structurally) adjacent areas, including IoT and AI, here are a few that might be interesting in the sense that I hope they complement your argument:

  • At our most recent annual ThingsCon conference (2018), Manon den Dunnen shared her experience of reporting gone bad through unintended consequences: police officers would exchange phone numbers with both victims of crimes and suspects, and the Facebook app would often do its network-matching/contact-suggestion thing and directly propose to connect victim and suspect/perpetrator. It’s quite horrific, and a super important case study about data minimization and network mining (it’s also a short video): https://www.youtube.com/watch?v=y6MRVQG8Vh0
  • A number of studies have shown algorithmic bias at its worst where policing and/or justice related algorithms were trained on training sets that had massively racist data points, including statistics created based on policing behavior that was deemed illegal/racist by courts at the time, but somehow remained part of the data sets. (AI Now Institute has mentioned this multiple times across their annual AI Now reports.)
  • NYC’s policing algorithm (in)famously measures effectiveness by fairly simple metrics like numbers of arrests and severity of the causes of arrests. This has created horrific incentives for police to report highly selectively so as not to ruin their (perceived) performance, including going so far as to structurally intimidate rape victims into changing their charges from rape to minor offenses. The podcast Reply All had a mini-series on it that’s very accessible.

Those just came to mind and maybe they’re helpful for some of the people reading here. And I couldn’t agree more with your conclusion that tech won’t be the solution to a complex social issue such as this.

Thanks again @katejsim for sharing this.
