managing communications - an attempted glossary

Let us talk about how we keep our online spaces usable. Let us talk about moderation, mediation, censorship, and tone policing.

There are probably also other techniques, but for now, talking about these four should suffice. And it is important to talk about them, because all of this is constantly happening online, for various reasons, and sometimes they get mixed, and sometimes one slowly descends into another.

Wherever people communicate with each other, they usually set up a few rules and mechanisms to enforce them. Mostly, these rules are a bit vague and ad hoc, and have formed over time as social customs emerge. If people violate these customs and unwritten rules, they get some social feedback to correct themselves, or eventually get ostracized from their communities. So if I’m rude to my friends, they eventually stop inviting me to dinner. This is a gradual, informal process, but effective nonetheless. But whatever I did say stays said.

For a lot of our online communities though, the participants come together from such diverse places and customs that there is a need to formalize things a bit more, and to have designated people in place to enforce these rules. The result is usually a lot less gradual, and also more formal and immediate. So instead of being invited to fewer and fewer dinners, I suddenly get formally shut out of the community by having my access to the online space revoked. Alternatively, my contributions to the space could get deleted, my voice silenced, even retroactively.

When we discuss these rules and their enforcement, we can break them down into four core concepts:

  • Moderation - here certain topics or media are excluded, with a benign intent. The idea is to make a certain online space suitable for a specific audience or topic. An example would be a ban on pornography in a space catering to minors, or the exclusion of car discussions in a forum dedicated to bird watching.

  • Censorship - moderation’s evil twin. It is basically the same thing, excluding certain topics from a space, but it implies a sinister motive. A political party forbids the opposition to be heard, a group of people is to be shut out and marginalized. Still, it is important to realise that censorship and moderation are often technically the same beast. The main difference is that censorship is often also covert and opaque, whereas good moderation is documented and transparent.

  • Mediation - this includes a vast variety of techniques, but it boils down to the effort of making friendly and productive communication possible. Mediation doesn’t prohibit a topic from being discussed, but tries to prevent an argument from devolving into a shouting match. This can be achieved by translating sentiments or ideas, by encouraging participants to use or avoid specific rhetoric, and so on. As with moderation, mediation has a benign intention; it is a tool to enable communication. It should be transparent insofar as everyone knows that mediation is happening, and with which intent.

  • Tone policing is mediation taken too far or misused. It is especially used by people in power to shut down those who aren’t. Valid arguments or complaints get thrown out unless the person making them submits to sometimes arbitrary rules. Mediation and tone policing employ a very similar toolset, but as with moderation and censorship, the malicious or sinister motive is what makes it a bad thing. To be clear: tone policing is a tool of oppression.

The problem with both of these pairings is the huge gray area between them. Just two very simple examples: one can try to gently mediate and end up excluding a group of marginalized people through tone policing. Or one can try to keep pornography away from kids, but in the same stroke also shut off any access to sex education.

Whenever we find ourselves in charge of designing policies for how any given social platform or online community should work, we need to think about these problems. Ideally alongside a friendly conversation with everyone who could be affected by these policies, especially those who are often marginalized or overlooked. There aren’t any perfect solutions, but there are a lot of catastrophically wrong ones.


Good analysis. My basic approach is: don’t rip anyone off, don’t harass others, don’t get us into legal trouble and don’t be a jerk. And yes, you do know what that means. And then… let ’er rip.


Has anyone thought about consensual tone deafness?

To explain what I mean, here’s a story from the OpenVillage Festival (#workspaces:lote6). A German guy (I think it was Henry, @emsone, from Wir bauen Zukunft) said something that fell foul of an American woman present there. She told him off, very publicly, as a white male neoliberal or something. We looked at each other in disbelief, for we had heard no such thing. The situation was solved by @nadia (bilingual, a native speaker of English and Swedish of very international background, and deft at navigating cultural contexts), who explained that the woman was trying to read Henry as she would read an American of his age and skin tone, and was picking up a subtext that simply was not there. She pointed out to the woman that he was doing all the work to communicate with her in her native language, and she, instead of being grateful (or learning German to improve their communication), was punishing him for not knowing the English language and the contemporary anglo-American culture like a native. She then asked her to leave.

Tone policing is often problematic, but even with mediation, it’s not clear to me who is qualified to do it in a context like Edgeryders. Few people have Nadia’s ability to cross cultural borders. And people like Henry, or me – we speak good English, even great English, but we cannot guarantee the fine command of subtext that we would have in our native languages. Native English speakers might get many false positives, and think we are being inconsiderate or even violent, while we are simply out of our linguistic and cultural comfort zone.

Hence, consensual tone deafness. A rule that says: to accommodate people who are operating outside their native language (in Edgeryders, that’s the majority, whichever language you choose), we don’t engage with anything that has not been said explicitly. If you are in doubt, ask: do you think what I wrote was dumb? Are you trying to make the point that [insert project/organization name] is a puppet of some vast neoliberal movement? Stuff like that.

@johncoate’s strategy of forbidding only the most obviously wrong stuff is in the same direction, I think.

This is where “oversupply understanding” comes in.

Dear Alberto!

I cannot recall a situation like the one you described. Might have been someone else.

However, I’ve experienced situations where my words were misinterpreted in a way that left me confronted with an accusation of being inappropriate (or even worse, abusive). And it happens irrespective of whether I’m speaking English or my mother tongue. In most cases this can be resolved by further communication. It depends on whether both parties are willing to resolve the issue and whether there’s an adequate amount of trust. And if there’s a mediation angel like Nadia around, chances are even better. Still, I don’t think this sort of situation can be prevented by a specific ruleset. Until we are all enlightened, we’re going to react here and there to emotions triggered by some words or actions of others. It’s part of the process :wink:

So I’d add to John’s list: don’t take yourself too seriously.

Hope you guys are doing well! Drop me a message when you get to Berlin. Would love to catch up.

Much love



I’m pretty sure it was you… maybe you are so enlightened that you ignored, and immediately forgot, the woman’s criticism. :slight_smile:

Do you remember the day, venue/room and the topic of the discussion? I’m keen to refresh my memory.

Henry Farkas

I think it might have been the harvesting session at the end of day 1, which was 2017-10-19. @nadia do you remember?

It was in some kind of plenary session. It was fleeting, lasted less than a minute.

I should also add that there always seem to be some people around who want to correct, indeed tone police, their peers in conversations, independent of any moderator action, and the conversation can sort of derail into a drawn-out debate on who is right, whether offense was given or taken, theories about intent, and other aspects. I see this as not to be avoided, but it can get so involved that the original discussion gets lost. It’s part of how a community arrives at its standards. Moderation in such cases requires not some Solomon-like judgement that cleanly deals with it all, but rather a kind of ongoing summing up, and often a nudge over to a place where it can be discussed on its own. I say nudge because forcing everyone to stick strictly to the topic in the header can be stifling in its own right.

The more I think about this subject the more convinced I am that what is really needed is much more design and building of the user-controlled experience.

I have done a lot of moderating in my life and managed systems that use moderation extensively. But it increasingly concerns me that ad-based business models for social media networks are in so much conflict with individuals controlling their own experience in those networks that the networks choose instead to do moderating that is essentially a rear-guard form of censorship. Or if not censorship, let us call it a continuation of the network’s control of the user’s experience, because they cannot relinquish that control without breaking their business model.

I certainly understand and sympathize with a statement like “let’s make all users safe.” But who decides what safe even means? And from the user perspective, I do not need a censor or even someone remotely resembling one. And in Facebook’s case, I do not trust them at all about anything. Why should I? But I think we all agree on that point. Not that I don’t use it; I do. And I think I know the trade-offs. However, I will never find out what it is they won’t show me, and I don’t like that.

So again, what I want is control over my own environment. If I don’t want to see something, I want to be the one to decide. If I don’t like someone’s tone, I’ll say so. Now, I do understand the problem of hate speech, and it is a thorny one. But again, who decides what I, an adult, can and cannot see?
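To make the idea of user-controlled filtering concrete, here is a minimal sketch of what it could look like: each reader keeps their own mute list and keyword filters, and the client hides matching posts locally, without any central moderator involved. All names and structures here are illustrative assumptions, not any real platform’s API.

```python
# A sketch of reader-side filtering: the platform shows everything,
# and each user decides locally what to hide. Names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Post:
    author: str
    text: str

@dataclass
class UserFilter:
    muted_authors: set = field(default_factory=set)
    muted_keywords: set = field(default_factory=set)

    def allows(self, post: Post) -> bool:
        """The reader's own rule: hide muted authors and muted keywords."""
        if post.author in self.muted_authors:
            return False
        lowered = post.text.lower()
        return not any(kw in lowered for kw in self.muted_keywords)

def visible_feed(posts, user_filter):
    """Return only the posts this particular reader has chosen to see."""
    return [p for p in posts if user_filter.allows(p)]
```

The point of the sketch is that the filter lives with the reader, not the network: two users with different mute lists see different feeds, and nobody’s post is deleted for anyone else.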

Hate speech is a definite problem, but where does one draw the line? One idea I take from my time managing a non-commercial public radio station is the idea of the “call to action.” Someone might say all kinds of derogatory things about others, and it would be offensive, but if they do not make a call to action, then perhaps it is best dealt with by the people in the conversation, who can hopefully ignore the person, filter them out, or just let them know that such statements say more about them than about the people they are criticizing.

Regarding kids, I managed an online community that was almost exclusively populated by teenagers, and we had to observe child protection laws very carefully. I support those laws, and there are ways to keep those environments pretty safe, but it is expensive to manage: you need a lot of on-the-ground moderation, plus sophisticated software for identifying problems, and you need parents who take responsibility for their children’s well-being and do not delegate that to surrogates.

Then there is the problem of scaling up to a very large user base. I admit that the systems I have managed never got that huge: not more than several thousand people. At those sizes, hands-on moderation from the company can be pretty effective. But when the site or platform scales up really large, how do you keep moderating decisions from becoming increasingly arbitrary? To me this lends more strength to the argument that more focus should be placed on giving the user more control, rather than devising more sophisticated ways for the company to manage the user experience.