No problem.
We have had countless discussions, trainings, and disagreements about AI. Let me go through a few points. I do not necessarily agree with all of them personally, but I think it is important to map the discussion inside the community.
The first thing is that most of us would agree that we already use AI in some form. Even Google Translate is a form of AI. So rather than pretending we can avoid it entirely, we need to think about where we can use it ethically, and which uses of it in writing, editing, and translating are ethical under current conditions.
Because we are such a diverse community in terms of age, culture, gender, media background, writing frequency, and so on, we have to include that diversity in our thinking. So we are constantly discussing it, and we also invite people from outside to share their perspectives.
One example is from our work with Indigenous languages, especially from Central America. As you know, “Mayan language” is really a plural category. These are not dialects that all speakers understand; they are a family of different languages.
We have had discussions with native speakers who are also activists and who want to write about their identity and about being underrepresented and discriminated against, often in communities with histories of cultural or other forms of genocide.
Some of them love AI, and they make a very valid point. They say: our languages are just beginning to be recognised. Some are only now gaining a written form. Some variants are privileged over others, even within the Mayan language family. We want to promote our language, which other people dismiss as useless economically or socially. We want to create engaging content. We have zero money and zero support from government or NGOs. If we do not use AI to produce content, we will not be relevant. We need AI because we have no economic means, and we want to show that this language is alive.
That is a very interesting perspective, and I fully respect it.
On the other hand, there is the darker side. I think this happened in Canada. There were anglophone publishers producing books in Indigenous languages using AI translation, and when the books were published, native speakers said the translations made no sense at all. So you have two extreme poles of the same discussion: it can go very well, or it can be a complete disaster.
As editors, we have more or less agreed on one thing: there will never be a final answer, and we need to revisit this conversation every month or at least every quarter, because the technology changes so fast. We need to keep up not only with the tools themselves, but with the discussions, the new problems, and the new solutions.
Personally, I am very suspicious of AI on two levels. First, on the level of writing articles. I absolutely reject the idea of AI writing a piece that someone then signs and submits as their own. I would denounce that in any professional environment I am part of.
The other issue is more controversial within the community. Some people argue that English is not their native language, and since they have decided to write in English, they will use AI as a kind of conversation partner to improve their thinking on the topic and even their writing. For me, that is not even a grey area. It is a no.
We have human editors. That is a privilege. We have the budget for that. So I would refuse that use of AI. Of course, how can you prove it? You really cannot. At some point you have to trust people. And sometimes it is obvious from the writing when something suddenly no longer sounds like the person.
Where I draw the line is this: the structure of the story, the main ideas, and especially the first draft should come from the human writer. Writers should not worry about language quality. That is why we have editors, including native speakers of English if the story is first published in English. The acceptable uses are things like spell check, which already uses AI, and maybe playing with AI for possible title suggestions. Even there I am not fully enthusiastic, but I can tolerate it as a game.
What I do not accept is when someone says they are having a conversation with ChatGPT to help develop the story. To me, that is already dangerous, because AI is deeply biased in terms of language, gender, representation, and so on. It introduces ideological discourse right at the core of writing the story, and I am very opposed to that.