I decided to share a piece I edited and co-authored recently for the European Science and Media Hub (link once it’s out). It’s quite entry-level, I’m well aware of that, but maybe some of the things we point out can be an interesting starting point for your debate. I also plan to write a short summary of what we discussed about AI in journalism in the European Parliament last week.
The other authors of this piece are Andrius Balciunas, Andrea Kocsis, Anna Udre and Borko Brunovic.
One of the fields where AI technologies can be used in newsrooms is tackling misinformation. However, fact-checking tools still require human supervision, and they seem to carry the risk of biases, mistakes and human misuse. To overcome misinformation, both journalists and teachers have a responsibility to educate the public on how to spot false narratives and stop them from spreading.
Are we alone in the fight against misinformation and fake news? Are there tools out there that can help journalists, or should we rather be scared of artificial intelligence in the media? Although it seems robots will not conquer our newsrooms in the near future, and we will not have an algorithmic army fighting online trolls, that does not mean AI tools are risk-free. We checked the pros and cons of automated fact-checkers. Do they work? And even if they do, can we be entirely sure that journalism bears the largest responsibility in the war on misinformation?
Artificial Intelligence (AI) flies airplanes, drives cars, writes news and forecasts the weather. It decides on life and death. Most inventions throughout history have provoked controversy - but not many of them have been as widely debated as artificial intelligence. There is still no clear definition of what AI is and what it does, but its existence opens up many questions: legal, technical and, above all, moral and ethical. What are its limits? Its achievements? And - perhaps most importantly - who controls it, and how?
The manipulations that AI allows have potentially devastating effects on society: fake news, fake reviews, fake videos evoking fake emotions. At the same time, it is a tool for creating better societies, a brighter future and meaningful jobs. And in the context of journalism, AI can be an ally of both the reader and the author - we invite you to learn about the available tools and their applications.
AI against misinformation
Journalists do not have to fight the battle against fake news alone. There are AI-based tools which can be used in newsrooms against misinformation. According to Mattia Peretti, Project Manager of JournalismAI at the LSE, the most frequently used are fact-checkers.
Fact-checking tools use rich databases of verified, high-quality and wide-ranging information. The algorithm is also fed with examples of debunked stories and the results of human fact-checkers’ work. According to Lucas Graves from the Reuters Institute, Oxford, development focuses on three aims: spotting false or questionable claims circulating in the media; verifying claims or stories, or facilitating their verification; and delivering corrections instantaneously, across different media, to audiences exposed to misinformation.
Lucas Graves: Understanding the Promise and Limits of Automated Fact-Checking. Reuters Institute Oxford. Factsheet 2018.
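To make the second aim concrete - matching a new claim against a database of previous fact-checks - here is a deliberately minimal sketch. Production systems such as ClaimBuster or Chequeado’s platform rely on trained language models; this toy version uses simple word-overlap similarity instead, and the claims, verdicts and threshold below are invented for illustration only.

```python
# Toy illustration of matching a claim against previously debunked claims.
# Real fact-checking systems use trained models; this sketch uses plain
# word-overlap (Jaccard) similarity. All data here is invented.

def tokens(text):
    """Lowercase word set, ignoring punctuation."""
    return set(''.join(c if c.isalnum() else ' ' for c in text.lower()).split())

def jaccard(a, b):
    """Word-overlap similarity between two claims, from 0.0 to 1.0."""
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def match_claim(claim, fact_check_db, threshold=0.5):
    """Return the best-matching previous fact-check, or None if too weak."""
    best = max(fact_check_db, key=lambda fc: jaccard(claim, fc["claim"]))
    return best if jaccard(claim, best["claim"]) >= threshold else None

fact_check_db = [
    {"claim": "vaccines cause autism", "verdict": "false"},
    {"claim": "the earth is flat", "verdict": "false"},
]

hit = match_claim("Do vaccines cause autism?", fact_check_db)
print(hit["verdict"] if hit else "no match")  # prints "false"
```

Note that even in this toy form, the output is only a *candidate* match: as the article stresses, a human fact-checker still reviews it before anything is published.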
At the moment, these fact-checkers are only half-automated. The databases used in these algorithms have displayed multiple examples of stereotypes and biases, and the tools are not capable of the judgment and sensitivity to context necessary to reliably establish the veracity of material. “I’m a bit skeptical about AI and fact-checking. Sometimes misinformation is not fake, it’s a tone, something very subtle,” admitted Guido Romeo, data journalist at FACTA. “I have never seen a machine grasp it.”
Brooke Borel, the author of The Chicago Guide to Fact-Checking, came to similar results in research where she pitted ClaimBuster against her own manual work. Although ClaimBuster found almost as many pieces of fake information and did so faster than she did, it missed some important claims which were only partially true and could be defined as misinformation. She cited the example of a sentence about climate change, a topic both political and scientific, which contained a subtle judgment: minimizing the impact of human activity on global warming. ClaimBuster was not able to point out the doubtful tone of the article.
However, AI also shows promising results in newsroom practice. “The New York Times uses Perspective, a Google tool, one of the best for comment moderation,” mentioned Stefan Hall of the World Economic Forum. He also highlighted the benefits of the MIT-based Cortico in reducing toxicity in the online environment, as well as Twitter’s own strategy to address disinformation, particularly bots and trolls. “I think Google’s Perspective is probably the most likely to be available in other countries,” Hall pointed out. “They have a human editor involved, so there is always someone who makes the ultimate decision. However, many companies are looking into that.”
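For a sense of how a tool like Perspective fits into a moderation workflow, here is a sketch that builds a request of the shape the Perspective API accepts and then applies a review threshold to a response. No network call is made, sending a real request needs an API key, and the 0.8 threshold and the sample score are invented; as Hall notes, the final call belongs to a human moderator either way.

```python
# Sketch of a Perspective-style comment moderation step. We only build the
# request payload and filter a (hand-made) response - no API call is sent.
# The threshold and example score are invented for illustration.
import json

def build_perspective_request(comment_text, attributes=("TOXICITY",)):
    """Build a JSON body of the shape Perspective's analyze endpoint accepts."""
    return {
        "comment": {"text": comment_text},
        "languages": ["en"],
        "requestedAttributes": {attr: {} for attr in attributes},
    }

def needs_review(response, threshold=0.8):
    """Flag a comment for a human moderator when its toxicity score is high."""
    score = response["attributeScores"]["TOXICITY"]["summaryScore"]["value"]
    return score >= threshold

payload = build_perspective_request("You are completely wrong about this.")
print(json.dumps(payload, indent=2))

# A response in the shape the API returns, with an invented score:
fake_response = {"attributeScores": {"TOXICITY": {"summaryScore": {"value": 0.91}}}}
print(needs_review(fake_response))  # True -> route to a human moderator
```

The point of the two-step design is exactly the one Hall makes: the algorithm only ranks and flags, while acceptance or removal stays a human decision.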
Wilfried Runde stressed the importance of the journalist’s judgment in the process: “the result will always be left with the journalist in the newsroom.” This applies to Truly Media, a tool developed three years ago by Deutsche Welle and the Athens Technology Center in Greece. The technology builds on external engines such as TinEye, which reverse-searches images. It is used, for example, by Amnesty International in its investigations, and in the European Parliament to verify scientific material. Its main purpose is to verify the authenticity of user-created content - but the final decision lies with the person who interprets the findings.
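TinEye’s actual matching algorithm is proprietary, but a common building block of reverse image search is perceptual hashing: an image is shrunk to a tiny grayscale grid, each pixel becomes one bit depending on whether it is above or below the mean brightness, and near-duplicate images then produce hashes that differ in only a few bits. The toy pixel grids below stand in for downscaled images and are invented for illustration.

```python
# Toy sketch of perceptual (average) hashing, a common building block of
# reverse image search. Tiny 2x2 grids stand in for downscaled images;
# real implementations typically use 8x8 or larger grids.

def average_hash(pixels):
    """Hash a grayscale grid: one bit per pixel, above/below the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return ''.join('1' if p > mean else '0' for p in flat)

def hamming(h1, h2):
    """Number of differing bits between two equal-length hashes."""
    return sum(a != b for a, b in zip(h1, h2))

original = [[10, 200], [200, 10]]      # stand-in for a downscaled image
recompressed = [[12, 198], [197, 11]]  # same image, slightly altered pixels
unrelated = [[200, 10], [10, 200]]     # a different image

h0 = average_hash(original)
print(hamming(h0, average_hash(recompressed)))  # 0 -> likely the same image
print(hamming(h0, average_hash(unrelated)))     # 4 -> a different image
```

This is why reverse search can find a photo even after recompression or resizing - and also why, as Runde says, a human must still interpret the match: a small Hamming distance is evidence, not proof.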
During the European Youth Science and Media Day, Mattia Peretti also mentioned Full Fact, a UK-based fact-checking charity. It monitors major newspapers and broadcast news, as well as parliamentary sources, using available subtitles and speech-to-text conversion.
Lucas Graves of the Reuters Institute, Oxford, described two other automated fact-checking projects: the Duke Reporters’ Lab and Chequeado. The Duke Reporters’ Lab is a hub at Duke University. It has developed Tech & Check Alerts, which helps journalists spot questionable claims in the local news and sends a list of them in a daily newsletter. Its other new project, FactStream, offers live fact-checking of major political events via a mobile app. Its first public test came during the 2018 State of the Union address, when reportedly more than 3,000 people used the app during the speech.
Finally, Chequeado is a fact-checking nonprofit based in Buenos Aires. In its current version, the program monitors presidential speeches and about 30 media outlets across Argentina to find claims to check. Another planned feature matches statements against previous fact-checks, and against official statistics, in order to automatically generate a draft for a human fact-checker to review. The platform will be shared with other fact-checking organizations in South America, and with news organizations interested in political fact-checking, but at the moment it is available only in Spanish.
There are many other AI-based fact-checkers available, such as ClaimBuster, Veracity.ai or Factmata. The technology is constantly improving, but will it ever become a reliable weapon against misinformation and fake news?
So, can we overcome misinformation, or is it here to stay? People working on hybrid threats and information warfare stress the role of education and awareness-raising in overcoming the threat.
After Russia illegally annexed Crimea and started a war against Ukraine in 2014, Ukrainian media professionals recognised how society had been manipulated with misinformation for many years. Their first response was to start debunking the myths, but the long-term strategy required educating young people. Media professionals started a project in which schoolchildren could learn to discern non-objective information.
The responsibility, though, lies not only with journalists. “People always assume that science journalists are experts in fake news and know how to change them. But that’s not true,” said Mićo Tatalović, Nature news editor and Chair of the Board of the Association of British Science Writers. “I don’t believe in the idea of media manipulating people. We are here to report, not to change people’s minds,” he responded when asked how science journalism can fight fake news. “There is already too much manipulation out there. It should be the role of education and teachers. They are the experts you need, not journalists.” Therefore, the knowledge must be passed to teachers as well.
This also applies to the use of AI in fighting misinformation. As Wilfried Runde of Deutsche Welle said, “publishers must be open about the AI tools that they are using in their work”. If a media outlet is using AI for editing, data analysis, preparing material and so on, it must be transparent about its tools and encourage the public to give feedback. This would benefit both the media and society - by showing how technology helps us create reliable content, spot misinformation and re-establish trust, which is crucial to democratic societies, but also by explaining why AI cannot replace human judgment or ethics, or the need for personal contact in acquiring sources and first-hand information.
Therefore, it seems that humans are not entirely alone in the war on fake news. Technology has already produced useful tools that facilitate journalists’ work, such as fact-checkers, and the industry keeps progressing. However, these options still require human supervision, and they seem to carry the risk of biases, mistakes and human misuse. So journalists are here to stay. But they must stay alert and show how our values of democracy, transparency and freedom of speech are being preserved in this highly dynamic, technologically enhanced environment.