Get your facts straight with AI

I decided to share a piece I edited and co-authored recently for the European Science and Media Hub (link once it’s out) - it’s quite entry-level, I’m well aware of that :slight_smile: But maybe some of the things we point out can be an interesting starting point for your debate. I also plan to write a short summary of what we discussed about AI in journalism in the European Parliament last week.

The other authors of this piece are Andrius Balciunas, Andrea Kocsis, Anna Udre and Borko Brunovic.

One of the fields where AI technologies can be used in newsrooms is tackling misinformation. However, fact-checking tools still require human supervision, and they seem to carry the risk of biases, mistakes and the possibility of human misuse. To overcome misinformation, both journalists and teachers have a responsibility to educate the public on how to spot false narratives and stop them from spreading.


Are we alone in the fight against misinformation and fake news? Are there tools out there that can help journalists, or should we rather be scared of artificial intelligence in the media? Although it seems robots will not conquer our newsrooms in the near future and we will not have an algorithmic army fighting online trolls, that does not mean AI tools are risk-free options. We checked the pros and cons of automated fact-checkers. Do they work? And even if they do, can we be entirely sure that journalism bears the largest responsibility in the war on misinformation?

Artificial Intelligence (AI) flies airplanes, drives cars, writes news and forecasts the weather. It decides on life and death. Most inventions throughout history have provoked controversy - but few have been as widely debated as artificial intelligence. There is still no clear definition of what AI is and what it does, but its existence opens up many questions: legal, technical and, above all, moral and ethical. What are its limits? Its achievements? And - perhaps most importantly - who controls it, and how?

The manipulations that AI allows have potentially devastating effects on society: fake news, fake reviews, fake videos evoking fake emotions. At the same time, it is a tool to create better societies, a brighter future and meaningful jobs. And in the context of journalism, AI can be an ally of both the reader and the author - we invite you to learn about the available tools and their applications.

AI against misinformation

Journalists do not have to fight the battle against fake news alone. There are AI-based tools which can be used in newsrooms against misinformation. According to Mattia Peretti, project manager of “Journalism AI” at the LSE, the most frequently used are fact-checkers.

Fact-checking tools use rich databases of verified, high-quality and wide-ranging information. The algorithm is also fed with examples of debunked stories and the results of human fact-checkers’ work. According to Lucas Graves of the Reuters Institute, Oxford, development focuses on three aims: spotting false or questionable claims circulating in the media; verifying claims or stories, or facilitating their verification; and delivering corrections instantaneously, across different media, to audiences exposed to misinformation.

Lucas Graves: Understanding the Promise and Limits of Automated Fact-Checking. Reuters Institute, University of Oxford. Factsheet, 2018.
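To make the first of Graves’ three aims concrete, claim spotting is typically framed as scoring sentences for “check-worthiness”. Below is a minimal toy sketch - real systems such as ClaimBuster use trained classifiers, and the cue words, weights and threshold here are my own invention, not any actual tool’s logic:

```python
import re

# Toy "check-worthiness" scorer. Real claim spotters use trained
# classifiers; these cue words and weights are invented for illustration.
CUE_WORDS = {"percent", "million", "billion", "increase", "decrease", "rate"}

def check_worthiness(sentence: str) -> float:
    """Score a sentence by crude surface cues: digits and statistical cue words."""
    tokens = re.findall(r"[a-z]+", sentence.lower())
    has_number = bool(re.search(r"\d", sentence))
    cue_hits = sum(1 for t in tokens if t in CUE_WORDS)
    return (1.0 if has_number else 0.0) + 0.5 * cue_hits

def spot_claims(text: str, threshold: float = 1.0) -> list:
    """Split text into sentences; keep those worth sending to a human fact-checker."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    return [s for s in sentences if check_worthiness(s) >= threshold]

claims = spot_claims(
    "Unemployment fell by 3 percent last year. The weather was lovely. "
    "The budget grew to 2 billion euros."
)
```

Even this crude sketch shows the limit discussed below: it can flag statistical statements, but a subtly misleading sentence with no numbers sails straight past it.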

At the moment, these fact-checkers are semi-automated. The databases used by these algorithms have displayed multiple examples of stereotypes and biases, and they lack the judgment and sensitivity to context necessary to reliably establish the veracity of material. “I’m a bit skeptical about AI and fact-checking. Sometimes misinformation is not fake, it’s a tone, something very subtle,” admitted Guido Romeo, data journalist at FACTA. “I have never seen a machine grasp it.”

Similar results came out of research by Brooke Borel (the author of The Chicago Guide to Fact-Checking), in which she tested ClaimBuster against her own manual work. Although ClaimBuster found almost the same amount of fake information, and did it faster than her, it missed some important information which was only partially true and could be defined as misinformation. She cited the example of a sentence about climate change, a topic both political and scientific, which contained a subtle judgment: minimizing the impact of human activity on global warming. ClaimBuster was not able to point out the doubtful tone of the article.

However, AI also has promising results in newsroom practice. “The New York Times uses ‘Perspective’, a Google tool, one of the best for comment moderation,” mentioned Stefan Hall of the World Economic Forum. He also highlighted the benefits of the MIT-based Portico in reducing toxicity in an online environment, as well as Twitter’s own strategy to try to address disinformation, particularly bots and trolls. “I think Google’s ‘Perspective’ is probably the most likely to be available in other countries,” pointed out Hall. “They have a human editor involved, so there is always someone who makes the ultimate decision. However, many companies are looking into that.”

Wilfried Runde stressed the importance of the journalist’s judgment in the process: “the result will always be left with the journalist in the newsroom.” This applies to Truly Media, a tool developed three years ago by Deutsche Welle and the Athens Technology Center in Greece. The technology builds on existing engines, such as TinEye, which reverse-searches images. It is used, for example, by Amnesty International in their investigations, or in the European Parliament to verify scientific material. Its main purpose is to verify the authenticity of user-created content - but the final decision lies with the person who interprets these findings.

Mattia Peretti also mentioned Full Fact, a UK-based fact-checking charity, during the European Youth Science and Media Day. It monitors major newspapers and broadcast news, as well as parliamentary sources, using the available subtitles and also speech-to-text conversion.

Lucas Graves of the Reuters Institute, Oxford, described two other automated fact-checking projects: the Duke Reporters’ Lab and Chequeado. The Duke Reporters’ Lab is a hub at Duke University. It has developed Tech & Check Alerts, which helps journalists spot questionable claims in local news and sends a list of them in a daily newsletter. Its other new project, FactStream, offers live fact-checking of major political events via a mobile app. Its first public test came during the 2018 State of the Union address, when reportedly more than 3,000 people used the app during the speech.

Finally, Chequeado is a fact-checking nonprofit based in Buenos Aires. In its current version, the program monitors presidential speeches and about 30 media outlets across Argentina to find claims to check. Another planned feature matches statements against previous fact-checks, and against official statistics, in order to automatically generate a draft for a human fact-checker to review. The platform will be shared with other fact-checking organizations in South America, and with news organizations interested in political fact-checking, but at the moment it is available only in Spanish.
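The claim-matching step in Chequeado’s planned feature - comparing new statements against previous fact-checks - can be illustrated with plain string similarity. The real system’s method is certainly more sophisticated; everything below, from the mini-archive to the threshold, is an invented sketch:

```python
from difflib import SequenceMatcher

# Invented mini-archive of previously checked statements and their verdicts.
FACT_CHECKS = {
    "unemployment fell to 7 percent last year": "False: official figures say 9.1 percent",
    "the province built 40 new schools": "True",
}

def similarity(a: str, b: str) -> float:
    """Crude textual similarity in [0, 1]."""
    return SequenceMatcher(None, a, b).ratio()

def match_claim(statement: str, threshold: float = 0.6):
    """Return (archived claim, verdict) for the closest match, or None."""
    statement = statement.lower().strip()
    claim, verdict = max(
        FACT_CHECKS.items(), key=lambda kv: similarity(statement, kv[0])
    )
    return (claim, verdict) if similarity(statement, claim) >= threshold else None
```

A matched statement could then pre-fill a draft for a human fact-checker to review, as described above; anything below the threshold simply falls through to manual checking.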

There are many other AI-based fact-checkers available, such as ClaimBuster or Factmata. The technology is constantly improving, but will it ever become a reliable solution against misinformation and fake news?

Solutions? Education

So, can we overcome misinformation, or is it here to stay? People working with hybrid threats and information warfare stress the role of education and raising awareness in overcoming this threat.

After Russia illegally annexed Crimea and started a war with Ukraine in 2014, Ukrainian media professionals saw how society had been manipulated with misinformation for many years. Their first response was to start debunking the myths, but the long-term strategy required educating young people. Media professionals started a project in which schoolchildren could learn to discern non-objective information.

The responsibility, though, does not lie with journalists alone. “People always assume that science journalists are experts in fake news and know how to counter them. But that’s not true,” said Mićo Tatalović, “Nature” news editor and Chair of the Board of the Association of British Science Writers. “I don’t believe in the idea of media manipulating people. We are here to report, not to change people’s minds,” he responded when asked about the ways science journalism can fight fake news. “There is already too much manipulation out there. It should be the role of education and teachers. They have the expertise you need, not journalists.” Therefore, the knowledge must be passed on to teachers as well.

This also applies to the use of AI in fighting misinformation. As Wilfried Runde of Deutsche Welle said, “publishers must be open about the AI tools that they are using in their work.” If a media outlet is using AI for editing, data analysis, preparing material, etc., it must be transparent about its tools and encourage the public to give feedback. This would benefit both the media and society - by showing how technology helps us create reliable content, spot misinformation and re-establish the trust that is crucial to democratic societies, but also by explaining why AI cannot replace human judgment or ethics, or the need for personal contact in acquiring sources and first-hand information.

Therefore, it seems that humans are not entirely alone in the fake news war. Technology has already produced useful tools for facilitating journalists’ work, such as fact-checkers, and the industry keeps progressing. However, these options still require human supervision, and they seem to carry the risk of biases, mistakes and the possibility of human misuse. So journalists are here to stay. But they must stay alert and show how our values of democracy, transparency and freedom of speech are being preserved in this highly dynamic, technologically enhanced environment.


Not sure… I have somewhat of a grudge against journalists who shirk the responsibilities of their role… It has been investigative journalism that has brought everyone to think of the activity as a fourth estate, and that justified a number of protections for the category, including the excess of creating protected professions out of it (e.g. in Italy), as if one needed to be professionalized to exercise the role…

…now the category seems very keen to rush to clickbait strategies, reporting first rather than reporting in depth, and overall journalism as a form of marketing, a paid ally of this or that power/mission…

No doubt education is part of the solution, and I have never heard anyone claiming otherwise… but journalists should live up to their roles, or be ready for expiration :angel:

Scientific bloggers (who are not living in the walled garden of the professions) are already competing on quality of analysis, and often doing a much better job than “journalists”… a few of them successfully transition to more traditional editorial endeavours, like book publishing… so it is achievable with a bit of effort :slight_smile: and people refusing to analyse their own profession are a bit like those men replying that feminists’ accounts are exaggerated because they have never directly witnessed this or that…


I agree with most of what you point out - and the way this particular profession and business evolve creates a whole set of new crazy problems.

I think what’s of key benefit and importance here is the possibility of using some of the new tools to help support one’s arguments - especially as some of the misinformation gets incredibly sophisticated, and things have to be dealt with at incredible speed. One of my favorite artistic groups, Forensic Architecture, uses a somewhat similar combination of multidisciplinary expertise and sophisticated tools to produce counterarguments against those in power - be it the EU, or Israel, or the palm oil industry.

I don’t think these tools justify laziness, or solve the problem of good journalism at a time when the industry runs on broken logic. But I still think we must be open to technologies that help do this job well.


If I understand you right, you are proposing that new tools to produce narratives, just like new artistic forms, can contribute to awakening journalism, and the public alike, from the spell they have fallen prey to…
That I would certainly agree with… they can be a force to be reckoned with… :+1:

Even art, though, has often become a tool of propaganda, and technology is just an accelerator, letting us do certain things more quickly and do more of them - sometimes qualitatively more, allowing us to invent new forms… but still, technology is just that: an accelerator trapped in the social context it is born from and used in… Tricking automatic fact-checking is extremely easy, just as adversarial attacks with generative approaches can trick the best-performing classifiers…
Curious humans establish causality chains, which ML does not touch upon, by design (here I would reference the work of Judea Pearl, and of Spyros Makridakis, for example)… and humans can design social discourses for fallibility, something which relying on fast machine-driven “fact-checking” seems to disincentivize…

So I guess I am completely on board with you about the provocation factor, and the value of working on that family of tools and approaches :slight_smile: … but I am somewhat cold (and I might have misunderstood here, and you may be cold about it too - I apologise in advance, I don’t mean to pin an opinion on you, I am just venting my own understanding of this interaction :sweat_smile: ) concerning the idea of championing that family of tools and approaches as part of the solution… to me they are provoking agents, just that :angel: .

We still have to push for journalism to have skin in the game again, and to devote time and resources to value creation in the form of investigation, and we have to rethink a lot of our interactions to accommodate uncertainty and healthy critical skepticism… :slight_smile:


“When we are drowning in information, the artist’s work is to make things visible”


When it becomes possible to automate certain tasks or hand them to AI, it is increasingly important to identify which ones cannot be handed over - but also how to properly finance those people who do the investigative and creative tasks we do not want done by algorithms.

In board meetings it is probably a lot easier to convince investors to buy a new piece of software, which is often a one-time investment, than to increase staff and their pay to make sure enough well-educated people are able to do the work.

So, how could those important human contributions - in journalism, for example, but in future probably also in education or art - be better evaluated, financed and argued for in the face of AIs that might be able to take care of some, but not all, of those tasks, and never to an absolute extent, but which will inevitably be advertised as such?

How can we balance the argumentative and regulatory structures so as to use the new tools without ever overusing them and dismissing the necessary human factor - and, by doing so, losing the necessary expertise and balance?


You raise an important point… the way boards think about investments…
…It is common practice to account for the purchase of a machine by modelling the amortisation of the investment over the next few cycles of activity, and if the operation is done well they can later account for residual values… in practice, this is an exercise in projections: they expect to absorb the purchase costs by undercutting the margins of a number of operational cycles, and then expect the future cycles to be purely margin-producing, as far as that component is concerned…

On the other hand, personnel is a “liability” whose costs extend life-long, unless said personnel can be demoted to “menial” functions (as happens e.g. with AI adoption, when the human task is limited to data curation) and consequently kept outside of the structure and only paid per task execution, or per fraction of a cycle…

Now the truth, rather well known to most investors and board members, is that the above amortisation projections are fictional, since they rarely account realistically for maintenance and upgrades (one has to remain competitive, in rhetoric at least), and the real costs of technology are much higher than what was approved… However, they offer the opportunity to play with finance, via the discourse of residuals and margins, and most importantly of all, to cut the liability represented by personnel by transforming it into an externality…

However, the accounting of personnel is just as partial as the discourse on the amortisation of purchases: personnel grows qualitatively, often exponentially relative to the investments in continuing education, and can be part of learning organisations - which is not true of machine learning (let us remember that today this is very far from the inflated, hyped general artificial intelligence many imagine when thinking about deep learning and other machine learning approaches).

The challenge for us (and this goes beyond journalism) is how to change this culture of accounting and management. Our challengers are human resource offices that essentially live by framing candidates and employees in cells constructed from titles and certificates, financial and accounting offices tasked with maximising short-term returns and producing histories of efficiency (as opposed to effectiveness), and technologists/solutionists who have built entire consultancy/entrepreneurial careers on simply satisfying those counterparts, conflating their rhetorics, and lobbying for all this. It’s a scary inertia.

Some economists have started: William H. Janeway is attacking the supremacy of efficiency, pushing effectiveness back into the limelight; Mariana Mazzucato is attacking the entire language of value, pushing back the tsunami of finance that has kept hold of our economies since the 1970s…
But we can work to speed some things up, and one way would be proposing an alternative frame for accounting, where the emphasis of what is measured is put on effectiveness and value creation… Automation would still find a place, rightly so (for the reasons we each mentioned above), but it would not get co-opted to satisfy those toxic logics… at least where the frame is adopted :angel:

…but I am hijacking the thread now :stuck_out_tongue:


@markomanka Maybe you could start another thread on the challenge of changing the culture of accounting and explain the theories of William H. Janeway and Mariana Mazzucato a bit further? I think that would be a very relevant and valuable discussion for many issues we are trying to tackle on this platform.

Writing from a white page tends not to be my cup of tea… but I will consider the option.
Till the next weekend :slight_smile:

Hi @markomanka @MariaEuler - here is a video of a talk by Mazzucato; maybe that’s a way to get others into a theories-of-value discussion.

Art was born as a tool of propaganda – celebrate the king, or the Church. Michelangelo painted, but Julius II was paying, and he made sure he got what he wanted for his money. Art has many wonderful features, but independence from power is not one of them. Immortal masterpieces of the past start with an apology to some minor noble, who had the chief merit of paying for the artist’s upkeep.

A partial exception is art for the market. Verdi needed no patron, because he had a publisher (Ricordi) and he could fill the theater. But then, of course, you get commercial art, which also has many wonderful features, but has its own collective patron to keep happy.

And while we are at it, science also gravitated to power, and was fully captured in its orbit by the mid 19th century. :slight_smile:


Interesting point of view… although by that narrative, the art born as propaganda was already, in reality, art for the market, since the artist had to pitch ideas and drafts for investment… Also, more recent forms of art for the market have been heavily subsidised for propaganda purposes (e.g. the CIA and pop art)…
Art, however, was not born in the Middle Ages - it actually predates the historic era altogether, born primarily of the desire of humans to give shape to their reflections on themselves and the world… and “minor” forms of art have always existed in parallel to the idolised-and-paid-for forms, those minor forms being the ones to look at, at each time, for art as a reflection on reality… until they were eventually acknowledged, made mainstream and paid for, and new forms would start appearing :slight_smile:

…somewhat similarly for science, hence Kuhn’s peculiar description of its evolutionary dynamics…
Science, like art, has a very peculiar way of periodically giving rise to new questions and fields (this recent piece is quite interesting on the topic: “Research: Publication bias and the canonization of false facts”, eLife), and we could argue that Feyerabend and Kuhn should be cited much more than Popper is when talking about it… but I digress :angel:
One thing I could say of art and science - one thing similar to what was said above of art and journalism - is that several times in history it was art that first captured a phenomenon, and only later did science try to make sense of it, with the same tools art had used…


Talking about AI and news, did you see this initiative?

The principle is straightforward: we pick a topic that is in the news and scan a selection of 50 sources ranging from large publications to small, specialized outlets. We collect a few hundred articles that are parsed through our home-brewed scoring algorithm. The model returns a spreadsheet that contains the URL of the story, the headline, the source, the word-count of the article, and the score. Then, our editor, Christopher Brennan manually removes any “noise,” typically false positive articles that are off-topic. He will also check for oddities, like a 3,000-word piece scoring only 1.8 (it is usually a large multi-topic news wrap up), or a 500-word article scoring a 4.1 (it could be a well-angled short piece from Quartz or Axios; it took us months to remove the misleading correlation of length and quality…). Finally, he will write a short text introducing the topic of the week and after a few checks from the team, we hit the “send” button.
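The editorial sanity check in that workflow - flagging long pieces with suspiciously low scores and short pieces with suspiciously high ones - can be sketched roughly like this. The field names, score range and thresholds below are my own assumptions for illustration, not the project’s actual code:

```python
from dataclasses import dataclass

@dataclass
class Article:
    url: str
    headline: str
    source: str
    word_count: int
    score: float  # quality score from the (here imaginary) model

def flag_oddities(articles, low=2.0, high=4.0, long_words=3000, short_words=500):
    """Mimic the editor's sanity check: long pieces scoring low, and short
    pieces scoring high, deserve a manual look before the newsletter ships."""
    odd = []
    for a in articles:
        if a.word_count >= long_words and a.score <= low:
            odd.append((a, "long but low score - multi-topic news wrap-up?"))
        elif a.word_count <= short_words and a.score >= high:
            odd.append((a, "short but high score - well-angled short piece?"))
    return odd

odd = flag_oddities([
    Article("https://example.com/a", "Week in review", "WireX", 3200, 1.8),
    Article("https://example.com/b", "Sharp take", "Quartz", 480, 4.1),
    Article("https://example.com/c", "Feature", "Daily", 900, 3.0),
])
```

Everything a rule like this flags still goes to a human, which matches the quote: the algorithm narrows the pile, the editor makes the call.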


To the point about education: what seems to have fallen out of favor is the emphasis on teaching critical thinking. It isn’t enough to tell people to be more mindful now that so much fakery is around us. I think it takes time to train oneself to question, and to know what to question, and how. My brother was a middle school teacher for more than 30 years, and year by year that kind of education diminished while “teaching to the test” became more important. It caused him to retire early, because that was not what he got into teaching for.


If the beginning of art was the cave paintings, then it was most likely not really born as propaganda, but as a very pure tool for human expression.

mmm, I think they were functional - something to do with transcendence, which btw I think I heard was also the function of using gold leaf in church decoration - functional art before it became expressive/decorative…

You are both right, I stand corrected. Let me reframe: art, as an activity, has its own beginnings and drivers. The existence of artists, a professional élite devoted to making art, can only arise when there is some surplus (someone has got to grow the food that the artists eat), and somebody who can commandeer such surplus in support of artists. Hence the close alignment of artists with powerful and rich people. The same can be said for professional scholars (but not learning per se), professional warriors (but not fighting per se) etc.

@natalia_skoczylas Just out from Witness Media Lab: “How do we work together to detect AI-manipulated media? Synthesis Report: Expert meeting and knowledge exchange between i) leading researchers in media forensics and detection of deepfakes and other new forms of AI-based media manipulation and ii) leading experts in social newsgathering, UGC and OSINT verification and fact-checking.”