Misinformation is playing an increasingly significant role on social media. Kamila Koronska, research officer at the University of Amsterdam, is working to counter this development through her research. “Unfortunately, it’s easier to inject chaos into the system than it is to remove it.”

As a researcher, Koronska is involved in the development of the Vera AI project, which aims to detect misinformation and serve as a verification tool. “Think, for example, of recognizing deepfakes or fake images,” she explains. “The tools we are developing are intended for use by journalists, researchers, and academics.”

The system is being developed through a multidisciplinary approach. “My colleagues focus on the technical side – they are training the system to recognize misinformation. I myself study the impact of disinformation on society. One method we're currently testing involves analyzing how deeply sanctioned news sources infiltrate mainstream media. This allows us to identify vulnerabilities in media systems across specific European countries. We refer to this technique as post-truth mapping.”

A Societal Issue

According to Koronska, the consequences of misinformation are significant. It affects how we perceive traditional media and can disrupt democratic processes. “In our research, we are analyzing the riots in the United Kingdom in 2024. These were the largest disturbances in years, and there are clear signs that they were fueled by disinformation. We are trying to trace how these riots started and how different groups responded to them.”

Identifying who is responsible for spreading fake news is a complex issue, Koronska says. “Of course, there are the people who create the posts, but how should we deal with the platforms where this content is shared? And what role do the algorithms that draw people in play?”

The spread of false information is also a societal problem, according to Koronska. “People seem to feel less responsible for what they say, and even for what they think. They are often willing to believe anything they find convenient or comforting, without considering the consequences of their susceptibility to certain viewpoints. The internet facilitates this – it functions as a public space but isn’t experienced as one, because everyone can remain anonymous,” Koronska says. “And then there’s something that almost seems like a law of nature: sadly, it’s easier to inject chaos into the system than it is to get it out.”

Taking Action

Nevertheless, Koronska believes there is plenty more that can be done to make people more resilient to disinformation. “Research shows that people who lack access to good information, or find it difficult to access, are more susceptible to consuming disinformation. Ensuring that people continue to have access to good, reliable information is therefore more important than ever.” She also points to the role of governments. “Policymakers must pressure social media platforms to take responsibility. Simply blocking accounts is treating the symptoms. We need to look at the entire infrastructure behind the spread of disinformation.”