Social media toxicity can’t be fixed by changing the algorithms
Experiments involving AI chatbots interacting on a simulated social media platform suggest efforts to design out antagonistic user behaviour will not succeed
By Chris Stokel-Walker
12 August 2025
The polarising impact of social media isn’t just the result of bad algorithms – it is an inevitable consequence of how the platforms fundamentally work, a study with AI-generated users has found. This suggests the problem won’t be fixed unless we fundamentally reimagine the world of online communication.
Petter Törnberg at the University of Amsterdam in the Netherlands and his colleagues set up 500 AI chatbots designed to mimic a range of political beliefs in the US, based on the American National Election Studies Survey. Those bots, powered by the GPT-4o mini large language model, were then instructed to interact with one another on a simple social network the researchers had designed with no ads or algorithms.
During five runs of the experiment, each involving 10,000 actions, the AI agents tended to follow people with whom they shared political affiliations, while those with more partisan views gained more followers and reposts. Overall attention on the network likewise gravitated towards the more partisan posters.
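The dynamics described above can be illustrated with a toy agent-based model. This is not the researchers’ code – their agents were LLM-driven, not rule-based – but a minimal sketch assuming two mechanisms the article reports: homophily (agents follow ideologically similar agents) and partisan amplification (more extreme posts attract more reposts and visibility). The agent counts, action counts, probability formulas and variable names are all illustrative choices, not details from the study.

```python
import random

random.seed(42)

N_AGENTS = 50     # the study used 500 LLM-driven agents; fewer here for speed
N_ACTIONS = 2000  # the study ran 10,000 actions per experiment run

# Each agent has an ideology score in [-1, 1]; |ideology| stands in
# for partisanship. Follower sets and repost counts start empty.
agents = [{"ideology": random.uniform(-1, 1), "followers": set(), "reposts": 0}
          for _ in range(N_AGENTS)]

def step(agents):
    """One action: agent a sees a post by agent b and may follow or repost."""
    a, b = random.sample(range(len(agents)), 2)
    ia, ib = agents[a]["ideology"], agents[b]["ideology"]
    # Homophily: follow probability decays with ideological distance,
    # boosted by the poster's partisanship (assumed visibility bonus).
    p_follow = max(0.0, 1.0 - abs(ia - ib)) * (0.2 + 0.8 * abs(ib))
    if random.random() < p_follow:
        agents[b]["followers"].add(a)
    # Partisan amplification: more extreme posts are reposted more often.
    if random.random() < abs(ib):
        agents[b]["reposts"] += 1

for _ in range(N_ACTIONS):
    step(agents)

# Split agents into less- and more-partisan halves and compare outcomes.
ranked = sorted(agents, key=lambda ag: abs(ag["ideology"]))
mild, extreme = ranked[:N_AGENTS // 2], ranked[N_AGENTS // 2:]

def mean_followers(grp):
    return sum(len(ag["followers"]) for ag in grp) / len(grp)

def mean_reposts(grp):
    return sum(ag["reposts"] for ag in grp) / len(grp)

print(f"mild half:    followers={mean_followers(mild):.1f} "
      f"reposts={mean_reposts(mild):.1f}")
print(f"extreme half: followers={mean_followers(extreme):.1f} "
      f"reposts={mean_reposts(extreme):.1f}")
```

Even with these crude rules and no recommendation algorithm at all, the more partisan half of the population ends up with more followers and reposts – the qualitative pattern the study observed with LLM agents on an algorithm-free network.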
In a previous study, Törnberg and his colleagues explored whether simulated social networks with different algorithms could identify routes to tamp down political polarisation – but the new research seems to contradict their earlier findings.
“We were expecting this [polarisation] to be something that’s driven by algorithms,” Törnberg says. “[We thought] that the platforms are designed for this – to produce these outcomes – because they are designed to maximise engagement and to piss you off and so on.”