Swarms of artificial intelligence (AI) agents could soon invade social media platforms en masse to spread fake stories, harass users and undermine democracy, researchers warn.
These "AI swarms" would form part of a new frontier in information warfare, capable of mimicking human behavior to evade detection while creating the illusion of authentic online movements, according to a commentary published Jan. 22 in the journal Science.
"People, generally speaking, are conformist," co-author Jonas Kunst, a professor of communication at BI Norwegian Business School in Norway, told Live Science. "We often don't like to admit it, and people differ to some degree, but all things being equal, we tend to believe that what most people do has some value. That's something these swarms can exploit relatively easily."
And for those who don't follow the herd, a swarm could also serve as a harassment tool that discourages arguments undermining the AI-driven narrative, the researchers argued. For example, a swarm could mimic an angry mob to target individuals with dissenting views and drive them off a platform.
The researchers do not provide a timeline for the arrival of AI swarms, so it is unclear when the first agents might turn up in our feeds. However, they noted that swarms would be difficult to detect, so the extent to which they may have already been deployed is unknown. For many, signs of bots' growing influence on social networks are already obvious, and the "dead internet" conspiracy theory — the idea that bots are responsible for most online activity and content creation — has been gaining traction over the past few years.
Shepherding the flock
The researchers warn that the emerging risk of an AI swarm is compounded by long-standing vulnerabilities in our digital ecosystems, already weakened by what they describe as “an erosion of rational-critical discourse and a lack of shared reality among citizens.”
Anyone who uses social media knows that it has become a very divisive place. The online ecosystem is also already littered with automated bots — non-human accounts run by computer software — which account for more than half of all web traffic. Conventional bots are usually only able to perform simple tasks over and over again, such as sending the same incendiary message. They can still do harm, spreading false information and inflating false stories, but they are usually quite easy to detect and rely on people to coordinate them at scale.
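To give a rough sense of why such repetitive bots are easy to spot, a defender can simply flag accounts that post the same message again and again. The sketch below is a hypothetical illustration, not any platform's actual detection system; the data, account names and threshold are invented.

```python
from collections import defaultdict

def flag_copy_paste_bots(posts, min_duplicates=5):
    """Flag accounts that repeatedly post near-identical text.

    posts: list of (account_id, message) tuples.
    Returns the set of account IDs that posted any single message at least
    `min_duplicates` times -- a crude signature of a script-driven bot.
    """
    counts = defaultdict(int)
    for account, message in posts:
        counts[(account, message.strip().lower())] += 1
    return {account for (account, _), n in counts.items() if n >= min_duplicates}

# Hypothetical traffic: one account blasting the same incendiary line six times.
sample = [("bot_42", "Share this before they delete it!")] * 6
print(flag_copy_paste_bots(sample))  # {'bot_42'}
```

A check this simple would obviously miss anything more sophisticated, which is the point the researchers make about next-generation swarms.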
Next-generation AI swarms, on the other hand, are coordinated by large language models (LLMs) — the AI systems behind popular chatbots. With an LLM at the helm, a swarm would be sophisticated enough to adapt to the online communities it infiltrates, deploying collections of distinct personas that maintain memory and identity over time, the commentary says.
“We’re talking about it as a kind of organism that’s self-sufficient, that can coordinate, that can learn, that can adapt over time, and that’s why it specializes in exploiting human vulnerabilities,” Kunst said.
This kind of mass manipulation is far from hypothetical. Last year, Reddit threatened legal action against researchers who used AI chatbots in an experiment to sway the opinions of the roughly four million users of its popular r/changemyview forum. According to the researchers' preliminary findings, their chatbots' responses were three to six times more persuasive than responses from human users.
A swarm could contain hundreds, thousands — or even a million — AI agents. Kunst noted that the number scales with computing power and would also be limited by any restrictions social media companies put in place to combat swarms.
But it's not all about the number of agents. Swarms could target small community groups that would be suspicious of a sudden influx of new users; in that scenario, only a few agents would be deployed. The researchers also noted that because swarms are more sophisticated than traditional bots, they can have more impact with fewer agents.
"I think the more sophisticated these bots are, the fewer you actually need," lead author Daniel Schroeder, a researcher at the technology research organization SINTEF in Norway, told Live Science.
Protecting against next-generation bots
Agents have a built-in advantage over real users in online discussions, as they can post 24/7 for as long as it takes for their narrative to take hold. In a "cognitive war," the researchers added, AI's relentlessness and persistence can be weaponized against limited human effort.
Social media companies want real users on their platforms, not AI agents, so researchers predict companies will respond to AI swarms with improved account authentication — forcing users to prove they’re real people. However, the researchers also highlighted some problems with the approach, saying it could discourage political dissent in countries where people rely on anonymity to speak out against governments. Authentic accounts can also be hijacked or acquired, further complicating the situation. Still, the researchers noted that strengthening authentication would make things more difficult and expensive for those looking to deploy AI swarms.
The researchers also proposed additional protections against swarms, such as scanning live traffic for statistically anomalous patterns that could represent AI swarms, and establishing an “AI Influence Observatory” ecosystem in which academic groups, NGOs, and other institutions can study, raise awareness, and respond to the threat of AI swarms. Basically, scientists want to get in front of the problem before it can disrupt elections and other big events.
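The commentary does not specify how such traffic scanning would work. As one hedged illustration, a monitor might look for accounts whose posting cadence is too regular or too synchronized to be human. The sketch below is a simplified example under that assumption; the function names, thresholds and sample data are invented for illustration.

```python
import statistics

def regularity_score(timestamps):
    """Coefficient of variation of the gaps between an account's posts.

    Human posting gaps tend to be irregular (score near or above 1);
    naively scheduled agents often post at near-fixed intervals (score near 0).
    """
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if len(gaps) < 2 or statistics.mean(gaps) == 0:
        return None  # not enough data to judge
    return statistics.stdev(gaps) / statistics.mean(gaps)

def flag_suspiciously_regular(accounts, threshold=0.1):
    """Return account IDs whose posting cadence looks anomalously machine-like."""
    flagged = []
    for account_id, timestamps in accounts.items():
        score = regularity_score(sorted(timestamps))
        if score is not None and score < threshold:
            flagged.append(account_id)
    return flagged

# Hypothetical traffic (timestamps in seconds): one account posts exactly every
# 300 seconds, the other at irregular, human-looking intervals.
traffic = {
    "agent_007": [0, 300, 600, 900, 1200, 1500],
    "human_alice": [0, 412, 950, 2890, 3001, 7200],
}
print(flag_suspiciously_regular(traffic))  # ['agent_007']
```

In practice, a platform would need to combine many such signals — content similarity, network structure, synchronization across accounts — since a sophisticated swarm could easily randomize its timing to defeat any single test.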
"We are warning with reasonable certainty of future developments that could indeed have disproportionate consequences for democracy, and we need to start preparing for that," Kunst said. "We need to be proactive rather than waiting for the first major events to be negatively impacted by AI swarms."