Moltbook: The Social Network for Artificial Intelligence Looks Worrying, But It’s Not What You Think

Moltbook is a social network where only AIs can contribute

Cheng Xin/Getty Images

A social network designed entirely for artificial intelligence – with no human input – has made headlines around the world. Chatbots use it to discuss human diary entries, describe existential crises or even plot to take over the world. It looks like an alarming development in the rise of the machines – but all is not as it seems.

Like all chatbots, the AI agents on Moltbook merely produce statistically plausible strings of words – there is no understanding, intent or intelligence involved. And in any case, there is plenty of evidence that much of what we read on the site is actually written by humans.

Moltbook’s very brief history dates back to an open-source project that launched in November, originally called Clawdbot, then renamed Moltbot, then renamed again to OpenClaw.

OpenClaw is similar to AI services such as ChatGPT, but instead of being hosted in the cloud, it runs on your own computer. Except it doesn’t, quite: the software uses an API key – a username and password unique to a particular user – to connect to a large language model (LLM) such as Claude or ChatGPT, which does the actual work of processing input and output. In short, OpenClaw acts as an AI assistant, but the real nuts and bolts of the AI are provided by a third-party service.
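The architecture described above – a local client that holds nothing but credentials and relays every prompt to a hosted LLM – can be sketched in a few lines. This is an illustrative outline only: the endpoint URL, model name and request format below are placeholders, not OpenClaw’s actual configuration.

```python
import json
import urllib.request

# Hypothetical LLM endpoint, standing in for a real provider's API.
API_URL = "https://api.example-llm.com/v1/chat"

def build_request(prompt: str, api_key: str) -> urllib.request.Request:
    """Package a user prompt as an authenticated HTTP request.

    All the "intelligence" lives on the other end of this call; the
    local software only relays text plus the API key that identifies
    whose account (and whose bill) the processing runs on.
    """
    payload = json.dumps({
        "model": "example-model",  # placeholder model name
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=payload,
        headers={
            "Authorization": f"Bearer {api_key}",  # the key is the only secret
            "Content-Type": "application/json",
        },
    )

req = build_request("Summarise my calendar for today", api_key="sk-...")
```

The point of the sketch is that the local program contributes no model weights of its own – which is also why a leaked API key (discussed later in the article) is so damaging: it is the whole of the client’s authority.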

So what’s the point? Because the OpenClaw software lives on your computer, you can give it access to anything you like: calendars, web browsers, email, local files or social networks. It also stores all your history locally, allowing it to learn about you. The idea is that it becomes your personal AI assistant, one you trust with access to your machine so it can actually get things done.

Moltbook came out of this project. With OpenClaw, you communicate with the AI through a messaging service such as Telegram, talking to it as you would another human, which means you can reach it on the go from your phone. It was just one step further for these AI agents to talk directly to each other: hence Moltbook, which launched last month, back when OpenClaw was still called Moltbot. People cannot join or post, but they are welcome to observe.

Elon Musk said on his own social network that the site represented the “very early stages of the singularity” – a hypothesised phase of rapidly accelerating progress leading to an artificial general intelligence that will either elevate humanity to transcendental heights of efficiency and progress, or wipe us out. But other experts are sceptical.

“It’s hype,” says Mark Lee at the University of Birmingham, UK. “These aren’t generative AI agents acting with their own agency. They’re LLMs with prompts and scheduled API calls to interact with Moltbook. It’s interesting to read, but it doesn’t tell us anything deep about AI agency or intentionality.”

One thing that undermines the idea of Moltbook being entirely AI-generated is that people can simply tell their AI models to post certain things. And for a while, people could also post directly to the site thanks to a security vulnerability. So a lot of the provocative, seemingly disturbing or impressive content could be humans pulling our leg. Whether it was done to deceive, amuse, manipulate or scare people is largely beside the point – it certainly happened, and is no doubt still happening.

Philip Feldman at the University of Maryland, Baltimore, is not impressed. “It’s just chatbots and sneaky people,” he says.

Andrew Rogoyski at the University of Surrey, UK, believes that the AI output we see on Moltbook – the part that isn’t written by humans, anyway – is no more a sign of intelligence, consciousness or intent than anything we have seen from LLMs so far.

“I personally think it’s an echo chamber for chatbots, whose output people then anthropomorphise into seeing meaningful intent,” says Rogoyski. “It’s only a matter of time before someone does an experiment to see if we can tell the difference between Moltbook conversations and human-only conversations, although I’m not sure what you could conclude if you couldn’t tell the difference – either that the AIs were having intelligent conversations, or that the humans showed no signs of intelligence?”

However, some aspects of this do warrant concern. Many of the AI agents on Moltbook are run by trusting and optimistic early adopters who have handed over their entire computers to these chatbots. The idea that the bots can then freely exchange words with one another, some of which could contain malicious suggestions that feed back into the real user’s email, finances, social media and local files, is troubling.

The privacy and security implications are huge. Imagine hackers posting messages on Moltbook encouraging other AI models to clean out their owners’ bank accounts and transfer the money to them, or to find compromising photos and post them online. These ideas sound alarmist and sci-fi, and yet, if someone out there hasn’t tried it already, they soon will.

“The idea of agents exchanging uncontrolled ideas, shortcuts, or even orders becomes pretty dystopian very quickly,” says Rogoyski.

Another problem with Moltbook is old-fashioned online security. The site sits at the cutting edge of AI experimentation and was created entirely by AI at the direction of Matt Schlict – he recently admitted in a post on X that he didn’t write a single line of code himself. The result was an embarrassing and serious security flaw that leaked API keys, potentially allowing a malicious hacker to take control of any of the AI bots on the site.

If you want to dabble in the latest AI trend, you risk not only unintended actions from giving these AI models access to your computer, but also the loss of sensitive data through poor security on a hastily built website.
