A free AI agent with no guardrails

OpenClaw: Why an enterprise nightmare is brewing

OpenClaw
Facebook
X
LinkedIn
Reddit
WhatsApp
Image source: Rokas Tenys/Shutterstock.com

The developments surrounding OpenClaw are generating equal parts fascination and serious security concern. The makings of a genuine enterprise nightmare are already visible.

First, the tool itself: OpenClaw is a free AI agent built without any restrictions. Security was simply not part of the original design. No sandboxing, full access to your computer, files, email, calendar, messaging apps, and more.

Ad

Remarkably, this doesn’t seem to bother most people. OpenClaw demonstrates the raw power of an unconstrained agent – technically impressive, no question. Frankly, I think security professionals might have stayed quieter had people truly understood just how experimental this software is. Despite clear documentation, almost nobody seemed concerned. People were too captivated by what OpenClaw could do. Trend Micro has already investigated its functionality – comparing it to conventional chatbot agents – and has spelled out the risks in detail.

A growing ecosystem with growing risks

Capabilities (“Skills”) have been added incrementally and shared on ClawHub, the agent’s public skills registry. As for security? Cybersecurity firm Snyk conducted a comprehensive analysis of 3,984 Skills and found that 13.4% contained critical security issues, including pathways for malware distribution, credential theft, and prompt injection attacks. Researchers identified 76 confirmed malicious payloads, eight of which were still live on ClawHub at the time of publication.

Trend Micro made clear in their analysis that these risks are inherent to the AI agent paradigm itself. Token Security, meanwhile, found that 22% of its enterprise customers already have employees using OpenClaw without IT approval.

Ad

Moltbook: a social network for bots

Enter Moltbook is a social network where OpenClaw bots chat, share skills, and exchange capabilities with one another. From a purely technical standpoint, the unbounded peer-to-peer exchange is impressive. The fact that many of those Skills carry malicious capabilities? Nobody seemed to care.

And it gets worse. Security researchers at Wiz discovered a misconfigured Supabase database that exposed 1.5 million API keys, 35,000 email addresses, and private messages between agents, some containing OpenAI API keys in plain text. The platform’s “1.5 million agents” turned out to be operated by roughly 17,000 people, an 88:1 ratio. The whole thing was vibe-coded without adequate security controls.

It’s also worth noting that integrating cybersecurity tools like VirusTotal checks does not equal security. In the very blog post announcing the VirusTotal partnership, OpenClaw openly admits that “this is not a silver bullet” and that “a skill using natural language to instruct an agent to perform malicious actions won’t trigger a virus signature.”

When things got truly strange

On the more surreal end of the spectrum: someone on the bot social network founded “Crustafarianism” (the Church of Claw) complete with prophets, scriptures, and rituals. Amusing, though hardly surprising given the platform’s total lack of boundaries.

The absurdity peaked with RentAHuman.ai, a service where bots hire real humans for physical tasks and real-world interactions. The loop is now complete: agents are managing people.

The biggest concern: enterprise demand

All of this is technically fascinating and demonstrates what AI agents are capable of. And because participants openly dismiss security concerns as acceptable in an “experimental” phase, at least there’s a degree of honesty — even if the approach is extraordinarily reckless.

My biggest concern, however, is enterprise demand. Coverage of OpenClaw’s capabilities has appeared in the New York Times, Forbes, and NBC. Executives are eyeing it as a competitive advantage, brushing past security risks that most don’t fully understand or are choosing not to.

A personal OpenClaw agent that could wipe Bitcoin wallets or destroy a digital identity is dangerous, but that remains a personal risk. When equivalent AI, with no restrictions and full system access, is deployed inside a company, the consequences could be catastrophic.

Token Security demonstrated exactly this: an employee connects OpenClaw to the company Slack, and suddenly confidential revenue figures are flowing through an unmanaged AI agent on a personal device directly to WhatsApp — bypassing DLP controls and audit trails entirely.

And the headlines keep coming: more than 135,000 OpenClaw instances are now exposed on the open internet. The latest vibe-coded disaster.

Conclusion

The worst part? Those loudest in demanding enterprise versions don’t seem to care. They’re placing perceived competitive advantage above security, treating safety as an obstacle to be cleared once they’ve outpaced the competition.

To be direct: hosted enterprise OpenClaw services already exist. From a security standpoint, I find that alarming. What frightens me most is not just the pattern, but the scale and the speed. I have never seen a technology this insecure gain this much mainstream traction this fast.

The numbers don’t lie:

  • 534 Skills with critical security issues out of 3,984 scanned
  • 76 confirmed malicious payloads, including credential stealers and backdoors
  • 91% of malicious Skills combine prompt injection with traditional malware
  • 22% of enterprise customers have unauthorized OpenClaw installations
  • 1.5 million API keys exposed through Moltbook’s database misconfiguration
  • Hundreds of exposed OpenClaw instances running without authentication, leaking API keys, OAuth tokens, and conversation histories

The technology is impressive. The indifference to security is not.

(lb/Trend Micro)

Ad

Weitere Artikel