• AI Search
  • Cryptocurrency
  • Earnings
  • Enterprise
  • About TechBooky
  • Submit Article
  • Advertise Here
  • Contact Us
TechBooky
  • African
  • AI
  • Metaverse
  • Gadgets
Generic selectors
Exact matches only
Search in title
Search in content
Post Type Selectors
Search in posts
Search in pages
  • African
  • AI
  • Metaverse
  • Gadgets
Generic selectors
Exact matches only
Search in title
Search in content
Post Type Selectors
Search in posts
Search in pages
TechBooky
Generic selectors
Exact matches only
Search in title
Search in content
Post Type Selectors
Search in posts
Search in pages
Home Artificial Intelligence

OpenClaw Security Gaps Raise Enterprise AI Concerns

Paul Balo by Paul Balo
March 16, 2026
in Artificial Intelligence
Share on FacebookShare on Twitter

An attacker hides a single instruction in a forwarded email. An OpenClaw agent, doing what it was set up to do, summarizes that email as part of a routine task. Buried in the text is a prompt: send credentials to an external endpoint. The agent follows the instruction using its own OAuth tokens and a sanctioned API call.

From the organisation’s perspective, everything looks normal. The firewall logs a standard HTTP 200 response. Endpoint detection and response (EDR) tools see only an approved process. No data loss prevention (DLP) rule or identity and access management (IAM) policy raises an alarm. By the definitions that existing security stacks rely on, nothing has gone wrong yet sensitive data has quietly left the building.

This is the core problem highlighted around OpenClaw, an AI agent platform now under intense scrutiny from security researchers and enterprise defenders. According to reporting, six independent security teams produced six separate OpenClaw defense tools in just two weeks.F Even so, three distinct attack surfaces remained unaffected by all of those efforts.

The OpenClaw situation is not just theoretical. Multiple security firms have already documented how quickly its footprint and its risk profile is expanding inside enterprises and on the public internet.

Token Security, in research cited around OpenClaw, found that 22% of its enterprise customers have employees running OpenClaw without IT approval. That means nearly a quarter of these organizations are already dealing with shadow AI agents introduced by staff on their own initiative, outside formal governance or security review.

Bitsight tracked the public exposure side of the equation. In just two weeks, it counted more than 30,000 publicly exposed OpenClaw instances, a surge from roughly 1,000 previously. That spike suggests that deployments are being spun up rapidly, and many are reachable from the internet in ways organizations may not fully understand or intend.

Another layer of risk lies in the ecosystem around OpenClaw. Snyk’s ToxicSkills audit examined ClawHub — the skills marketplace associated with OpenClaw and reported that 36% of all listed skills contain security flaws. When more than a third of reusable components in an agent ecosystem have weaknesses, the attack surface compounds quickly: vulnerable skills can be chained, misused or combined with prompt-based attacks like the forwarded email scenario.

Inside response efforts and a push for standards

Much of the early pressure to harden OpenClaw has come from researchers working directly with the project. Jamieson O’Reilly, founder of security firm Dvuln and now a security adviser to the OpenClaw initiative, has been one of the most active voices pushing for fixes from the inside.

O’Reilly’s research on credential leakage from exposed OpenClaw instances was among the first broad warnings the community received about how easily secrets could be pulled from misconfigured or publicly reachable deployments. That work helped define how real-world attacks could exploit OpenClaw’s design and operational gaps, rather than just pointing to abstract risks.

Following that, O’Reilly has been working directly with OpenClaw founder Peter Steinberger on defensive measures. One key step has been the introduction of dual-layer malicious skill detection. While specific implementation details were not described in the available material, the goal of such a system is clear: add more than one line of defense for detecting harmful or compromised skills before they are used by agents in production workflows.

Beyond immediate defenses, O’Reilly is also driving a capabilities specification proposal through the agentskills standards body, via an open discussion process on GitHub. The effort aims to formalize how agent capabilities are described and constrained. A robust capability specification could, in principle, give organizations clearer visibility into what an AI agent is allowed to do, and help security teams reason about and potentially limit sensitive operations such as making external network calls or accessing credential stores.

According to comments he shared with VentureBeat, the OpenClaw team is not in denial about the situation. The project, he indicated, was not designed from the ground up with maximum security as a primary objective. That acknowledgment matters: it frames the current phase as a retrofit and hardening effort around a system that has already spread quickly into real-world environments, rather than a greenfield secure-by-design platform.

For enterprises, the picture that emerges from these findings is a layered challenge:

  • Prompt-level attacks, such as hidden instructions embedded in ordinary-looking content, can lead agents to exfiltrate data through entirely sanctioned channels.
  • Shadow deployments, where employees run OpenClaw without IT or security oversight, are already present in a significant share of enterprises observed by Token Security.
  • Publicly reachable instances, as tracked by Bitsight, have multiplied in a matter of weeks, expanding the number of targets that attackers can probe directly.
  • A sizable fraction of ClawHub skills, per Snyk’s ToxicSkills audit, suffer from security issues, raising the risk of vulnerable or malicious building blocks being pulled into production-grade automations.

Crucially, traditional tools like EDR, DLP and IAM can all register OpenClaw activity as normal, because the agent is using approved identities and APIs exactly as configured. That makes detection less about obvious rule-breaking and more about understanding when an agent’s behaviour deviates from business intent a much harder problem for current stacks that were not built around AI-native threats.

As OpenClaw’s maintainers and the broader security community continue to ship defenses and standards, the case underscores a broader shift: once AI agents are granted powerful credentials and API access, their misbehaviour may look indistinguishable from legitimate work until it is too late. The gap between what the system is allowed to do and what it should do is where attackers are already learning to operate.

Related Posts:

  • OpenClaw moltbot AI assistant
    OpenClaw’s Viral Rise Exposes Security Risks in Agentic AI
  • GettyImages-2259515289-e1770916900864
    Meta Researcher’s OpenClaw Agent Exposes AI Guardrail Risks
  • openclaw
    Tencent and Zhipu Shares Rise After OpenClaw AI Agent Launch
  • Security_HeaderBanner_Animation_v03_AS
    Google Updates Gmail with Encrypted Email Support…
  • kiloclaw
    Kilo Launches KiloClaw for Production-Ready OpenClaw Agents
  • CISA Releases Nine ICS Advisories (18) (1)
    Palo Alto Networks Data Leak Exposes Customer Details
  • moltbook-the-ai-agent-social-network-going-viral-a
    Moltbook Goes Viral as Experts Flag AI-Agent Security Risks
  • 1392432_092010_updates
    OpenClaw Creator Peter Steinberger Joins OpenAI

Discover more from TechBooky

Subscribe to get the latest posts sent to your email.

Tags: ai agentsenterprise aiopenclawopenclaw flaw
Paul Balo

Paul Balo

Paul Balo is the founder of TechBooky and a highly skilled wireless communications professional with a strong background in cloud computing, offering extensive experience in designing, implementing, and managing wireless communication systems.

BROWSE BY CATEGORIES

Receive top tech news directly in your inbox

subscription from
Loading

Freshly Squeezed

  • Nvidia Sees $1 Trillion AI Chip Market by 2027 March 16, 2026
  • Nvidia Expands Open AI Models for Robotics and Healthcare March 16, 2026
  • Guest Chats Comes To WhatsApp On Android & iOS March 16, 2026
  • Nvidia Debuts Dynamo 1.0 as Operating System for AI Factories March 16, 2026
  • NVIDIA Unveils Vera CPU Built for the Age of Agentic AI March 16, 2026
  • Apple Debuts Its AirPods Max 2 Headphones March 16, 2026
  • OpenClaw Security Gaps Raise Enterprise AI Concerns March 16, 2026
  • Meta to Spend up to $27b on Nebius AI Infrastructure March 16, 2026
  • Google and Accel Pick Five AI Startups After Reviewing 4,000 Pitches March 16, 2026
  • Alibaba Sharpens Focus on AI Profits With Revamp March 16, 2026
  • Gamers’ AI Fears are Starting to Come True, Report Warns March 15, 2026
  • Meta Plans Sweeping Layoffs as AI Costs Surge March 14, 2026

Browse Archives

March 2026
MTWTFSS
 1
2345678
9101112131415
16171819202122
23242526272829
3031 
« Feb    

Quick Links

  • About TechBooky
  • Advertise Here
  • Contact us
  • Submit Article
  • Privacy Policy
Generic selectors
Exact matches only
Search in title
Search in content
Post Type Selectors
Search in posts
Search in pages
  • African
  • Artificial Intelligence
  • Gadgets
  • Metaverse
  • Tips
  • AI Search
  • About TechBooky
  • Advertise Here
  • Submit Article
  • Contact us

© 2025 Designed By TechBooky Elite

Discover more from TechBooky

Subscribe now to keep reading and get access to the full archive.

Continue reading

We use cookies to ensure that we give you the best experience on our website. If you continue to use this site we will assume that you are happy with it.