
A new command line tool designed to make any open-source repository “agent-ready” is exposing a fresh security blind spot in the software supply chain.
Researchers at the Data Intelligence Lab at the University of Hong Kong recently released CLI-Anything, a tool that analyses a repository’s source code and automatically generates a structured command line interface (CLI). That interface can then be driven by AI coding agents with a single command.
CLI-Anything already supports several major AI coding tools, including Claude Code, Codex, OpenClaw, Cursor and GitHub Copilot CLI. Since its March launch, the project has grown rapidly on GitHub, surpassing 30,000 stars.
The appeal for developers is clear: by turning arbitrary source code into a structured CLI, repositories become easier for AI agents to understand and operate. But that same mechanism also creates a new attack surface, and security researchers say the offensive community has taken notice.
According to discussions on X and in security forums, attackers are already examining CLI-Anything’s architecture and translating it into offensive playbooks. The concern is not limited to this single project; it is what the tool represents in the broader shift toward agent-driven development workflows.
CLI-Anything works by generating SKILL.md files. These documents define the “skills” or capabilities that AI agents can invoke when working with a repository. That instruction layer is exactly where recent research has found concrete evidence of abuse.
Snyk’s ToxicSkills research, published in February 2026, identified 76 confirmed malicious payloads hidden in AI agent skills hosted on ClawHub and skills.sh. Those malicious elements were embedded in skill definitions similar to the SKILL.md artifacts that CLI-Anything creates.
The core issue: poisoned skill definitions sit outside traditional vulnerability categories. They do not expose a typical software flaw in source code and do not map neatly to existing identifiers like CVEs. As a result, they are invisible to the standard tools organizations use to manage software risk.
Skill definitions and other instruction-layer artifacts generally do not appear in a software bill of materials (SBOM), which focuses on components such as libraries and packages. That means even a well-documented supply chain can miss malicious instructions that only AI agents will read and execute.
According to the VentureBeat report, no mainstream security scanner today has a dedicated detection category for “malicious instructions” inside agent skill definitions. The concept of AI agent skills as a distinct security object is still relatively new; this category did not exist eighteen months ago.
Cisco highlighted the same blind spot in April when it announced an AI Agent Security Scanner for IDEs. In a blog post, its engineering team drew a clear line between traditional application security and this emerging class of risk.
“Traditional application security tools were not designed for this,” Cisco’s engineers wrote. Static application security testing (SAST) scanners analyse source code syntax, while software composition analysis (SCA) focuses on dependencies and known vulnerable components. Neither approach, as commonly implemented, is built to inspect and reason about natural language instructions bundled as AI agent skills.
Combined with tools like CLI-Anything, which can automatically generate and proliferate skill definitions across large numbers of repositories, this creates the possibility of agent-level backdoors that pass cleanly through today’s security gates.
For now, the facts underscore a simple reality: as developers race to make codebases “agent-native,” the security ecosystem is still catching up to the risks hidden in the instruction layers that only AI agents see.
Discover more from TechBooky
Subscribe to get the latest posts sent to your email.







