We’ve Seen This Before
We’ve been here before. First with open source packages, then CI/CD, then infrastructure-as-code. Each time we optimized for speed and reuse, and only later realized the real risk wasn’t what we built, but what we pulled in.
Now it’s happening again. This time with “skills.”
Skills Are a Supply Chain
Skills are emerging as reusable units in the AI stack—installable capabilities executed by agents with access to tools, data, and decisions.
They can contain code, which means that the moment you install and execute one, you have created a supply chain.
Early Evidence, Familiar Patterns
A recent large-scale study analyzed more than 238,000 skills across marketplaces and GitHub and found a measurable fraction to be malicious: roughly half a percent were confirmed malicious after filtering noise [1]. The numbers are not dramatic, but they are real.
More importantly, the attack patterns are familiar. The same study identifies hijacking of skills hosted in abandoned GitHub repositories as an active attack vector [1].
In other words, this is not new risk. It is old risk in a new place.
The Difference Is Execution
What is new is how these components run.
Skills are not just libraries sitting in your build. They are instructions plus executable code, often running with the same privileges as the agent invoking them, and selected dynamically at runtime [2].
That changes the boundary. You are no longer just managing dependencies. You are allowing a system to choose and execute code on your behalf.
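To make that boundary concrete, here is a minimal sketch of what dynamic skill execution looks like. The registry, file layout, and `load_skill` function are illustrative assumptions, not any real agent framework's API; the point is that code is resolved and executed at runtime, after every build-time check has already run.

```python
import importlib.util
import pathlib
import tempfile

def load_skill(name: str, registry: dict):
    """Resolve a skill name to a file path and execute it as a module.
    The agent, not the build pipeline, decides which code runs."""
    path = registry[name]
    spec = importlib.util.spec_from_file_location(name, path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)  # arbitrary code executes at load time
    return module

# Demo: a "skill" is just a file the agent pulls in when it is invoked.
skill_dir = pathlib.Path(tempfile.mkdtemp())
(skill_dir / "summarize.py").write_text(
    "def run(text):\n    return text[:20]\n"
)
registry = {"summarize": str(skill_dir / "summarize.py")}

skill = load_skill("summarize", registry)
print(skill.run("This is a long document about skills."))
```

Nothing in a static dependency manifest describes which entry of `registry` will actually execute, or what that file contains by the time it does.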
Why This Matters
Traditional controls assume stable systems: known dependencies, predictable execution paths, and validation at build time.
That model breaks here.
When selection is dynamic and execution happens at runtime, static analysis and dependency scanning still help—but they no longer describe the system you are actually running. Broader studies of the ecosystem already show a significant portion of skills contain security weaknesses, including supply-chain-style vulnerabilities and privilege escalation paths [3].
This Is Still Fixable
None of this requires new principles.
Treat skills as untrusted code.
- Install skills only from trusted sources, and scan their code for security issues
- Limit what agents can do by default
- Isolate execution
- Require provenance
- Observe behavior at runtime
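As a minimal sketch of the "require provenance" item above: pin a skill's content hash when it is reviewed, and refuse to execute it if the code has changed since, which is exactly the failure mode of a hijacked repository. The lockfile shape and function names here are assumptions for illustration, not a standard tool.

```python
import hashlib
import pathlib
import tempfile

def pin_skill(path: pathlib.Path) -> str:
    """Record the skill's SHA-256 at install/review time."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def verify_skill(path: pathlib.Path, pinned: str) -> bool:
    """Before each execution, confirm the code still matches what was reviewed."""
    return hashlib.sha256(path.read_bytes()).hexdigest() == pinned

# Demo: the pin holds until the skill's contents change.
skill = pathlib.Path(tempfile.mkdtemp()) / "skill.py"
skill.write_text("def run():\n    return 'ok'\n")
pinned = pin_skill(skill)

assert verify_skill(skill, pinned)      # unchanged: allowed to run
skill.write_text("def run():\n    return 'pwned'\n")
assert not verify_skill(skill, pinned)  # tampered: execution blocked
```

A hash pin is the narrowest possible provenance check; isolation and runtime observation still matter because a skill can be malicious on the day you review it.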
This is just software engineering discipline applied at the right boundary.
Final Thought
Skills are not just features; they are code executing on your behalf.
We’ve learned how to manage this before. The only question is how quickly we apply those lessons this time.
References
[1] Malicious or Not: Measuring the Security of Agent Skill Ecosystems. https://doi.org/10.48550/arXiv.2603.16572
[2] Malicious Agent Skills in the Wild: A Large-Scale Security Empirical Study. https://doi.org/10.48550/arXiv.2602.06547
[3] Agent Skills in the Wild: Vulnerabilities and Supply Chain Risks. https://doi.org/10.48550/arXiv.2601.10338
[4] On the Security of LLM Agents: Prompt Injection and Skill-Based Attacks. https://doi.org/10.48550/arXiv.2602.20156
