🤖 AI Summary
This work addresses the limited reusability and governability of procedural capabilities in large language model (LLM) agents performing long-horizon tasks, a challenge exacerbated by existing tool-calling mechanisms that struggle to support cross-task generalization. The paper introduces, for the first time, seven system-level design patterns for skills alongside an orthogonal "representation × scope" taxonomy, and constructs a comprehensive skill-lifecycle framework encompassing metadata encapsulation, executable code, self-evolving libraries, marketplace-based distribution, and trust-tiered execution. Through benchmark evaluations and case studies, the authors demonstrate that structured skill representations significantly improve task success rates, while also uncovering critical risks: self-generated skills can degrade performance, and the skill supply chain harbors severe security vulnerabilities.
📝 Abstract
Agentic systems increasingly rely on reusable procedural capabilities, *a.k.a. agentic skills*, to execute long-horizon workflows reliably. These capabilities are callable modules that package procedural knowledge with explicit applicability conditions, execution policies, termination criteria, and reusable interfaces. Unlike one-off plans or atomic tool calls, skills transfer (and often perform well) across tasks. This paper maps the skill layer across the full lifecycle (discovery, practice, distillation, storage, composition, evaluation, and update) and introduces two complementary taxonomies. The first is a system-level set of **seven design patterns** capturing how skills are packaged and executed in practice, from metadata-driven progressive disclosure and executable code skills to self-evolving libraries and marketplace distribution. The second is an orthogonal **representation × scope** taxonomy describing what skills *are* (natural language, code, policy, hybrid) and which environments they operate over (web, OS, software engineering, robotics). We analyze the security and governance implications of skill-based agents, covering supply-chain risks, prompt injection via skill payloads, and trust-tiered execution, grounded by a case study of the ClawHavoc campaign, in which nearly 1,200 malicious skills infiltrated a major agent marketplace and exfiltrated API keys, cryptocurrency wallets, and browser credentials at scale. We further survey deterministic evaluation approaches, anchored by recent benchmark evidence that curated skills can substantially improve agent success rates while self-generated skills may degrade them. We conclude with open challenges toward robust, verifiable, and certifiable skills for real-world autonomous agents.
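The abstract defines a skill as a callable module bundling procedural knowledge with explicit applicability conditions, an execution policy, a termination criterion, and a reusable interface. A minimal sketch of what such packaging might look like; every class, field, and function name here is an illustrative assumption, not the paper's actual interface:

```python
from dataclasses import dataclass
from typing import Any, Callable, Dict

Context = Dict[str, Any]

@dataclass
class Skill:
    """Hypothetical skill package: procedural knowledge plus the explicit
    conditions and policies the abstract lists, behind one reusable call."""
    name: str
    description: str                         # metadata, e.g. for progressive disclosure
    applies_to: Callable[[Context], bool]    # applicability condition over task context
    step: Callable[[Context], Context]       # one unit of procedural work
    is_done: Callable[[Context], bool]       # termination criterion
    max_steps: int = 10                      # execution policy: bounded step budget

    def run(self, context: Context) -> Context:
        if not self.applies_to(context):
            raise ValueError(f"skill {self.name!r} is not applicable to this context")
        for _ in range(self.max_steps):
            if self.is_done(context):
                break
            context = self.step(context)
        return context

# Usage: a toy skill that increments a counter until it reaches a target.
count_up = Skill(
    name="count_up",
    description="Increment a counter until it reaches the target.",
    applies_to=lambda ctx: "counter" in ctx and "target" in ctx,
    step=lambda ctx: {**ctx, "counter": ctx["counter"] + 1},
    is_done=lambda ctx: ctx["counter"] >= ctx["target"],
)

result = count_up.run({"counter": 0, "target": 3})
```

The point of the sketch is the separation the abstract draws: unlike a one-off plan or an atomic tool call, the module carries its own gating (`applies_to`), budget (`max_steps`), and stopping rule (`is_done`), so it can be stored, discovered, and reused across tasks.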