🤖 AI Summary
This work addresses the challenges of low learning efficiency, limited experience reuse, and poor generalization that hinder the self-evolution of large language model (LLM) agents. To overcome these limitations, the authors propose SkillX, a framework that establishes an automated pipeline for constructing a plug-and-play skill knowledge base reusable across agents and environments. The core innovations include a multi-level skill architecture, an iterative optimization and exploratory expansion mechanism, and techniques such as trajectory distillation, feedback-driven refinement, and active skill generation to enable automatic construction and continuous evolution of structured skill knowledge. Implemented with GLM-4.6, SkillX significantly improves both success rates and execution efficiency of weak base agents on long-horizon benchmarks including AppWorld, BFCL-v3, and τ²-Bench.
📝 Abstract
Learning from experience is critical for building capable large language model (LLM) agents, yet prevailing self-evolving paradigms remain inefficient: agents learn in isolation and repeatedly rediscover similar behaviors from limited experience, leading to redundant exploration and poor generalization. To address this problem, we propose SkillX, a fully automated framework for constructing a \textbf{plug-and-play skill knowledge base} that can be reused across agents and environments. SkillX operates through an automated pipeline built on three synergistic innovations: \textit{(i) Multi-Level Skills Design}, which distills raw trajectories into a three-tiered hierarchy of strategic plans, functional skills, and atomic skills; \textit{(ii) Iterative Skills Refinement}, which automatically revises skills based on execution feedback to continuously improve library quality; and \textit{(iii) Exploratory Skills Expansion}, which proactively generates and validates novel skills to expand coverage beyond the seed training data. Using a strong backbone agent (GLM-4.6), we automatically build a reusable skill library and evaluate its transferability on challenging long-horizon, user-interactive benchmarks, including AppWorld, BFCL-v3, and $τ^2$-Bench. Experiments show that the resulting skill library consistently improves task success and execution efficiency when plugged into weaker base agents, highlighting the importance of structured, hierarchical experience representations for generalizable agent learning. Our code will be publicly available soon at https://github.com/zjunlp/SkillX.
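To make the three-tiered hierarchy concrete, the following is a minimal, hypothetical sketch of what such a skill library could look like as data structures. All class and field names here are illustrative assumptions, not the paper's actual implementation:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a three-tiered skill knowledge base:
# atomic skills -> functional skills -> strategic plans.
# Names and structure are assumptions for illustration only.

@dataclass
class AtomicSkill:
    name: str
    description: str  # e.g. a single reusable API-call pattern

@dataclass
class FunctionalSkill:
    name: str
    description: str
    atomic_skills: list = field(default_factory=list)  # composed of atomic skills

@dataclass
class StrategicPlan:
    goal: str
    steps: list = field(default_factory=list)  # ordered functional skills

@dataclass
class SkillLibrary:
    plans: list = field(default_factory=list)

    def add_plan(self, plan: StrategicPlan) -> None:
        self.plans.append(plan)

    def lookup(self, keyword: str) -> list:
        """Retrieve plans whose goal mentions the keyword (toy retrieval)."""
        return [p for p in self.plans if keyword.lower() in p.goal.lower()]


# Toy usage: a plug-and-play library that a base agent could query.
lib = SkillLibrary()
send = AtomicSkill("send_api_request", "call one app API endpoint")
login = FunctionalSkill("authenticate", "log in to an app", [send])
lib.add_plan(StrategicPlan("book a flight in AppWorld", [login]))
matches = lib.lookup("flight")
```

In this reading, "plug-and-play" means a weaker base agent only needs a retrieval interface like `lookup` to reuse skills distilled by a stronger backbone agent.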