🤖 AI Summary
Existing skill generation methods suffer from weak domain alignment and cannot automatically refine skills based on execution failures after deployment, leading to performance plateaus. This work introduces SkillForge, an end-to-end closed-loop framework for skill self-evolution: it first synthesizes initial skills by grounding generation in domain knowledge bases and historical service tickets, then employs a three-stage pipeline -- a Failure Analyzer, a Skill Diagnostician, and a Skill Optimizer -- to automatically trace execution failures to skill deficiencies and iteratively rewrite the skills. Experiments on 1,883 tickets spanning 3,737 tasks demonstrate that the generated initial skills substantially outperform general-purpose baselines, and that the self-evolution process consistently improves skills regardless of their starting quality, ultimately surpassing even those manually crafted by human experts.
📝 Abstract
Deploying LLM-powered agents in enterprise scenarios such as cloud technical support demands high-quality, domain-specific skills. However, existing skill creators lack domain grounding, producing skills poorly aligned with real-world task requirements. Moreover, once deployed, there is no systematic mechanism to trace execution failures back to skill deficiencies and drive targeted refinements, leaving skill quality stagnant despite accumulating operational evidence. We introduce SkillForge, a self-evolving framework that closes an end-to-end creation-evaluation-refinement loop. To produce well-aligned initial skills, a Domain-Contextualized Skill Creator grounds skill synthesis in knowledge bases and historical support tickets. To enable continuous self-optimization, a three-stage pipeline -- Failure Analyzer, Skill Diagnostician, and Skill Optimizer -- automatically diagnoses execution failures in batch, pinpoints the underlying skill deficiencies, and rewrites the skill to eliminate them. This cycle runs iteratively, allowing skills to self-improve with every round of deployment feedback. In experiments on five real-world cloud support scenarios spanning 1,883 tickets and 3,737 tasks, we show that: (1) the Domain-Contextualized Skill Creator produces substantially better initial skills than a generic skill creator, as measured by consistency with expert-authored reference responses from historical tickets; and (2) the self-evolution loop progressively improves skill quality from diverse starting points (including expert-authored, domain-created, and generic skills) across successive rounds, demonstrating that automated evolution can surpass manually curated expert knowledge.
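The closed creation-evaluation-refinement loop described above can be sketched in code. This is a minimal illustrative skeleton, not the paper's implementation: all function names (`analyze_failures`, `diagnose_skill`, `optimize_skill`, `run_batch`, `self_evolve`) and the string-based skill representation are assumptions made for clarity; in the actual system each stage would be an LLM-driven component.

```python
from dataclasses import dataclass

# Hypothetical sketch of the three-stage self-evolution pipeline.
# Skills are represented as plain strings here for simplicity.

@dataclass
class Diagnosis:
    deficiency: str  # a pinpointed gap in the skill text

def analyze_failures(transcripts):
    """Stage 1 (Failure Analyzer): collect failed executions from a batch."""
    return [t for t in transcripts if not t["success"]]

def diagnose_skill(skill, failures):
    """Stage 2 (Skill Diagnostician): map failures to skill deficiencies."""
    return [Diagnosis(f"unhandled case: {f['task']}") for f in failures]

def optimize_skill(skill, diagnoses):
    """Stage 3 (Skill Optimizer): rewrite the skill to close each gap."""
    patch = "\n".join(d.deficiency for d in diagnoses)
    return skill + ("\n# addressed:\n" + patch if patch else "")

def self_evolve(skill, run_batch, rounds=3):
    """Iterate the loop: deploy, analyze, diagnose, rewrite, repeat."""
    for _ in range(rounds):
        transcripts = run_batch(skill)            # deployment feedback
        failures = analyze_failures(transcripts)
        if not failures:
            break                                 # nothing left to refine on
        diagnoses = diagnose_skill(skill, failures)
        skill = optimize_skill(skill, diagnoses)
    return skill
```

The loop terminates early once a deployment round yields no failures, mirroring the paper's observation that skills converge from diverse starting points as deficiencies are progressively eliminated.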