SkillForge: Forging Domain-Specific, Self-Evolving Agent Skills in Cloud Technical Support

📅 2026-04-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing skill generation methods suffer from insufficient domain alignment and cannot automatically refine skills based on execution failures after deployment, leading to performance plateaus. This work proposes the first end-to-end closed-loop framework for skill self-evolution: it first synthesizes initial skills by integrating domain knowledge with historical service tickets, then employs a three-stage pipeline (a Failure Analyzer, a Skill Diagnostician, and a Skill Optimizer) to automatically identify deficiencies and iteratively rewrite skills. Experiments on 1,883 tickets spanning 3,737 tasks show that the generated initial skills significantly outperform general-purpose baselines and that the self-evolution process consistently improves skills regardless of their starting quality, ultimately surpassing even skills manually crafted by human experts.

📝 Abstract
Deploying LLM-powered agents in enterprise scenarios such as cloud technical support demands high-quality, domain-specific skills. However, existing skill creators lack domain grounding, producing skills poorly aligned with real-world task requirements. Moreover, once deployed, there is no systematic mechanism to trace execution failures back to skill deficiencies and drive targeted refinements, leaving skill quality stagnant despite accumulating operational evidence. We introduce SkillForge, a self-evolving framework that closes an end-to-end creation-evaluation-refinement loop. To produce well-aligned initial skills, a Domain-Contextualized Skill Creator grounds skill synthesis in knowledge bases and historical support tickets. To enable continuous self-optimization, a three-stage pipeline -- Failure Analyzer, Skill Diagnostician, and Skill Optimizer -- automatically diagnoses execution failures in batch, pinpoints the underlying skill deficiencies, and rewrites the skill to eliminate them. This cycle runs iteratively, allowing skills to self-improve with every round of deployment feedback. Evaluated on five real-world cloud support scenarios spanning 1,883 tickets and 3,737 tasks, experiments show that: (1) the Domain-Contextualized Skill Creator produces substantially better initial skills than the generic skill creator, as measured by consistency with expert-authored reference responses from historical tickets; and (2) the self-evolution loop progressively improves skill quality from diverse starting points (including expert-authored, domain-created, and generic skills) across successive rounds, demonstrating that automated evolution can surpass manually curated expert knowledge.
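The creation-evaluation-refinement loop described in the abstract can be sketched in code. This is a minimal illustrative sketch, not the paper's implementation: all names (`Skill`, `analyze_failures`, `diagnose`, `optimize`, `evolve`) are hypothetical, and the string-append "rewrite" stands in for the LLM-driven skill rewriting the paper actually performs.

```python
# Hypothetical sketch of SkillForge's self-evolution loop.
# A real system would use LLM calls for diagnosis and rewriting;
# here each stage is stubbed out so the control flow is visible.

from dataclasses import dataclass


@dataclass
class Skill:
    name: str
    instructions: str
    version: int = 0


def analyze_failures(failed_runs):
    """Failure Analyzer: group execution failures into failure modes (in batch)."""
    modes = {}
    for run in failed_runs:
        modes.setdefault(run["error_type"], []).append(run)
    return modes


def diagnose(skill, failure_modes):
    """Skill Diagnostician: map failure modes back to skill deficiencies."""
    return [f"skill lacks guidance for '{mode}' ({len(runs)} failures)"
            for mode, runs in failure_modes.items()]


def optimize(skill, deficiencies):
    """Skill Optimizer: rewrite the skill to eliminate each deficiency.
    (Stub: appends remediation notes instead of prompting an LLM.)"""
    patch = "\n".join(f"- address: {d}" for d in deficiencies)
    return Skill(skill.name, skill.instructions + "\n" + patch, skill.version + 1)


def evolve(skill, failed_runs):
    """One round of the closed loop; runs again with each batch of feedback."""
    modes = analyze_failures(failed_runs)
    if not modes:
        return skill  # no failures, skill unchanged this round
    return optimize(skill, diagnose(skill, modes))


skill = Skill("quota-increase", "Check account tier before approving.")
runs = [{"error_type": "missing-region-check"},
        {"error_type": "missing-region-check"}]
skill = evolve(skill, runs)
print(skill.version)  # prints 1
```

Iterating `evolve` over successive deployment rounds mirrors the paper's claim that skills keep improving from any starting point, since each round only adds targeted fixes for observed failures.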
Problem

Research questions and friction points this paper is trying to address.

domain-specific skills
skill alignment
execution failure tracing
skill refinement
cloud technical support
Innovation

Methods, ideas, or system contributions that make the work stand out.

self-evolving agents
domain-specific skills
skill refinement loop
failure diagnosis
LLM-powered automation
Xingyan Liu
Alibaba Cloud Computing, Alibaba Group
Xiyue Luo
Alibaba Cloud Computing, Alibaba Group
Linyu Li
Peking University
knowledge graph, ai4science
Ganghong Huang
Alibaba Cloud Computing, Alibaba Group
Jianfeng Liu
Alibaba Cloud Computing, Alibaba Group
Honglin Qiao
Alibaba Cloud Computing, Alibaba Group