SkillAttack: Automated Red Teaming of Agent Skills through Attack Path Refinement

๐Ÿ“… 2026-04-05
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
This work addresses the security risks posed by non-malicious skills in open skill registries, which may harbor latent vulnerabilities exploitable through adversarial prompts — risks that existing detection methods struggle to identify. The authors propose SkillAttack, a red-teaming framework that, for the first time, enables fully automated discovery and exploitation of such vulnerabilities without modifying skill code, relying solely on adversarial prompting. SkillAttack integrates vulnerability analysis, surface-parallel attack generation, and feedback-driven refinement of attack trajectories in a closed-loop search. Evaluation across 171 skills (71 adversarially crafted, 100 real-world) deployed on ten mainstream LLMs demonstrates attack success rates of 0.73 to 0.93 on adversarially crafted skills and up to 0.26 on real-world skills, significantly outperforming current baselines.
๐Ÿ“ Abstract
LLM-based agent systems increasingly rely on agent skills sourced from open registries to extend their capabilities, yet the openness of such ecosystems makes skills difficult to thoroughly vet. Existing attacks rely on injecting malicious instructions into skills, making them easily detectable by static auditing. However, non-malicious skills may also harbor latent vulnerabilities that an attacker can exploit solely through adversarial prompting, without modifying the skill itself. We introduce SkillAttack, a red-teaming framework that dynamically verifies skill vulnerability exploitability through adversarial prompting. SkillAttack combines vulnerability analysis, surface-parallel attack generation, and feedback-driven exploit refinement into a closed-loop search that progressively converges toward successful exploitation. Experiments across 10 LLMs on 71 adversarial and 100 real-world skills show that SkillAttack outperforms all baselines by a wide margin (ASR 0.73–0.93 on adversarial skills, up to 0.26 on real-world skills), revealing that even well-intended skills pose serious security risks under realistic agent interactions.
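The closed-loop search the abstract describes — surface-parallel attack generation followed by feedback-driven refinement — can be sketched roughly as follows. All names here (`refine_attack`, the seed-prompt template, the scoring and mutation callbacks) are illustrative assumptions for exposition, not SkillAttack's actual API; in the real framework, scoring would come from observing the agent's behavior and mutation from an attacker LLM.

```python
def refine_attack(surfaces, score, mutate, budget=50, threshold=0.9):
    """Hedged sketch of a closed-loop exploit search.

    surfaces:  candidate attack surfaces identified by vulnerability analysis
    score:     feedback signal in [0, 1] for how close a prompt is to exploitation
    mutate:    produces a refined variant of an attack prompt
    """
    # Surface-parallel seeding: one initial adversarial prompt per surface.
    candidates = [f"exploit {s}" for s in surfaces]
    best = max(candidates, key=score)

    # Feedback-driven refinement: iteratively mutate the best trajectory,
    # keeping a variant only if the feedback signal improves.
    for _ in range(budget):
        if score(best) >= threshold:
            return best  # exploitation judged successful
        variant = mutate(best)
        if score(variant) > score(best):
            best = variant
    return best  # budget exhausted; return best trajectory found
```

A hill-climbing loop like this is only one plausible instantiation of "progressively converges toward successful exploitation"; the paper's search may branch over multiple trajectories in parallel rather than greedily refining a single one.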
Problem

Research questions and friction points this paper is trying to address.

agent skills, adversarial prompting, vulnerability exploitation, red teaming, LLM security
Innovation

Methods, ideas, or system contributions that make the work stand out.

adversarial prompting, red teaming, agent skills, attack path refinement, vulnerability exploitation
๐Ÿ”Ž Similar Papers
No similar papers found.