🤖 AI Summary
Existing prompt attack methods treat prompts as homogeneous text, ignoring their structural heterogeneity and the varying vulnerability of their constituent components. Method: This paper establishes that prompt components are inherently non-neutral and introduces PromptAnatomy, a novel framework that anatomically decomposes prompts into functionally distinct, structured components; designs ComPerturb, a selective perturbation strategy targeting vulnerable components; and incorporates perplexity-based filtering to preserve linguistic coherence. Contribution/Results: We annotate four public benchmarks, verified through human review, and achieve state-of-the-art attack success rates across five advanced large language models. Our work establishes the "structural anatomy, differential perturbation" paradigm, providing an interpretable, reproducible, component-level analysis toolkit for fine-grained robustness assessment.
📝 Abstract
Prompt-based adversarial attacks have become an effective means to assess the robustness of large language models (LLMs). However, existing approaches often treat prompts as monolithic text, overlooking their structural heterogeneity: different prompt components contribute unequally to adversarial robustness. Prior works like PromptRobust assume prompts are value-neutral, but our analysis reveals that complex, domain-specific prompts with rich structures have components with differing vulnerabilities. To address this gap, we introduce PromptAnatomy, an automated framework that dissects prompts into functional components and generates diverse, interpretable adversarial examples by selectively perturbing each component using our proposed method, ComPerturb. To ensure linguistic plausibility and mitigate distribution shifts, we further incorporate a perplexity (PPL)-based filtering mechanism. As a complementary resource, we annotate four public instruction-tuning datasets using the PromptAnatomy framework, verified through human review. Extensive experiments across these datasets and five advanced LLMs demonstrate that ComPerturb achieves state-of-the-art attack success rates. Ablation studies validate the complementary benefits of prompt dissection and PPL filtering. Our results underscore the importance of prompt-structure awareness and controlled perturbation for reliable adversarial robustness evaluation in LLMs. Code and data are available at https://github.com/Yujiaaaaa/PACP.
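The overall pipeline the abstract describes (decompose a prompt into components, perturb one component, keep only candidates that pass a perplexity filter) can be sketched as follows. Note this is a minimal illustrative sketch, not the paper's implementation: the component labels, the toy character-swap perturbation, and the `pseudo_perplexity` scorer are all hypothetical stand-ins (a real pipeline would use the paper's ComPerturb operators and a language model's token-level perplexity).

```python
import math
import random
from collections import Counter

def pseudo_perplexity(text: str) -> float:
    # Stand-in scorer: 2^(character unigram entropy). A real pipeline would
    # query a language model for token-level perplexity instead.
    counts = Counter(text)
    total = sum(counts.values())
    entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())
    return 2 ** entropy

def perturb(text: str, rng: random.Random) -> str:
    # Toy perturbation: swap two adjacent characters in the component.
    chars = list(text)
    if len(chars) > 1:
        i = rng.randrange(len(chars) - 1)
        chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)

def attack(components: dict, target: str, ppl_budget: float,
           n_candidates: int = 20, seed: int = 0) -> list:
    """Perturb only the targeted component; keep candidates whose
    perplexity stays within ppl_budget of the original prompt."""
    rng = random.Random(seed)
    base_ppl = pseudo_perplexity(" ".join(components.values()))
    kept = []
    for _ in range(n_candidates):
        candidate = dict(components)
        candidate[target] = perturb(candidate[target], rng)
        if pseudo_perplexity(" ".join(candidate.values())) <= base_ppl + ppl_budget:
            kept.append(candidate)
    return kept

# Hypothetical structured prompt with three functional components.
prompt = {
    "role": "You are a careful medical assistant.",
    "task": "Summarize the patient note below.",
    "constraints": "Answer in two sentences.",
}
survivors = attack(prompt, target="constraints", ppl_budget=0.5)
print(len(survivors), "candidates passed the PPL filter")
```

The key structural idea is that untargeted components stay untouched, so every surviving adversarial example differs from the original prompt only in the selected component, which keeps the perturbations interpretable.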