Are All Prompt Components Value-Neutral? Understanding the Heterogeneous Adversarial Robustness of Dissected Prompt in Large Language Models

📅 2025-08-02
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing prompt attack methods treat prompts as homogeneous text, ignoring their structural heterogeneity and the varying vulnerability of their constituent components. Method: This paper establishes that prompt components are inherently non-neutral and introduces PromptAnatomy, a novel framework that anatomically decomposes prompts into functionally distinct, structured components; designs ComPerturb, a selective perturbation strategy targeting vulnerable components; and incorporates perplexity-based filtering to preserve linguistic coherence. Contribution/Results: We annotate four public benchmarks with human verification and achieve state-of-the-art attack success rates across five advanced large language models. Our work establishes the “structural anatomy–differential perturbation” paradigm, providing an interpretable, reproducible, component-level analysis toolkit for fine-grained robustness assessment.
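The decomposition step can be pictured with a minimal sketch. The component taxonomy (role, instruction, examples, query) and the keyword heuristics below are illustrative assumptions only; the paper's actual anatomy rules and automated dissection procedure are not reproduced here.

```python
import re
from dataclasses import dataclass, field


@dataclass
class DissectedPrompt:
    """Illustrative container for a prompt split into functional components."""
    role: str = ""
    instruction: str = ""
    examples: list[str] = field(default_factory=list)
    query: str = ""


def dissect(prompt: str) -> DissectedPrompt:
    """Toy keyword-based splitter; a placeholder for PromptAnatomy's automated dissection."""
    parts = DissectedPrompt()
    for line in prompt.splitlines():
        text = line.strip()
        if not text:
            continue
        if re.match(r"(?i)^you are\b", text):                   # persona / role description
            parts.role += text + " "
        elif re.match(r"(?i)^example\b", text):                  # in-context demonstrations
            parts.examples.append(text)
        elif re.match(r"(?i)^(input|question|query)\b", text):   # the actual task query
            parts.query += text + " "
        else:                                                    # everything else: task instruction
            parts.instruction += text + " "
    return parts
```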

📝 Abstract
Prompt-based adversarial attacks have become an effective means to assess the robustness of large language models (LLMs). However, existing approaches often treat prompts as monolithic text, overlooking their structural heterogeneity: different prompt components contribute unequally to adversarial robustness. Prior works like PromptRobust assume prompts are value-neutral, but our analysis reveals that complex, domain-specific prompts with rich structures have components with differing vulnerabilities. To address this gap, we introduce PromptAnatomy, an automated framework that dissects prompts into functional components and generates diverse, interpretable adversarial examples by selectively perturbing each component using our proposed method, ComPerturb. To ensure linguistic plausibility and mitigate distribution shifts, we further incorporate a perplexity (PPL)-based filtering mechanism. As a complementary resource, we annotate four public instruction-tuning datasets using the PromptAnatomy framework, verified through human review. Extensive experiments across these datasets and five advanced LLMs demonstrate that ComPerturb achieves state-of-the-art attack success rates. Ablation studies validate the complementary benefits of prompt dissection and PPL filtering. Our results underscore the importance of prompt structure awareness and controlled perturbation for reliable adversarial robustness evaluation in LLMs. Code and data are available at https://github.com/Yujiaaaaa/PACP.
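As a rough sketch of the selective-perturbation idea from the abstract: only one dissected component is rewritten while the rest of the prompt is left untouched. The adjacent-word-swap operator and the component dictionary below are hypothetical stand-ins, not ComPerturb's actual perturbation strategies.

```python
import random


def swap_adjacent_words(text: str, rng: random.Random) -> str:
    """Hypothetical word-level perturbation: swap one random pair of adjacent words."""
    words = text.split()
    if len(words) < 2:
        return text
    i = rng.randrange(len(words) - 1)
    words[i], words[i + 1] = words[i + 1], words[i]
    return " ".join(words)


def perturb_component(components: dict[str, str], target: str,
                      n_candidates: int = 5, seed: int = 0) -> list[dict[str, str]]:
    """Perturb only the `target` component; all other components stay intact."""
    rng = random.Random(seed)
    candidates = []
    for _ in range(n_candidates):
        perturbed = dict(components)
        perturbed[target] = swap_adjacent_words(components[target], rng)
        candidates.append(perturbed)
    return candidates


# Example: attack only the instruction component of a dissected prompt.
prompt = {"role": "You are a helpful assistant.",
          "instruction": "Summarize the following article in two sentences.",
          "query": "Input: <article text>"}
for candidate in perturb_component(prompt, target="instruction"):
    print(" ".join(candidate.values()))
```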
Problem

Research questions and friction points this paper is trying to address.

Understanding adversarial robustness of prompt components in LLMs
Addressing structural heterogeneity in prompt-based adversarial attacks
Developing an automated framework for generating interpretable adversarial examples
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dissects prompts into functional components
Selectively perturbs vulnerable components to generate diverse adversarial examples
Applies perplexity (PPL)-based filtering to preserve linguistic plausibility, as sketched below
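The PPL-based filtering step can be sketched as follows, assuming a small reference language model (GPT-2 here) and a relative perplexity threshold; both choices are illustrative assumptions, not taken from the paper.

```python
import math

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# Reference LM for fluency scoring; GPT-2 is an illustrative choice.
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()


def perplexity(text: str) -> float:
    """Perplexity of `text` under the reference LM."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # With labels provided, the model returns mean token-level cross-entropy.
        loss = model(**enc, labels=enc["input_ids"]).loss
    return math.exp(loss.item())


def ppl_filter(original: str, candidates: list[str], ratio: float = 1.5) -> list[str]:
    """Keep candidates whose perplexity stays within `ratio` times the original prompt's.
    The relative threshold is a hypothetical choice, not the paper's setting."""
    base = perplexity(original)
    return [c for c in candidates if perplexity(c) <= ratio * base]
```

Candidates whose perplexity drifts far above the original prompt's are discarded, which keeps the perturbed prompts fluent and helps mitigate the distribution shift mentioned in the abstract.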
🔎 Similar Papers
No similar papers found.
Yujia Zheng
Carnegie Mellon University
Machine Learning, Causal Discovery and Inference, Latent Variable Models, Generative Models
Tianhao Li
Duke University
Haotian Huang
North China University of Technology
Tianyu Zeng
Hong Kong Polytechnic University
Jingyu Lu
Australian National University
Chuangxin Chu
Nanyang Technological University
AI Agent, Large Language Model, Trustworthy AI
Yuekai Huang
Institute of Software, Chinese Academy of Sciences, University of Chinese Academy of Sciences
Ziyou Jiang
Institute of Software, Chinese Academy of Sciences
Software Engineering
Qian Xiong
Beijing Forestry University
Yuyao Ge
Chinese Academy of Sciences
Natural Language Processing, General AI
Mingyang Li
Institute of Software, Chinese Academy of Sciences, University of Chinese Academy of Sciences