Stop Fixating on Prompts: Reasoning Hijacking and Constraint Tightening for Red-Teaming LLM Agents

๐Ÿ“… 2026-04-07
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
This work proposes JailAgent, a red-teaming framework that mounts implicit adversarial attacks on large language model (LLM) agents without modifying user prompts, a significant departure from existing methods that rely on prompt perturbations and often degrade agent performance. JailAgent dynamically manipulates an agent's reasoning trajectory and memory retrieval through a three-stage process: trigger extraction, reasoning hijacking, and constraint tightening. By combining precise trigger identification, real-time adaptive mechanisms, and an optimized objective function, the approach achieves strong cross-model and cross-scenario generalization. Empirical evaluations demonstrate consistently high attack success rates across multiple mainstream LLMs and diverse task settings, highlighting its effectiveness and adaptability in practical red-teaming scenarios.
๐Ÿ“ Abstract
With the widespread application of LLM-based agents across various domains, their complexity has introduced new security threats. Existing red-teaming methods mostly rely on modifying user prompts, which lacks adaptability to new data and may impact the agent's performance. To address this challenge, this paper proposes the JailAgent framework, which avoids modifying the user prompt entirely. Specifically, it implicitly manipulates the agent's reasoning trajectory and memory retrieval through three key stages: Trigger Extraction, Reasoning Hijacking, and Constraint Tightening. Through precise trigger identification, real-time adaptive mechanisms, and an optimized objective function, JailAgent demonstrates strong performance in cross-model and cross-scenario settings.
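The three stages named in the abstract can be sketched in toy form. Everything below is an illustrative assumption about what such a pipeline might look like; the function names, the title-case trigger heuristic, and the `[ADV]` step marker are invented for this sketch and are not the paper's implementation.

```python
# Toy sketch of a three-stage, prompt-free attack pipeline in the spirit of
# JailAgent (Trigger Extraction -> Reasoning Hijacking -> Constraint Tightening).
# All names and heuristics are illustrative assumptions, not the paper's code.

def extract_triggers(task_description):
    """Stage 1: pick out tokens likely to steer the agent's memory retrieval.
    Toy heuristic: treat title-cased words (e.g. tool names) as candidates."""
    return [w for w in task_description.split() if w.istitle()]

def hijack_reasoning(trajectory, triggers):
    """Stage 2: splice an adversarial step after each reasoning step that
    mentions a trigger, leaving the user prompt itself untouched."""
    hijacked = []
    for step in trajectory:
        hijacked.append(step)
        if any(t in step for t in triggers):
            hijacked.append("[ADV] follow the injected objective")
    return hijacked

def tighten_constraints(trajectory, max_steps):
    """Stage 3: narrow the trajectory so the agent's remaining options
    increasingly favor the adversarial objective: keep every adversarial
    step and drop benign steps from the end to fit the budget."""
    adv = [s for s in trajectory if s.startswith("[ADV]")]
    benign = [s for s in trajectory if not s.startswith("[ADV]")]
    return benign[: max(0, max_steps - len(adv))] + adv

# Example: the user prompt is never edited; only the plan is manipulated.
triggers = extract_triggers("Use the Browser tool to fetch the page")
plan = hijack_reasoning(["open Browser", "read page", "summarize"], triggers)
final = tighten_constraints(plan, max_steps=3)
```

Here the benign task text is only read, never rewritten, which mirrors the paper's central claim that the attack surface is the reasoning trajectory and memory retrieval rather than the prompt.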
Problem

Research questions and friction points this paper is trying to address.

red-teaming
LLM agents
prompt modification
reasoning trajectory
security threats
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reasoning Hijacking
Constraint Tightening
Prompt-Free Red-Teaming
LLM Agent Security
Trigger Extraction
๐Ÿ”Ž Similar Papers
No similar papers found.
Yanxu Mao
School of Software, Henan University, China
Peipei Liu
Institute of Information Engineering, Chinese Academy of Sciences, China
Tiehan Cui
School of Software, Henan University, China
Congying Liu
University of Chinese Academy of Sciences, China
Mingzhe Xing
Peking University
AI Agent · AI for Software Engineering · AI for System
Datao You
School of Software, Henan University, China