AI Summary
This work proposes JailAgent, a novel red-teaming framework that enables implicit adversarial attacks on large language model (LLM) agents without modifying user prompts, a significant departure from existing methods that rely on prompt perturbations and often degrade agent performance. JailAgent dynamically manipulates an agent's reasoning trajectory and memory retrieval through a three-stage process: trigger extraction, reasoning hijacking, and constraint tightening. By incorporating precise trigger identification, real-time adaptive mechanisms, and an optimized objective function, the approach achieves strong cross-model and cross-scenario generalization. Empirical evaluations demonstrate consistently high attack success rates and robustness across multiple mainstream LLMs and diverse task settings, highlighting its effectiveness and adaptability in practical red-teaming scenarios.
Abstract
With the widespread application of LLM-based agents across various domains, their growing complexity has introduced new security threats. Existing red-team methods mostly rely on modifying user prompts, which limits their adaptability to new data and may impair the agent's performance. To address this challenge, this paper proposes the JailAgent framework, which avoids modifying the user prompt entirely. Instead, it implicitly manipulates the agent's reasoning trajectory and memory retrieval through three key stages: Trigger Extraction, Reasoning Hijacking, and Constraint Tightening. Through precise trigger identification, real-time adaptive mechanisms, and an optimized objective function, JailAgent demonstrates outstanding performance in cross-model and cross-scenario settings.