🤖 AI Summary
Large language model (LLM) agents remain vulnerable to advanced jailbreaking attacks—such as multi-turn prompt injection and deceptive alignment—against which conventional static defense mechanisms fail. Method: We propose the first dynamic security framework tailored for LLM agents, integrating Reverse Turing Tests with multi-agent adversarial simulation to enable self-monitoring and administrator-adaptive intervention. Additionally, we design a tool-augmented red-teaming methodology, evaluated across state-of-the-art models including Gemini 1.5 Pro, Llama-3.3-70B, and DeepSeek R1. Contribution/Results: Our framework achieves 94% accuracy in detecting malicious behavior on Gemini 1.5 Pro. Crucially, it uncovers, for the first time, a monotonic increase in attack success rate (ASR) with prompt length—a fundamental vulnerability pattern in long-horizon attacks—thereby revealing critical limitations in current agent safety paradigms and enabling proactive mitigation strategies.
📝 Abstract
Autonomous AI agents built on large language models can create substantial value across society, but they face security threats from adversaries that raise trust and safety concerns and warrant immediate protective solutions. Advanced attacks such as many-shot jailbreaking and deceptive alignment cannot be mitigated by the static guardrails instilled during supervised training, making real-world robustness a crucial research priority; static guardrails deployed in dynamic multi-agent systems likewise fail to defend against these attacks. We aim to enhance security for LLM-based agents by developing new evaluation frameworks that identify and counter threats for safe operational deployment. Our work applies three examination methods: detecting rogue agents through a Reverse Turing Test, analyzing deceptive alignment through multi-agent simulations, and stress-testing an anti-jailbreaking system on Gemini 1.5 Pro, Llama-3.3-70B, and DeepSeek R1 using tool-mediated adversarial scenarios. Detection capabilities are strong (94% accuracy on Gemini 1.5 Pro), yet the system shows persistent vulnerabilities under long attacks: attack success rate (ASR) increases with prompt length, diversity metrics fail to predict attack outcomes, and multiple complex system faults are revealed. These findings demonstrate the need for flexible security systems based on active self-monitoring by the agents themselves, combined with adaptable interventions by system administrators, since current models can introduce vulnerabilities that render the overall system unreliable. In this work, we address these situations and propose a comprehensive framework to counteract these security issues.
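The reported relationship between prompt length and attack success rate can be made concrete with a small evaluation harness. The sketch below is illustrative only (all function and variable names are assumptions, not from the paper): it buckets red-teaming trial outcomes by prompt length, computes ASR per bucket, and checks whether ASR is monotonically non-decreasing as prompts grow.

```python
from collections import defaultdict

def asr_by_length(trials, bucket_size=1000):
    """Group (prompt_length, attack_succeeded) trials into length buckets
    and compute the attack success rate (ASR) per bucket."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for length, succeeded in trials:
        bucket = length // bucket_size
        totals[bucket] += 1
        hits[bucket] += int(succeeded)
    return {b: hits[b] / totals[b] for b in sorted(totals)}

def is_monotone_nondecreasing(rates):
    """Check whether ASR never drops as prompt length grows."""
    values = [rates[b] for b in sorted(rates)]
    return all(a <= b for a, b in zip(values, values[1:]))

# Synthetic illustration (not real results): longer prompts succeed more often.
trials = [(500, False), (800, False), (1500, False), (1800, True),
          (2500, True), (2800, True), (3500, True), (3800, True)]
rates = asr_by_length(trials)
print(rates)                          # ASR keyed by length bucket
print(is_monotone_nondecreasing(rates))
```

In a real harness the `attack_succeeded` label would come from a jailbreak judge run against the target model; here it is stubbed with synthetic data purely to show the trend computation.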