🤖 AI Summary
Existing benchmarks struggle to evaluate the resilience of AI agents against indirect prompt injection attacks in dynamic, open-ended environments. To address this gap, this work introduces a novel evaluation benchmark that incorporates dynamic task planning, helpful third-party instructions, and complex user objectives, moving beyond the limitations of traditional static, simplistic tasks. The benchmark comprises 60 human-designed open-world tasks spanning shopping, GitHub interactions, and everyday scenarios, along with 560 injection test cases, enabling a systematic assessment of ten state-of-the-art defense mechanisms. Empirical results reveal that current approaches commonly suffer from either insufficient security or excessive over-defense, rendering them ill-suited for real-world deployment.
📝 Abstract
AI agents that autonomously interact with external tools and environments show great promise across real-world applications. However, the external data that agents consume also exposes them to indirect prompt injection attacks, where malicious instructions embedded in third-party content hijack agent behavior. Guided by benchmarks such as AgentDojo, significant progress has been made in developing defenses against these attacks. As the technology matures and agents are relied upon for increasingly complex tasks, there is a pressing need to evolve benchmarks accordingly to reflect the threat landscape faced by emerging agentic systems. In this work, we reveal three fundamental flaws in current benchmarks and push the frontier along these dimensions: (i) lack of dynamic open-ended tasks, (ii) lack of helpful instructions, and (iii) simplistic user tasks. To bridge this gap, we introduce AgentDyn, a manually designed benchmark featuring 60 challenging open-ended tasks and 560 injection test cases across Shopping, GitHub, and Daily Life. Unlike prior static benchmarks, AgentDyn requires dynamic planning and incorporates helpful third-party instructions. Our evaluation of ten state-of-the-art defenses shows that almost all of them are either insufficiently secure or suffer from significant over-defense, revealing that existing defenses remain far from ready for real-world deployment. Our benchmark is available at https://github.com/leolee99/AgentDyn.