🤖 AI Summary
This work identifies a critical security vulnerability in self-evolving large language model (LLM) agents: during cross-session updates of long-term memory, they may inadvertently固化 untrusted content as persistent instructions, leading to enduring safety risks. To exploit this, the paper introduces “ZombieAgent,” the first black-box persistent attack framework targeting such agents. It operates in two phases—first indirectly injecting a malicious payload during an infection stage, then activating it in a trigger stage to manipulate tool usage while preserving normal task performance. By integrating a sliding window with retrieval-augmented memory mechanisms, the attack effectively evades truncation and relevance-based filtering, enabling stealthy payload retention and cross-session activation. Experiments demonstrate that a single indirect injection reliably induces unauthorized behaviors across diverse tasks and agent configurations, exposing fundamental limitations in current defense strategies that rely solely on per-session filtering.
📝 Abstract
Self-evolving LLM agents update their internal state across sessions, often by writing and reusing long-term memory. This design improves performance on long-horizon tasks but creates a security risk: untrusted external content observed during a benign session can be stored as memory and later treated as instruction. We study this risk and formalize a persistent attack we call a Zombie Agent, where an attacker covertly implants a payload that survives across sessions, effectively turning the agent into a puppet of the attacker.
We present a black-box attack framework that uses only indirect exposure through attacker-controlled web content. The attack has two phases. During infection, the agent reads a poisoned source while completing a benign task and writes the payload into long-term memory through its normal update process. During trigger, the payload is retrieved or carried forward and causes unauthorized tool behavior. We design mechanism-specific persistence strategies for common memory implementations, including sliding-window and retrieval-augmented memory, to resist truncation and relevance filtering. We evaluate the attack on representative agent setups and tasks, measuring both persistence over time and the ability to induce unauthorized actions while preserving benign task quality. Our results show that memory evolution can convert one-time indirect injection into persistent compromise, which suggests that defenses focused only on per-session prompt filtering are not sufficient for self-evolving agents.