🤖 AI Summary
This work addresses the critical challenge of uncontrolled knowledge retention in large language model (LLM) agents, which can accumulate sensitive or outdated information, posing significant privacy and security risks. The study introduces the first systematic characterization of multi-granular forgetting scenarios—spanning states, trajectories, and environments—and proposes a natural language–based directed forgetting framework that automatically translates high-level forgetting requests into executable prompts to selectively erase targeted knowledge from the agent. To evaluate forgetting efficacy, the authors devise a novel assessment methodology combining adversarial inference attacks with behavioral observation. Experimental results demonstrate that the proposed approach effectively removes specified information, substantially reducing an attacker’s inference success rate while preserving performance on unrelated tasks.
📝 Abstract
Large language model (LLM)-based agents have recently gained considerable attention due to the powerful reasoning capabilities of LLMs. Existing research predominantly focuses on enhancing the task performance of these agents in diverse scenarios. However, as LLM-based agents become increasingly integrated into real-world applications, significant concerns emerge regarding their accumulation of sensitive or outdated knowledge. Addressing these concerns requires mechanisms that allow agents to selectively forget previously learned knowledge, giving rise to a new term: LLM-based agent unlearning. This paper initiates research on unlearning in LLM-based agents. Specifically, we propose a novel and comprehensive framework that categorizes unlearning scenarios into three contexts: state unlearning (forgetting specific states or items), trajectory unlearning (forgetting sequences of actions), and environment unlearning (forgetting entire environments or categories of tasks). Within this framework, we introduce a natural language–based unlearning method that trains a conversion model to transform high-level unlearning requests into actionable unlearning prompts, guiding agents through a controlled forgetting process. Moreover, to evaluate the robustness of the proposed framework, we introduce an unlearning inference adversary capable of crafting prompts, querying agents, and observing their behaviors in an attempt to infer the forgotten knowledge. Experimental results show that our approach effectively enables agents to forget targeted knowledge while preserving performance on untargeted tasks, and prevents the adversary from inferring the forgotten knowledge.
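The pipeline the abstract describes — a conversion model that turns a high-level forgetting request into an actionable unlearning prompt, an agent that applies it, and an adversary that probes for leakage — can be sketched as a toy simulation. Everything here is illustrative: the names (`convert_request`, `ToyAgent`, `adversary_infers`) are hypothetical, and the paper's trained conversion model and LLM agent are stubbed with simple rule-based stand-ins.

```python
# Toy sketch of the directed-forgetting pipeline from the abstract.
# The real system uses a trained conversion model and an LLM agent;
# here both are replaced by minimal rule-based stand-ins.

from dataclasses import dataclass, field

# The three unlearning contexts named in the paper's framework.
SCOPES = ("state", "trajectory", "environment")

def convert_request(request: str) -> str:
    """Stand-in for the conversion model: map a high-level unlearning
    request to an actionable unlearning prompt for the agent."""
    scope = next((s for s in SCOPES if s in request.lower()), "state")
    return f"[UNLEARN:{scope}] Do not use or reveal: {request}"

@dataclass
class ToyAgent:
    """Toy agent whose 'knowledge' is a set of memorised facts."""
    memory: set = field(default_factory=set)
    forgotten: set = field(default_factory=set)

    def apply_unlearning_prompt(self, prompt: str) -> None:
        # Forget every memorised fact mentioned in the prompt.
        for fact in list(self.memory):
            if fact in prompt:
                self.memory.discard(fact)
                self.forgotten.add(fact)

    def answer(self, query: str) -> str:
        # Answer only from knowledge still held in memory.
        hits = [f for f in self.memory if f in query]
        return hits[0] if hits else "unknown"

def adversary_infers(agent: ToyAgent, probes: list) -> bool:
    """Unlearning inference adversary: query the agent, observe its
    behavior, and report whether any forgotten fact leaks."""
    return any(agent.answer(p) in agent.forgotten for p in probes)

agent = ToyAgent(memory={"user_address", "task_recipe"})
agent.apply_unlearning_prompt(convert_request("state: user_address"))

print(agent.answer("what is user_address?"))          # targeted: forgotten
print(agent.answer("give me task_recipe"))            # untargeted: preserved
print(adversary_infers(agent, ["what is user_address?"]))
```

Even in this trivial form, the sketch mirrors the evaluation criteria from the abstract: the targeted item becomes unanswerable, unrelated knowledge stays intact, and the adversary's probes fail to recover the forgotten content.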