🤖 AI Summary
This work addresses catastrophic forgetting in large language model agents during continual learning—a manifestation of the stability-plasticity dilemma—by proposing the Agent-Dice framework. Agent-Dice explicitly decouples knowledge into shared and conflicting components, filtering out conflicting gradients via geometric consensus evaluation and reinforcing shared semantics through curvature-aware importance weighting. It further introduces a novel parameter fusion mechanism based on directional consistency to enable efficient gradient disentanglement. Theoretical analysis reveals the geometric underpinnings of the stability-plasticity trade-off. Empirical results demonstrate that Agent-Dice significantly enhances continual learning performance on tasks such as GUI interaction and tool use, while maintaining minimal computational overhead and parameter update costs.
📝 Abstract
Large Language Model (LLM)-based agents significantly extend the utility of LLMs by interacting with dynamic environments. However, enabling agents to continually learn new tasks without catastrophic forgetting remains a critical challenge, known as the stability-plasticity dilemma. In this work, we argue that this dilemma fundamentally arises from the failure to explicitly distinguish between common knowledge shared across tasks and conflicting knowledge introduced by task-specific interference. To address this, we propose Agent-Dice, a parameter fusion framework based on directional consensus evaluation. Concretely, Agent-Dice disentangles knowledge updates through a two-stage process: geometric consensus filtering to prune conflicting gradients, and curvature-based importance weighting to amplify shared semantics. We provide a rigorous theoretical analysis that establishes the validity of the proposed fusion scheme and offers insight into the origins of the stability-plasticity dilemma. Extensive experiments in the GUI agent and tool-use agent domains demonstrate that Agent-Dice achieves outstanding continual learning performance with minimal computational overhead and parameter updates. The code is available at https://github.com/Wuzheng02/Agent-Dice.