🤖 AI Summary
To address the limitations of large language models (LLMs) in long-term memory retention, constrained context windows, and sustained decision-making in dynamic environments, this paper proposes a tri-agent collaborative framework (User, Assistant, and Checker agents) that integrates reflective reasoning, iterative feedback-driven policy optimization, and forgetting-aware memory management. Notably, it introduces the Ebbinghaus forgetting curve into LLM memory retrieval and update mechanisms for the first time. The framework further incorporates a reflective self-improvement mechanism that unifies multi-task coordination with long-horizon information preservation. Experiments on multi-turn dialogue and complex planning benchmarks demonstrate significant improvements: task completion rate and cross-turn consistency increase by 12.6% over strong baselines, validating the framework's effectiveness in sustaining coherent, adaptive behavior over extended interactions.
📝 Abstract
Large language models (LLMs) have made significant advances in natural language processing, but they still face challenges in dynamic environments, including sustained decision-making, the lack of long-term memory, and limited context windows. To address these issues, this paper proposes an innovative framework, Memory-Enhanced Agents with Reflective Self-improvement (MARS). The MARS framework comprises three agents: the User, the Assistant, and the Checker. By integrating iterative feedback, reflective mechanisms, and a memory optimization mechanism based on the Ebbinghaus forgetting curve, it significantly enhances the agents' capabilities in handling multi-task scenarios and long-span information.
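The abstract does not give the exact form of the forgetting-curve mechanism, but the general idea can be sketched as follows. This is a minimal illustration, not the paper's implementation: the retention formula R = exp(-t/S), the rehearsal boost, and the class names are all assumptions for exposition.

```python
import math

class MemoryItem:
    """A stored memory whose retrievability decays per the Ebbinghaus curve."""

    def __init__(self, content, now=0.0, strength=1.0):
        self.content = content
        self.strength = strength    # S: memory stability; grows when rehearsed (assumed)
        self.last_access = now      # time of last storage or retrieval

    def retention(self, now):
        # Ebbinghaus forgetting curve: R = exp(-t / S), t = elapsed time
        return math.exp(-(now - self.last_access) / self.strength)

    def rehearse(self, now, boost=2.0):
        # Retrieval resets the decay clock and strengthens the memory (assumed policy)
        self.last_access = now
        self.strength *= boost


class MemoryStore:
    """Forgetting-aware store: weak memories are dropped, retrieved ones reinforced."""

    def __init__(self, threshold=0.3):
        self.items = []
        self.threshold = threshold  # forget items whose retention falls below this

    def add(self, content, now):
        self.items.append(MemoryItem(content, now=now))

    def retrieve(self, now, top_k=3):
        # Prune forgotten items, return the best-retained ones, and rehearse them
        self.items = [m for m in self.items if m.retention(now) >= self.threshold]
        ranked = sorted(self.items, key=lambda m: m.retention(now), reverse=True)[:top_k]
        for m in ranked:
            m.rehearse(now)
        return [m.content for m in ranked]
```

Under this sketch, a memory accessed recently is retrieved and thereby strengthened, while an unused memory's retention decays toward zero and it is eventually pruned, which is the forgetting-aware behavior the abstract describes.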