🤖 AI Summary
Large language models (LLMs) exhibit limited dynamic adaptability in continuous decision-making and long-horizon, multi-task settings. Method: This paper proposes a self-evolving agent framework that integrates iterative closed-loop feedback, reflective reasoning, and Ebbinghaus forgetting-curve–informed memory optimization to construct a cognition-driven, memory-augmented architecture. This enables dynamic experience filtering, reinforcement, and long-term retention—moving beyond static prompting or fixed memory pools—and supports autonomous agent evolution over ultra-long contexts. Contribution/Results: Experiments demonstrate significant improvements in multi-task coordination stability, decision consistency, and task completion rates. The framework establishes a novel paradigm for developing intelligent agents with human-like continual learning capabilities, advancing the state of adaptive, memory-aware LLM-based agents.
📝 Abstract
Large language models (LLMs) have made significant advances in natural language processing, but they still face challenges in areas such as continuous decision-making. In this work, we propose a novel self-evolving agent framework that integrates iterative feedback, reflective mechanisms, and a memory optimization mechanism based on the Ebbinghaus forgetting curve. This framework significantly enhances the agents' capabilities in handling multi-task coordination and long-span information.
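To make the forgetting-curve idea concrete, the sketch below scores stored memories with an exponential retention function R = exp(-t/S) and prunes items that decay below a threshold; reinforcement raises an item's stability so it is forgotten more slowly. All class, method, and parameter names here are illustrative assumptions, not the paper's actual implementation.

```python
import math
import time

class MemoryItem:
    """One stored experience, scored with an Ebbinghaus-style retention curve.

    This is a hypothetical sketch; the real framework's memory structure
    and parameters are not specified in the abstract.
    """

    def __init__(self, content, stability=1.0):
        self.content = content
        self.stability = stability      # S: larger values decay more slowly
        self.last_access = time.time()  # t is measured from the last access

    def retention(self, now=None):
        # Ebbinghaus curve: R = exp(-t / S), with t = elapsed seconds
        now = time.time() if now is None else now
        elapsed = max(0.0, now - self.last_access)
        return math.exp(-elapsed / self.stability)

    def reinforce(self, boost=2.0):
        # Rehearsal resets the clock and strengthens the memory,
        # mimicking spaced repetition on the forgetting curve
        self.last_access = time.time()
        self.stability *= boost

def prune(memories, threshold=0.3, now=None):
    # Keep only items whose predicted retention is still above threshold
    return [m for m in memories if m.retention(now) >= threshold]
```

In this sketch, frequently reinforced experiences accumulate stability and survive pruning, while stale, unrehearsed items are filtered out, giving the dynamic experience filtering and long-term retention described above.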