🤖 AI Summary
Current large language models (LLMs) are static after training: they cannot update their internal parameters to handle novel tasks, evolving knowledge, or real-time interaction in open-world environments, which limits their progression toward autonomous agents. This survey reviews the emerging "self-evolving agent" paradigm, offering the first systematic treatment organized along three dimensions: *what* evolves (agent components such as models, memory, tools, and architecture), *when* it evolves (adaptation stages, e.g., intra- and inter-test-time), and *how* it evolves (driving mechanisms). Within this taxonomy, it covers methods spanning continual learning, test-time adaptation, reinforcement learning with reward shaping, natural-language feedback, and multi-agent coordination. The survey also examines evaluation metrics and benchmarks tailored to self-evolving agents, surveys application domains including code generation, education, and healthcare, and identifies critical challenges in safety, scalability, and robustness. Together, these contributions provide a structured framework for understanding and designing self-evolving agents, and chart a potential pathway toward artificial superintelligence.
📝 Abstract
Large Language Models (LLMs) have demonstrated strong capabilities but remain fundamentally static, unable to adapt their internal parameters to novel tasks, evolving knowledge domains, or dynamic interaction contexts. As LLMs are increasingly deployed in open-ended, interactive environments, this static nature has become a critical bottleneck, necessitating agents that can adaptively reason, act, and evolve in real time. This paradigm shift -- from scaling static models to developing self-evolving agents -- has sparked growing interest in architectures and methods enabling continual learning and adaptation from data, interactions, and experiences. This survey provides the first systematic and comprehensive review of self-evolving agents, organized around three foundational dimensions -- what to evolve, when to evolve, and how to evolve. We examine evolutionary mechanisms across agent components (e.g., models, memory, tools, architecture), categorize adaptation methods by stage (e.g., intra-test-time, inter-test-time), and analyze the algorithmic and architectural designs that guide evolutionary adaptation (e.g., scalar rewards, textual feedback, single-agent and multi-agent systems). Additionally, we analyze evaluation metrics and benchmarks tailored for self-evolving agents, highlight applications in domains such as coding, education, and healthcare, and identify critical challenges and research directions in safety, scalability, and co-evolutionary dynamics. By providing a structured framework for understanding and designing self-evolving agents, this survey establishes a roadmap for advancing adaptive agentic systems in both research and real-world deployments, ultimately paving the way toward Artificial Super Intelligence (ASI), where agents evolve autonomously and perform at or beyond human-level intelligence across a wide array of tasks.