🤖 AI Summary
This work addresses the high cost, poor scalability, and diminishing effectiveness of human-supervised approaches for improving large language models, especially as model capabilities approach human-level performance. To overcome these limitations, the paper proposes a closed-loop self-improvement framework that structures the self-improvement process into four tightly coupled stages: data acquisition, data selection, model optimization, and inference refinement. A key innovation is the introduction of an autonomous evaluation layer that coordinates and guides transitions across these stages. This framework offers the first systematic, lifecycle-oriented modeling of self-improvement, unifying critical components such as self-generated data, automated evaluation, iterative training, and inference-time optimization. By comprehensively mapping existing technical pathways and their limitations, the study lays the groundwork for realizing fully autonomous, self-evolving language models.
📝 Abstract
As large language models (LLMs) continue to advance, improving them solely through human supervision is becoming increasingly costly and limited in scalability. As models approach human-level capabilities in certain domains, human feedback may no longer provide sufficiently informative signals for further improvement. At the same time, the growing ability of models to make autonomous decisions and execute complex actions makes it feasible to progressively automate components of the model development process. Together, these challenges and opportunities have driven increasing interest in self-improvement, where models autonomously generate data, evaluate outputs, and iteratively refine their own capabilities. In this paper, we present a system-level perspective on self-improving language models and introduce a unified framework that organizes existing techniques. We conceptualize the self-improvement system as a closed-loop lifecycle, consisting of four tightly coupled processes: data acquisition, data selection, model optimization, and inference refinement, along with an autonomous evaluation layer. Within this framework, the model itself plays a central role in driving each stage: collecting or generating data, selecting informative signals, updating its parameters, and refining outputs, while the autonomous evaluation layer continuously monitors progress and guides the improvement cycle across stages. Following this lifecycle perspective, we systematically review and analyze representative methods for each component from a technical standpoint. We further discuss current limitations and outline our vision for future research toward fully self-improving LLMs.
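To make the lifecycle concrete, the four stages and the evaluation layer described above can be sketched as a simple control loop. This is an illustrative toy, not the paper's implementation: all class and method names (`ToyModel`, `Evaluator`, `run_cycle`, the scalar `skill`) are hypothetical stand-ins for the actual data-generation, selection, training, and inference components.

```python
from dataclasses import dataclass

# Toy sketch of the closed-loop self-improvement lifecycle:
#   data acquisition -> data selection -> model optimization
#   -> inference refinement, coordinated by an evaluation layer.
# All names are illustrative assumptions, not from the paper.

@dataclass
class ToyModel:
    skill: float = 0.0  # stand-in for model parameters/capability

    def acquire(self):
        # Stage 1: self-generate candidate training data.
        return [self.skill + d for d in (-0.2, 0.1, 0.3)]

    def optimize(self, data):
        # Stage 3: update "parameters" on the selected data.
        self.skill = max(data)

    def refine(self):
        # Stage 4: inference-time refinement of outputs.
        return self.skill + 0.05

class Evaluator:
    """Autonomous evaluation layer: selects data and decides when to stop."""

    def select(self, data):
        # Stage 2: keep only samples judged informative (here: improving ones).
        kept = [x for x in data if x > 0]
        return kept or data

    def done(self, score, target=1.0):
        # Monitor progress and gate transitions across iterations.
        return score >= target

def run_cycle(model, evaluator, max_iters=10):
    # Closed loop: each pass runs all four stages, guided by the evaluator.
    for _ in range(max_iters):
        candidates = model.acquire()
        model.optimize(evaluator.select(candidates))
        if evaluator.done(model.refine()):
            break
    return model.skill
```

In this sketch the evaluator plays the coordinating role the framework assigns to the autonomous evaluation layer: it filters self-generated data before optimization and monitors refined outputs to decide whether another improvement cycle is warranted.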