🤖 AI Summary
Existing self-improvement mechanisms for large language models (LLMs) lack a rigorous theoretical foundation.
Method: We propose the “Solver-Verifier Gap” framework—the first analytically tractable theory of LLM self-improvement dynamics—by decoupling model capability into solver and verifier dimensions, modeling their co-evolution as a dynamical system, and predicting final performance gains from early-stage training data. Empirical validation spans multiple models and datasets.
Contributions: (1) We identify an intrinsic optimization drive: performance gains arise from the solver’s lag behind the verifier, creating persistent pressure for refinement. (2) We prove that injecting finite external data at any training stage does not alter the asymptotic performance upper bound, providing theoretical justification for flexible data-intervention strategies. (3) We establish the first predictive and interpretable theoretical paradigm for LLM self-improvement—grounded in formal analysis, empirically validated, and amenable to principled design.
📝 Abstract
Self-improvement is among the most prominent techniques in the realm of large language models (LLMs), aiming to enhance LLM performance without relying on external data. Despite its significance, how LLM performance generally evolves during the self-improvement process remains underexplored. In this paper, we theoretically model the training dynamics of self-improvement via the concept of the solver-verifier gap. This is inspired by the conjecture that the performance enhancement of self-improvement stems from the gap between the LLM's solver capability and its verifier capability. Building on this theoretical framework, we further show how to predict the ultimate gain of self-improvement using only information from the first few training epochs. We empirically validate the effectiveness of the theoretical model on various LLMs and datasets. Beyond self-improvement, we extend our analysis to investigate how external data influences these dynamics within the framework. Notably, we find that under limited external data regimes, such external data can be utilized at any stage without significantly affecting final performance, which accords with empirical observations.
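The gap-driven dynamic and the asymptote-invariance claim can be illustrated with a toy simulation. This is only a sketch under assumed linear dynamics, not the paper's actual model: `alpha`, `beta`, `v_max`, and the one-off `boost` term are all hypothetical parameters chosen for illustration.

```python
# Toy sketch (not the paper's actual equations): solver capability s(t)
# chases verifier capability v(t); improvement is driven by the gap v - s,
# and the verifier saturates toward a fixed ceiling v_max.

def simulate(steps=2000, dt=0.05, alpha=0.5, beta=0.1, v_max=1.0,
             boost=0.0, boost_step=None):
    """Euler-integrate a hypothetical solver-verifier dynamic:
        ds/dt = alpha * (v - s)        # solver closes the gap to the verifier
        dv/dt = beta  * (v_max - v)    # verifier saturates toward a ceiling
    A one-off 'boost' to s models injecting finite external data at boost_step.
    Returns the final solver capability."""
    s, v = 0.1, 0.3
    for t in range(steps):
        if boost_step is not None and t == boost_step:
            s = min(s + boost, v_max)   # finite external data: a state shift
        s += dt * alpha * (v - s)
        v += dt * beta * (v_max - v)
    return s

base  = simulate()                              # no external data
early = simulate(boost=0.2, boost_step=100)     # inject early
late  = simulate(boost=0.2, boost_step=1000)    # inject late
# All runs approach the same ceiling: a finite injection shifts the
# trajectory, not the asymptote, regardless of when it is applied.
print(round(base, 3), round(early, 3), round(late, 3))
```

Because the gap term contracts exponentially, the early-stage trajectory also determines the asymptote, which is the intuition behind predicting the ultimate gain from the first few epochs.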