Learning to Self-Evolve

📅 2026-03-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current large language models are not explicitly trained to refine their own contextual representations at inference time, which limits their ability to leverage feedback for improved performance on novel tasks. This work proposes a reinforcement learning framework that, for the first time, formulates self-evolution as a learnable skill. By introducing a tree-guided context editing mechanism, the approach enables multi-step self-evolution driven by single-step rewards at test time. This paradigm moves beyond reliance on a model's intrinsic reasoning capabilities and provides guidance that transfers across different architectures. Empirical results show that a 4B-parameter model equipped with this method surpasses advanced self-evolution strategies—including those of GPT-5 and Claude Sonnet 4.5—as well as prompt optimization techniques such as GEPA and TextGrad on challenging benchmarks like Text-to-SQL (BIRD) and question answering (MMLU-Redux).

📝 Abstract
We introduce Learning to Self-Evolve (LSE), a reinforcement learning framework that trains large language models (LLMs) to improve their own contexts at test time. We situate LSE in the setting of test-time self-evolution, where a model iteratively refines its context from feedback on seen problems to perform better on new ones. Existing approaches rely entirely on the inherent reasoning ability of the model and never explicitly train it for this task. LSE reduces the multi-step evolution problem to a single-step RL objective, where each context edit is rewarded by the improvement in downstream performance. We pair this objective with a tree-guided evolution loop. On Text-to-SQL generation (BIRD) and general question answering (MMLU-Redux), a 4B-parameter model trained with LSE outperforms self-evolving policies powered by GPT-5 and Claude Sonnet 4.5, as well as prompt optimization methods including GEPA and TextGrad, and transfers to guide other models without additional training. Our results highlight the effectiveness of treating self-evolution as a learnable skill.
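The abstract's core mechanism—rewarding each context edit by the single-step improvement in downstream performance inside a tree-guided evolution loop—can be illustrated with a toy sketch. Everything below is hypothetical: the `evaluate`, `propose_edit`, and `self_evolve` functions, the vocabulary, and the greedy expansion strategy are stand-ins for illustration only, not the paper's actual training objective or search procedure.

```python
import random

random.seed(0)

TARGET = "be concise and cite evidence"  # stand-in for an ideal context

def evaluate(context: str) -> float:
    """Toy downstream score: word overlap with the target context."""
    words = set(context.split())
    return len(words & set(TARGET.split())) / len(TARGET.split())

def propose_edit(context: str) -> str:
    """Toy edit 'policy': append one candidate instruction word."""
    vocab = ["be", "concise", "and", "cite", "evidence", "verbose", "guess"]
    return (context + " " + random.choice(vocab)).strip()

def self_evolve(root: str, depth: int = 4, branching: int = 5):
    """Greedy tree-guided loop: at each step, expand the best node with
    several candidate edits and score each edit by the single-step
    improvement in downstream performance (the RL reward signal)."""
    best, best_score = root, evaluate(root)
    for _ in range(depth):
        children = [propose_edit(best) for _ in range(branching)]
        step_score, step_best = max((evaluate(c), c) for c in children)
        reward = step_score - best_score  # single-step improvement reward
        if reward <= 0:  # no candidate edit improves: stop evolving
            break
        best, best_score = step_best, step_score
    return best, best_score

ctx, score = self_evolve("")
print(ctx, round(score, 2))
```

In LSE the proposer is a trained LLM policy and the evaluator is real downstream task performance; the key idea preserved here is that the multi-step evolution problem is decomposed into single-step edit rewards, so the policy never needs credit assignment over the whole trajectory.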
Problem

Research questions and friction points this paper is trying to address.

self-evolution
test-time adaptation
context refinement
reinforcement learning
large language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Learning to Self-Evolve
test-time self-evolution
reinforcement learning
context editing
prompt optimization