🤖 AI Summary
Small-to-medium-scale language models (<100B parameters) suffer from low sample efficiency and high verification overhead on real-world software engineering tasks, such as resolving GitHub issues in SWE-Bench.
Method: We propose EvoScale, the first framework to model code generation as a self-evolving process. Leveraging reinforcement learning, EvoScale enables models to autonomously refine their output distributions at test time without external verifiers. It integrates evolutionary algorithms, self-verifying generation, and test-time scaling.
Contribution/Results: On SWE-Bench-Verified, EvoScale elevates the performance of the 32B model Satori-SWE-32B to match or surpass that of 100B+ models while using only a small number of inference samples. Crucially, it eliminates the need for costly external verification and drastically reduces sampling and scoring overhead. All code, data, and models will be fully open-sourced.
📝 Abstract
Language models (LMs) perform well on standardized coding benchmarks but struggle with real-world software engineering tasks such as resolving GitHub issues in SWE-Bench, especially for models with fewer than 100B parameters. While smaller models are preferable in practice due to their lower computational cost, improving their performance remains challenging. Existing approaches primarily rely on supervised fine-tuning (SFT) with high-quality data, which is expensive to curate at scale. An alternative is test-time scaling: generating multiple outputs, scoring them with a verifier, and selecting the best one. Although effective, this strategy often requires excessive sampling and costly scoring, limiting its practical application. We propose Evolutionary Test-Time Scaling (EvoScale), a sample-efficient method that treats generation as an evolutionary process. By iteratively refining outputs via selection and mutation, EvoScale shifts the output distribution toward higher-scoring regions, reducing the number of samples needed to find correct solutions. To reduce the overhead of repeated sampling and selection, we train the model to self-evolve using reinforcement learning (RL). Rather than relying on external verifiers at inference time, the model learns to self-improve the scores of its own generations across iterations. Evaluated on SWE-Bench-Verified, EvoScale enables our 32B model, Satori-SWE-32B, to match or exceed the performance of models with over 100B parameters while using only a few samples. Code, data, and models will be fully open-sourced.
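The selection-and-mutation loop the abstract describes can be illustrated with a toy sketch. Everything below is an assumption for illustration only: the `score` function stands in for a verifier (e.g., a test pass rate), `mutate` stands in for the LM's conditional refinement step, and the string population stands in for candidate patches; none of these are the paper's actual implementation.

```python
import random

random.seed(0)

TARGET = "return a + b"  # toy "correct patch" standing in for a verified fix
ALPHABET = "abcdefghijklmnopqrstuvwxyz +"

def score(candidate: str) -> float:
    # Toy verifier: fraction of matching characters (stand-in for a test pass rate).
    return sum(c == t for c, t in zip(candidate, TARGET)) / len(TARGET)

def mutate(candidate: str) -> str:
    # Toy mutation: resample one character (stand-in for LM-based refinement).
    i = random.randrange(len(candidate))
    return candidate[:i] + random.choice(ALPHABET) + candidate[i + 1:]

def evolve(pop_size: int = 8, iterations: int = 200, keep: int = 2) -> str:
    # Initial population: independent random samples (stand-in for initial LM outputs).
    population = ["".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
                  for _ in range(pop_size)]
    for _ in range(iterations):
        # Selection: keep the highest-scoring candidates.
        population.sort(key=score, reverse=True)
        elites = population[:keep]
        # Mutation: refine elites to refill the population.
        population = elites + [mutate(random.choice(elites))
                               for _ in range(pop_size - keep)]
    return max(population, key=score)

best = evolve()
print(best, score(best))
```

Because elites are carried over unchanged, the best score is monotonically non-decreasing, so the population's distribution drifts toward higher-scoring regions; EvoScale's RL training amortizes this loop so the model performs the refinement itself without an external verifier at inference time.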