Multi-objective Evolutionary Merging Enables Efficient Reasoning Models

📅 2026-04-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of substantially compressing chain-of-thought reasoning length—and thus inference-time computational overhead—while maintaining or even improving accuracy. It is the first to introduce a multi-objective evolutionary algorithm into model merging, formulating long-to-short (L2S) reasoning as a Pareto optimization problem that balances accuracy against output length, thereby avoiding the brittleness of conventional scalarization approaches. The authors propose an entropy-based subset sampling technique to make fitness evaluation tractable for large models, and construct a robust ensemble from models lying on the resulting Pareto front. Evaluated across six mathematical reasoning benchmarks, the method compresses reasoning trajectories by over 50% for models ranging from 1.5B to 14B parameters, without compromising—and often improving—solution accuracy.
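The entropy-based subset sampling idea can be illustrated with a minimal sketch. The paper does not publish its exact procedure here, so the function names and selection rule below are assumptions: the intuition is that validation items where the model's predictive distribution has high entropy discriminate best between candidate merges, so a small high-entropy subset can stand in for the full benchmark during fitness estimation.

```python
import math

def entropy(probs):
    """Shannon entropy (in nats) of a predictive distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_eval_subset(item_probs, k):
    """Hypothetical subset-sampling rule: keep the k validation items
    whose predictive distributions have the highest entropy, i.e. the
    items most likely to separate candidate merged models."""
    ranked = sorted(item_probs.items(),
                    key=lambda kv: entropy(kv[1]),
                    reverse=True)
    return [item_id for item_id, _ in ranked[:k]]
```

For example, an item the model answers with near-certainty (`[0.99, 0.01]`) contributes little signal and is dropped before more ambiguous items.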
📝 Abstract
Reasoning models have demonstrated remarkable capabilities in solving complex problems by leveraging long chains of thought. However, this more deliberate reasoning comes with substantial computational overhead at inference time. The Long-to-Short (L2S) reasoning problem seeks to maintain high accuracy using fewer tokens, but current training-free model merging approaches rely on scalarized, fixed-hyperparameter arithmetic methods that are highly brittle and force suboptimal compromises. To address this gap, we introduce Evo-L2S, a novel framework that formulates L2S reasoning as a multi-objective optimization challenge. By leveraging evolutionary model merging, Evo-L2S explicitly optimizes the trade-off between accuracy and output length to produce a robust Pareto front of merged models. To make this search computationally tractable for large language models, we propose an entropy-based subset sampling technique that drastically reduces the overhead of fitness estimation. Comprehensive experiments across 1.5B, 7B, and 14B parameter scales on six mathematical reasoning benchmarks demonstrate that Evo-L2S can reduce the length of generated reasoning traces by over 50% while preserving, or even improving, the problem-solving accuracy of the original reasoning models.
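The core search described in the abstract can be sketched as a tiny multi-objective evolutionary loop. This is not the paper's implementation: the scalar merge coefficient, mutation scheme, and toy fitness function below are all illustrative assumptions. What it does show faithfully is the Pareto formulation—candidates are kept when no other candidate is at least as good on both objectives (error and length) and strictly better on one.

```python
import random

def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly
    better in at least one (all objectives are minimised)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(pop, fitness):
    """Non-dominated subset of a population under a tuple-valued fitness."""
    scored = [(c, fitness(c)) for c in pop]
    return [c for c, f in scored
            if not any(dominates(g, f) for _, g in scored if g != f)]

def evolve(fitness, generations=30, pop_size=16, seed=0):
    """Toy evolutionary search over a scalar merge coefficient in [0, 1].
    `fitness(alpha)` returns objectives to minimise, e.g.
    (error_rate, mean_output_length); both names are placeholders for
    real benchmark evaluations of the merged model."""
    rng = random.Random(seed)
    clip = lambda a: min(1.0, max(0.0, a))
    pop = [rng.random() for _ in range(pop_size)]
    for _ in range(generations):
        # Gaussian mutation, then survivor selection by non-dominance.
        children = [clip(a + rng.gauss(0, 0.1)) for a in pop]
        front = pareto_front(pop + children, fitness)
        refill = [clip(rng.choice(front) + rng.gauss(0, 0.1))
                  for _ in range(max(0, pop_size - len(front)))]
        pop = (front + refill)[:pop_size]
    return sorted(pareto_front(pop, fitness))
```

Instead of collapsing accuracy and length into one weighted score, the loop returns the whole front, so the brittle fixed-hyperparameter compromise the abstract criticises never has to be chosen up front.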
Problem

Research questions and friction points this paper is trying to address.

reasoning models
Long-to-Short reasoning
model merging
accuracy-length trade-off
computational overhead
Innovation

Methods, ideas, or system contributions that make the work stand out.

multi-objective optimization
evolutionary model merging
reasoning compression
entropy-based sampling
Pareto front