Apriel-Reasoner: RL Post-Training for General-Purpose and Efficient Reasoning

📅 2026-04-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenges of multi-domain reasoning in reinforcement learning—namely, significant domain disparities, high computational costs from long reasoning trajectories, and uneven sample efficiency—by proposing a reproducible post-training framework based on the 15B-parameter Apriel-Base model. The approach integrates adaptive domain sampling to preserve target-domain proportions and a zero-overhead difficulty-aware length penalty that encourages longer reasoning for hard problems and shorter responses for easy ones. Jointly optimizing across five domains—mathematics, code generation, instruction following, logic puzzles, and function calling—under a 16K-token output budget, the method improves over the Apriel-Base baseline on AIME 2025, GPQA, MMLU-Pro, and LiveCodeBench, producing 30–50% shorter reasoning trajectories and pushing the Pareto frontier between accuracy and reasoning length at reduced token cost.
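The summary's "zero-overhead" claim rests on the fact that per-prompt pass rates are already computed from the collected rollouts, so difficulty comes for free. The paper's exact formula is not given here; a minimal sketch of one plausible form, using the empirical pass rate as an easiness proxy (the function name and `alpha` scale are hypothetical):

```python
def length_penalty(resp_len: int, max_len: int, pass_rate: float,
                   alpha: float = 0.2) -> float:
    """Hypothetical difficulty-aware length penalty (a sketch, not the
    paper's exact formula).

    pass_rate: fraction of rollouts for this prompt that were correct,
    a proxy for problem easiness. It is computed from rollouts the RL
    loop collects anyway, hence no extra training overhead.
    """
    norm = resp_len / max_len  # normalized length in [0, 1]
    # Easy prompts (high pass_rate) receive the full penalty for long
    # answers; hard prompts (low pass_rate) are penalized little,
    # leaving room for longer reasoning.
    return -alpha * pass_rate * norm
```

Under this sketch, a 12K-token answer to a prompt the policy almost always solves is penalized far more than the same-length answer to a prompt it rarely solves, which matches the stated behavior (shorter responses for easy problems, longer reasoning for hard ones).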
📝 Abstract
Building general-purpose reasoning models using reinforcement learning with verifiable rewards (RLVR) across diverse domains has been widely adopted by frontier open-weight models. However, their training recipes and domain mixtures are often not disclosed. Joint optimization across domains poses significant challenges: domains vary widely in rollout length, problem difficulty and sample efficiency. Further, models with long chain-of-thought traces increase inference cost and latency, making efficiency critical for practical deployment. We present Apriel-Reasoner, trained with a fully reproducible multi-domain RL post-training recipe on Apriel-Base, a 15B-parameter open-weight LLM, across five domains using public datasets: mathematics, code generation, instruction following, logical puzzles and function calling. We introduce an adaptive domain sampling mechanism that preserves target domain ratios despite heterogeneous rollout dynamics, and a difficulty-aware extension of the standard length penalty that, with no additional training overhead, encourages longer reasoning for difficult problems and shorter traces for easy ones. Trained with a strict 16K-token output budget, Apriel-Reasoner generalizes to 32K tokens at inference and improves over Apriel-Base on AIME 2025, GPQA, MMLU-Pro, and LiveCodeBench while producing 30–50% shorter reasoning traces. It matches strong open-weight models of similar size at lower token cost, thereby pushing the Pareto frontier of accuracy versus token budget.
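The abstract notes that heterogeneous rollout dynamics (different lengths, filtering rates, throughput per domain) cause realized batch composition to drift from the intended domain mixture. The paper's mechanism is not spelled out in this card; a minimal deficit-based sketch of the idea, tracking how many samples from each domain actually reached the optimizer (all names here are hypothetical):

```python
def adaptive_sampler(targets: dict, realized: dict) -> str:
    """Hypothetical adaptive domain sampler (a sketch, not the paper's
    exact mechanism).

    targets:  desired domain mixture, e.g. {"math": 0.4, "code": 0.6};
              values should sum to 1.
    realized: running counts of samples per domain that actually made
              it into training batches so far.

    Returns the domain to draw the next prompt from.
    """
    total = sum(realized.values()) or 1
    # Deficit = how far each domain's realized share lags its target.
    deficits = {d: targets[d] - realized.get(d, 0) / total for d in targets}
    # Greedily draw from the most under-represented domain, so realized
    # ratios track the targets despite uneven per-domain throughput.
    return max(deficits, key=deficits.get)
```

Each time a domain's rollouts survive filtering and enter a batch, its count in `realized` is incremented; the greedy deficit rule then keeps the long-run mixture near `targets` even when one domain (say, code with long rollouts) yields usable samples more slowly than another.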
Problem

Research questions and friction points this paper is trying to address.

reinforcement learning
multi-domain reasoning
reasoning efficiency
token budget
general-purpose reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

reinforcement learning with verifiable rewards
adaptive domain sampling
difficulty-aware length penalty
efficient reasoning
multi-domain post-training