SABER: Switchable and Balanced Training for Efficient LLM Reasoning

πŸ“… 2025-08-08
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
While chain-of-thought (CoT) reasoning in large language models (LLMs) improves accuracy on complex tasks, it incurs high computational cost and latency. Method: We propose SABER, a novel framework featuring user-controllable token budgeting and four swappable inference modes to dynamically allocate reasoning depth on demand. SABER integrates training with β€œno-thought” samples and reinforcement learning, incorporating system-level prompting, length-aware reward shaping, and multi-granularity budget partitioning for fine-grained, low-latency inference control. Contribution/Results: On benchmarks including MATH, SABER-FastThink reduces average reasoning length by 65.4% while improving accuracy by 3.6%, significantly outperforming existing adaptive inference methods. Moreover, SABER demonstrates strong generalization across domains and model scales.

πŸ“ Abstract
Large language models (LLMs) empowered by chain-of-thought reasoning have achieved impressive accuracy on complex tasks but suffer from excessive inference costs and latency when applied uniformly to all problems. We propose SABER (Switchable and Balanced Training for Efficient LLM Reasoning), a reinforcement learning framework that endows LLMs with user-controllable, token-budgeted reasoning. SABER first profiles each training example's base-model thinking token usage and assigns it to one of the predefined budget tiers. During fine-tuning, the model is guided by system prompts and length-aware rewards to respect its assigned budget. In parallel, we incorporate no-think examples to ensure the model remains reliable even when explicit reasoning is turned off. SABER further supports four discrete inference modes - NoThink, FastThink, CoreThink, and DeepThink, enabling flexible trade-offs between latency and reasoning depth. Extensive evaluations on math reasoning (MATH, GSM8K), code generation (MBPP), and logical reasoning (LiveBench-Reasoning) demonstrate that SABER achieves high accuracy under tight budgets, graceful degradation, and effective cross-scale and cross-domain generalization. In particular, SABER-FastThink cuts reasoning length by 65.4% and yields a 3.6% accuracy gain compared with the base model on the MATH benchmark.
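The budget-tier assignment and length-aware reward described in the abstract can be sketched roughly as follows. This is a minimal illustration under stated assumptions: the tier budgets, the penalty weight `alpha`, and the function names are hypothetical, not the paper's exact formulation.

```python
# Hypothetical sketch of SABER-style budget tiers and a length-aware reward.
# Tier boundaries and the penalty weight below are illustrative assumptions.

TIER_BUDGETS = {
    "NoThink": 0,       # explicit reasoning turned off
    "FastThink": 256,   # tight token budget
    "CoreThink": 1024,  # moderate budget
    "DeepThink": 4096,  # full-depth reasoning
}

def assign_tier(base_model_thinking_tokens: int) -> str:
    """Map a training example to the smallest tier whose budget covers
    the base model's observed thinking-token usage."""
    for tier, budget in TIER_BUDGETS.items():
        if base_model_thinking_tokens <= budget:
            return tier
    return "DeepThink"

def length_aware_reward(correct: bool, used_tokens: int, budget: int,
                        alpha: float = 0.5) -> float:
    """Correctness reward minus a penalty proportional to how far the
    model's reasoning overshoots its assigned budget."""
    accuracy_reward = 1.0 if correct else 0.0
    overshoot = max(0, used_tokens - budget)
    length_penalty = alpha * overshoot / max(budget, 1)
    return accuracy_reward - length_penalty
```

Under this shaping, a correct answer within budget keeps its full reward, while budget overruns erode it, which is one plausible way to make the model respect its assigned tier during RL fine-tuning.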
Problem

Research questions and friction points this paper is trying to address.

Reducing LLM inference costs and latency
Enabling user-controllable token-budgeted reasoning
Balancing accuracy with computational efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reinforcement learning for token-budgeted reasoning
Length-aware rewards guide budget compliance
Four discrete modes enable latency-accuracy tradeoffs
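Since the abstract states that the model is guided by system prompts to respect its budget, mode selection at inference time could look roughly like the sketch below. The prompt wordings and token limits are assumptions for illustration, not the paper's actual prompts.

```python
# Hypothetical sketch of selecting a SABER inference mode via system prompt.
# Prompt wording and budget values are illustrative assumptions.

MODE_PROMPTS = {
    "NoThink": "Answer directly without showing any reasoning.",
    "FastThink": "Think briefly (at most 256 reasoning tokens), then answer.",
    "CoreThink": "Reason step by step within 1024 tokens, then answer.",
    "DeepThink": "Reason thoroughly (up to 4096 tokens) before answering.",
}

def build_messages(question: str, mode: str = "CoreThink") -> list:
    """Prepend the mode-selecting system prompt to a user query,
    in the common chat-message format."""
    if mode not in MODE_PROMPTS:
        raise ValueError(f"unknown mode: {mode}")
    return [
        {"role": "system", "content": MODE_PROMPTS[mode]},
        {"role": "user", "content": question},
    ]
```

Keeping the mode a pure prompt-level switch means the latency-accuracy trade-off can be chosen per request without reloading or swapping model weights.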
Authors

Kai Zhao, Bilibili Inc.
Yanjun Zhao, UIUC
Jiaming Song, Stanford University
Shien He, Bilibili Inc.
Lusheng Zhang, Bilibili Inc.
Qiang Zhang, Bilibili Inc.
Tianjiao Li, Bilibili Inc.