Adaptive Dual Reasoner: Large Reasoning Models Can Think Efficiently by Hybrid Reasoning

📅 2025-10-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large reasoning models (LRMs) often suffer from excessive computation and high latency due to overthinking. To address this, we propose an adaptive dual-inference mechanism that implements a context-aware fast/slow dual-mode architecture, coupled with an entropy-guided hybrid strategy optimization framework for dynamic trade-offs between reasoning quality and efficiency. Methodologically, our approach employs a two-stage training paradigm—supervised fine-tuning followed by reinforcement learning—integrating mixed-inference data construction, entropy-driven dynamic token expansion, and difficulty-aware penalization. Evaluated on mathematical reasoning benchmarks, our method achieves up to a 6.1% accuracy gain while reducing average inference length by 49.5%–59.3%, significantly outperforming existing methods. The core contribution lies in the first integration of entropy-aware control and adaptive dual-path switching into LRM inference, jointly enhancing accuracy, efficiency, and interpretability.
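The summary describes a context-aware fast/slow dual-mode architecture that routes reasoning through either a fast or a slow mode depending on contextual complexity. The paper learns this routing end to end, but the idea can be sketched with a hypothetical controller; the `Segment` type, the `uncertainty` signal, and the threshold below are illustrative assumptions, not the paper's actual mechanism.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    text: str
    uncertainty: float  # placeholder complexity signal, e.g. mean next-token entropy

def choose_mode(segment: Segment, slow_threshold: float = 1.0) -> str:
    """Route a reasoning segment to slow (deliberate) or fast (concise) thinking.

    The threshold-based rule is a stand-in: in ADR the switching behaviour is
    learned via SFT and RL rather than hand-coded.
    """
    return "slow" if segment.uncertainty >= slow_threshold else "fast"

def plan_modes(segments: list[Segment], slow_threshold: float = 1.0) -> list[str]:
    """Assign a reasoning mode to each segment of a solution trace."""
    return [choose_mode(s, slow_threshold) for s in segments]
```

For example, a routine recall step would stay in fast mode while a high-uncertainty case split would trigger slow thinking.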

📝 Abstract
Although Large Reasoning Models (LRMs) achieve superior performance across various reasoning scenarios, they often suffer from increased computational cost and inference latency caused by overthinking. To address these limitations, we propose the Adaptive Dual Reasoner (ADR), which supports two reasoning modes: fast thinking and slow thinking. ADR dynamically alternates between these modes according to the contextual complexity encountered during reasoning. ADR is trained in two stages: (1) a cold-start stage using supervised fine-tuning (SFT) to equip the model with the ability to integrate both fast and slow reasoning modes, for which we construct a hybrid reasoning dataset through a dedicated pipeline to provide large-scale supervision; and (2) a reinforcement learning stage for optimizing reasoning effort, where we introduce Entropy-guided Hybrid Policy Optimization (EHPO), an RL training framework employing an entropy-guided dynamic rollout strategy that branches at high-entropy units, together with a difficulty-aware penalty to balance fast and slow reasoning. Across challenging mathematical reasoning benchmarks, ADR achieves an effective balance between reasoning performance and efficiency relative to state-of-the-art approaches. Specifically, ADR yields a performance gain of up to 6.1% while reducing reasoning output length by 49.5% to 59.3%.
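The abstract's EHPO stage branches rollouts at "high-entropy units", i.e. positions where the model is most uncertain about the next token. A minimal sketch of that selection step, assuming access to per-step next-token distributions; the function names, the threshold, and the branch cap are all illustrative assumptions, since the paper does not publish its implementation.

```python
import math

def token_entropy(probs: list[float]) -> float:
    """Shannon entropy (in nats) of a next-token probability distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_branch_points(step_probs: list[list[float]],
                         threshold: float = 1.5,
                         max_branches: int = 4) -> list[int]:
    """Pick reasoning steps whose next-token entropy exceeds a threshold.

    In an EHPO-style rollout these high-uncertainty positions would seed
    additional branches; the threshold and cap here are arbitrary choices.
    """
    entropies = {i: token_entropy(p) for i, p in enumerate(step_probs)}
    high = [i for i, h in entropies.items() if h > threshold]
    # Keep the highest-entropy positions first, capped at max_branches.
    high.sort(key=lambda i: -entropies[i])
    return high[:max_branches]
```

A sharply peaked distribution (low entropy) is skipped, while a near-uniform one (high entropy) becomes a branching point.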
Problem

Research questions and friction points this paper is trying to address.

Reducing computational costs in long reasoning models
Balancing fast and slow thinking modes dynamically
Optimizing reasoning performance while minimizing output length
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hybrid fast and slow thinking modes
Dynamic mode switching based on complexity
Two-stage training with SFT and RL
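The RL stage above uses a difficulty-aware penalty to balance fast and slow reasoning. One plausible shape for such a reward, sketched here as an assumption (the paper does not specify its reward function): scale a length penalty down on hard problems so slow thinking is tolerated when needed, and up on easy ones to encourage fast thinking. `difficulty` and `length` are taken to be normalized to [0, 1], and `alpha` is an arbitrary penalty weight.

```python
def difficulty_aware_reward(correct: bool, length: float, difficulty: float,
                            base: float = 1.0, alpha: float = 0.5) -> float:
    """Hypothetical reward shaping for a difficulty-aware length penalty.

    correct    -- whether the final answer is right
    length     -- output length normalized to [0, 1]
    difficulty -- problem difficulty in [0, 1]; harder problems are
                  penalized less for long (slow-thinking) outputs
    """
    penalty = alpha * (1.0 - difficulty) * length
    return (base if correct else 0.0) - penalty
```

Under this shaping, a correct long answer on a hard problem keeps nearly the full reward, while the same length on an easy problem is penalized, pushing the policy toward fast thinking where slow thinking adds nothing.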
Yujian Zhang, Tencent Youtu Lab
Keyu Chen, Tencent Youtu Lab
Zhifeng Shen, Tencent Youtu Lab
Ruizhi Qiao, Tencent Youtu Lab
Xing Sun, Tencent Youtu Lab

Artificial intelligence · LLM · MLLM · Agent