SwiReasoning: Switch-Thinking in Latent and Explicit for Pareto-Superior Reasoning LLMs

📅 2025-10-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
LLMs face two key challenges in latent reasoning: excessive latent reasoning paths disperse probability mass and hinder convergence, and overthinking persists even without explicit textual cues, reducing token efficiency. This paper proposes SwiReasoning, a training-free, dynamic reasoning framework that adaptively switches between latent and explicit reasoning via entropy-trend-driven, block-level confidence estimation, while imposing a budget on the number of thinking-block switches to suppress redundancy. Its core innovation lies in jointly optimizing the exploration–convergence trade-off through dynamic mode switching and budget-aware block limiting. Evaluated on multiple mathematics and STEM benchmarks, SwiReasoning achieves average accuracy gains of 1.5%–2.8% across model families and scales, and under tight budget constraints improves average token efficiency by 56%–79%, with larger gains as budgets tighten, delivering Pareto-superior trade-offs between accuracy and efficiency.

📝 Abstract
Recent work shows that, beyond discrete reasoning through explicit chain-of-thought steps, which are limited by the boundaries of natural languages, large language models (LLMs) can also reason continuously in latent space, allowing richer information per step and thereby improving token efficiency. Despite this promise, latent reasoning still faces two challenges, especially in training-free settings: 1) purely latent reasoning broadens the search distribution by maintaining multiple implicit paths, which diffuses probability mass, introduces noise, and impedes convergence to a single high-confidence solution, thereby hurting accuracy; and 2) overthinking persists even without explicit text, wasting tokens and degrading efficiency. To address these issues, we introduce SwiReasoning, a training-free framework for LLM reasoning which features two key innovations: 1) SwiReasoning dynamically switches between explicit and latent reasoning, guided by block-wise confidence estimated from entropy trends in next-token distributions, to balance exploration and exploitation and promote timely convergence. 2) By limiting the maximum number of thinking-block switches, SwiReasoning curbs overthinking and improves token efficiency across varying problem difficulties. On widely used mathematics and STEM benchmarks, SwiReasoning consistently improves average accuracy by 1.5%-2.8% across reasoning LLMs of different model families and scales. Furthermore, under constrained budgets, SwiReasoning improves average token efficiency by 56%-79%, with larger gains as budgets tighten.
Problem

Research questions and friction points this paper is trying to address.

Addresses challenges in latent reasoning for large language models
Improves accuracy by balancing explicit and latent reasoning dynamically
Enhances token efficiency by preventing overthinking in reasoning processes
Innovation

Methods, ideas, or system contributions that make the work stand out.

Switches dynamically between explicit and latent reasoning
Uses entropy trends to guide reasoning mode selection
Limits thinking-block switches to curb overthinking
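The switching rule listed above can be sketched as a simple controller: estimate confidence from the trend of next-token entropy, flip between latent and explicit modes when the trend reverses, and stop switching once a budget is exhausted. The paper does not publish pseudocode here, so the function names, the half-window trend estimate, and the default parameters below are illustrative assumptions, not the authors' implementation.

```python
import math


def entropy(probs):
    """Shannon entropy of a next-token probability distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)


def swireasoning_sketch(prob_stream, max_switches=4, window=8):
    """Hypothetical mode controller yielding "latent" or "explicit" per step.

    prob_stream yields next-token probability distributions. We start in
    latent mode, switch to explicit when entropy trends downward over the
    last `window` steps (confidence rising, so commit to one chain), switch
    back when it trends upward, and freeze the mode after `max_switches`
    thinking-block switches to curb overthinking.
    """
    mode, switches, history = "latent", 0, []
    for probs in prob_stream:
        history.append(entropy(probs))
        if len(history) >= window and switches < max_switches:
            recent = history[-window:]
            half = window // 2
            # Crude trend estimate: mean of the newer half minus the older half.
            trend = sum(recent[half:]) / half - sum(recent[:half]) / half
            if mode == "latent" and trend < 0:
                mode, switches = "explicit", switches + 1
            elif mode == "explicit" and trend > 0:
                mode, switches = "latent", switches + 1
        yield mode
```

Feeding the controller a stream that starts near-uniform and then becomes sharply peaked makes it begin in latent mode and switch to explicit once entropy falls, which is the intended exploration-then-convergence behavior.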
👥 Authors
Dachuan Shi (Georgia Tech)
Abedelkadir Asi (Microsoft)
Keying Li (Microsoft)
Xiangchi Yuan (Georgia Tech)
Leyan Pan (Georgia Tech)
Wenke Lee (Georgia Tech)
Wen Xiao (Microsoft)