Interleaved Reasoning for Large Language Models via Reinforcement Learning

📅 2025-05-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the high time-to-first-token (TTFT) and inference inefficiency caused by lengthy chain-of-thought (CoT) reasoning, this paper proposes interleaved reasoning: a reinforcement learning (RL) training paradigm that alternates thinking and answering steps for multi-hop question answering. The authors observe that large language models (LLMs) already possess an inherent interleaved-reasoning ability and strengthen it with a lightweight, tool-free, rule-based reward that conditionally credits correct intermediate steps. Combined with PPO, GRPO, and REINFORCE++ optimization, this reward steers the policy toward correct reasoning paths. Experiments across five datasets show an average TTFT reduction of over 80% and up to a 19.3% improvement in Pass@1 accuracy, with strong generalization to challenging reasoning benchmarks such as MATH, GPQA, and MMLU despite training only on question-answering and logical-reasoning data.

📝 Abstract
Long chain-of-thought (CoT) reasoning significantly enhances large language models' (LLMs') reasoning capabilities. However, the extensive reasoning traces lead to inefficiencies and an increased time-to-first-token (TTFT). We propose a novel training paradigm that uses reinforcement learning (RL) to guide reasoning LLMs to interleave thinking and answering for multi-hop questions. We observe that models inherently possess the ability to perform interleaved reasoning, which can be further enhanced through RL. We introduce a simple yet effective rule-based reward to incentivize correct intermediate steps, which guides the policy model toward correct reasoning paths by leveraging intermediate signals generated during interleaved reasoning. Extensive experiments conducted across five diverse datasets and three RL algorithms (PPO, GRPO, and REINFORCE++) demonstrate consistent improvements over traditional think-answer reasoning, without requiring external tools. Specifically, our approach reduces TTFT by over 80% on average and improves Pass@1 accuracy by up to 19.3%. Furthermore, our method, trained solely on question-answering and logical-reasoning datasets, exhibits strong generalization to complex reasoning datasets such as MATH, GPQA, and MMLU. Additionally, we conduct an in-depth analysis that reveals several valuable insights into conditional reward modeling.
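The "conditional" intermediate reward the abstract alludes to could, in spirit, look like the following sketch. All names, the exact-match rule, and the gating-on-final-correctness condition are illustrative assumptions, not the paper's actual implementation:

```python
def conditional_intermediate_reward(intermediate_answers, gold_sub_answers,
                                    final_correct, partial_weight=0.5):
    """Rule-based reward sketch: grant partial credit for correct
    intermediate answers, but only when the final answer is correct
    (a guess at the 'conditional' gating the abstract mentions)."""
    if not final_correct:
        return 0.0  # no intermediate credit riding on a wrong final answer
    if not gold_sub_answers:
        return 1.0  # final-answer reward only
    # Fraction of intermediate answers matching the gold sub-answers
    # under a simple normalized exact-match rule.
    matches = sum(
        pred.strip().lower() == gold.strip().lower()
        for pred, gold in zip(intermediate_answers, gold_sub_answers)
    )
    frac = matches / len(gold_sub_answers)
    return 1.0 + partial_weight * frac  # final reward plus intermediate bonus
```

Gating the bonus on final correctness avoids rewarding trajectories that produce plausible intermediate steps but a wrong conclusion, which is one way such a rule-based signal could stay aligned with task accuracy.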
Problem

Research questions and friction points this paper is trying to address.

Enhance LLM reasoning efficiency via interleaved thinking and answering
Reduce time-to-first-token (TTFT) in multi-hop question reasoning
Improve accuracy and generalization in complex reasoning tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reinforcement learning enhances interleaved reasoning
Rule-based reward incentivizes correct intermediate steps
Reduces time-to-first-token by over 80%
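The interleaved format behind these contributions can be pictured as a trace that alternates thinking and answering segments, so the first sub-answer is emitted long before the full trace completes (hence the TTFT reduction). A minimal parsing sketch, with tag names assumed purely for illustration:

```python
import re

def split_interleaved(trace):
    """Split an interleaved trace into (kind, text) segments.
    The <think>/<answer> tag names are illustrative assumptions."""
    pattern = re.compile(r"<(think|answer)>(.*?)</\1>", re.DOTALL)
    return [(m.group(1), m.group(2).strip()) for m in pattern.finditer(trace)]

# A toy two-hop trace: each sub-answer appears as soon as its
# corresponding thinking step finishes.
trace = ("<think>First find the capital of France.</think>"
         "<answer>Paris</answer>"
         "<think>Now find the river that runs through it.</think>"
         "<answer>The Seine</answer>")

segments = split_interleaved(trace)
first_answer = next(text for kind, text in segments if kind == "answer")
```

In the traditional think-then-answer format, the first user-visible token only arrives after the entire reasoning trace; here `first_answer` is available after the first thinking segment.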