RL of Thoughts: Navigating LLM Reasoning with Inference-time Reinforcement Learning

📅 2025-05-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) are constrained by fixed autoregressive decoding, which limits their adaptability to diverse and complex reasoning tasks. To address this, we propose a lightweight, inference-time reinforcement learning navigator that dynamically constructs task-aware logical structures. Our method introduces a learnable logic-block composition mechanism, requiring fewer than 3K parameters, that enables sub-10B LLMs to reach reasoning performance comparable to 100B-scale models. The navigator is optimized with PPO and composes five human-cognition-inspired logic blocks, enabling on-the-fly structural reasoning without any LLM fine-tuning. Evaluated on rigorous benchmarks, including AIME, MATH, and GPQA, our approach improves on established inference-time techniques by up to 13.4%. It also demonstrates strong cross-model generalization (across GPT, Llama, Qwen, and DeepSeek) and cross-task robustness. The implementation is publicly available.

📝 Abstract
Despite rapid advancements in large language models (LLMs), their token-level autoregressive nature constrains their complex reasoning capabilities. To enhance LLM reasoning, inference-time techniques such as Chain/Tree/Graph-of-Thought(s) successfully improve performance and are fairly cost-effective, as they guide reasoning through sophisticated logical structures without modifying the LLM's parameters. However, these manually predefined, task-agnostic frameworks are applied uniformly across diverse tasks and lack adaptability. To address this, we propose RL-of-Thoughts (RLoT), which trains a lightweight navigator model with reinforcement learning (RL) to adaptively enhance LLM reasoning at inference time. Specifically, we design five basic logic blocks from the perspective of human cognition. During reasoning, the trained RL navigator dynamically selects suitable logic blocks and combines them into task-specific logical structures according to problem characteristics. Experiments across multiple reasoning benchmarks (AIME, MATH, GPQA, etc.) with multiple LLMs (GPT, Llama, Qwen, and DeepSeek) show that RLoT outperforms established inference-time techniques by up to 13.4%. Remarkably, with fewer than 3K parameters, our RL navigator makes sub-10B LLMs comparable to 100B-scale counterparts. Moreover, the RL navigator demonstrates strong transferability: a model trained on one specific LLM-task pair can effectively generalize to unseen LLMs and tasks. Our code is open-source at https://anonymous.4open.science/r/RL-LLM-Reasoning-1A30 for reproducibility.
Problem

Research questions and friction points this paper is trying to address.

Enhancing LLM reasoning with adaptive inference-time techniques
Overcoming task-agnostic limitations in predefined reasoning frameworks
Improving reasoning efficiency across diverse tasks and LLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses reinforcement learning for adaptive reasoning
Dynamically combines logic blocks for tasks
Lightweight navigator enhances LLM performance
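The innovation above can be pictured with a minimal sketch: a tiny policy network that, given a feature vector summarizing the current reasoning state, selects one of five logic blocks to apply next. All names, dimensions, and block labels here are illustrative assumptions, not the paper's actual design; the real navigator is trained with PPO on LLM reasoning trajectories, which this sketch omits.

```python
import numpy as np

# Illustrative stand-ins for the paper's five human-cognition-inspired
# logic blocks (placeholder names, not the paper's exact definitions).
LOGIC_BLOCKS = ["decompose", "one_step_reason", "debate", "refine", "terminate"]

class NavigatorPolicy:
    """Tiny two-layer MLP policy mapping a reasoning-state feature
    vector to a distribution over logic blocks. With these dimensions
    it stays well under the paper's sub-3K parameter budget. Weights
    here are random; in RLoT they would be learned via PPO."""

    def __init__(self, state_dim=16, hidden=32, n_blocks=len(LOGIC_BLOCKS), seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.1, (state_dim, hidden))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0.0, 0.1, (hidden, n_blocks))
        self.b2 = np.zeros(n_blocks)

    def n_params(self):
        return sum(p.size for p in (self.W1, self.b1, self.W2, self.b2))

    def act(self, state):
        # Forward pass: tanh hidden layer, then softmax over logic blocks.
        h = np.tanh(state @ self.W1 + self.b1)
        logits = h @ self.W2 + self.b2
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        return LOGIC_BLOCKS[int(np.argmax(probs))], probs
```

At inference time one would repeatedly encode the LLM's partial solution into a state vector, call `act` to pick the next logic block, and stop when a terminating block is chosen; the state encoder and block prompts are left abstract here.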
Qianyue Hao
PhD Student, Department of Electronic Engineering, Tsinghua University
Reinforcement Learning, Large Language Models
Sibo Li
Department of Electronic Engineering, BNRist, Tsinghua University
Jian Yuan
Department of Electronic Engineering, BNRist, Tsinghua University
Yong Li
Department of Electronic Engineering, BNRist, Tsinghua University