🤖 AI Summary
Large language models (LLMs) suffer from repetitive reasoning patterns and entrapment in local optima during multi-step reasoning, primarily due to sparse extrinsic rewards and insufficient exploration.
Method: This paper proposes an intrinsic-motivation-based exploration enhancement mechanism. Its core innovation is a lightweight "Coin Flipping Network" (CFN) that dynamically generates intrinsic rewards by jointly estimating pseudo-counts and the associated epistemic uncertainty, thereby balancing novelty-driven exploration with task-oriented learning. Integrated into policy-optimization algorithms such as GRPO within a reinforcement learning framework, the method explicitly tracks how thoroughly reasoning trajectories have been explored.
Contribution/Results: Experiments demonstrate substantial improvements on complex reasoning benchmarks: the approach yields higher-quality, more diverse chain-of-thought (CoT) generations, consistently escapes suboptimal solutions, and enhances both robustness and generalization in LLM-based reasoning—offering a principled pathway toward more reliable and adaptive reasoning systems.
📝 Abstract
Reinforcement Learning (RL) has become a compelling way to strengthen the multi-step reasoning ability of Large Language Models (LLMs). However, prevalent RL paradigms still lean on sparse outcome-based rewards and limited exploration, which often drives LLMs toward repetitive and suboptimal reasoning patterns. In this paper, we study the central question of how to design exploration for LLM reasoning and introduce MERCI (Motivating Exploration in LLM Reasoning with Count-based Intrinsic Rewards), a novel RL algorithm that augments policy optimization with a principled intrinsic reward. Building on the idea of count-based exploration, MERCI leverages a lightweight Coin Flipping Network (CFN) to estimate pseudo-counts, and thereby the epistemic uncertainty, over reasoning trajectories, and converts them into an intrinsic reward that values novelty while preserving the learning signal from task rewards. We integrate MERCI into advanced RL frameworks such as Group Relative Policy Optimization (GRPO). Experiments on complex reasoning benchmarks demonstrate that MERCI encourages richer and more varied chains of thought, significantly improves performance over strong baselines, and helps the policy escape local routines to discover better solutions. These results indicate that targeted intrinsic motivation can make exploration reliable for language-model reasoning.
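To make the count-based intuition concrete, below is a minimal tabular sketch of the coin-flipping pseudo-count idea. This is an assumption-laden illustration, not the paper's implementation: the actual method trains a neural Coin Flipping Network to regress random ±1 labels over reasoning trajectories, whereas here the network is replaced by an exact per-state running mean (the value such a regressor converges to). The class name, dimension `dim`, and reward scale `beta` are all illustrative.

```python
# Sketch of a count-based intrinsic reward via random coin flips.
# ASSUMPTION: we use a tabular running mean in place of the paper's
# learned Coin Flipping Network; names and parameters are hypothetical.
import numpy as np
from collections import defaultdict

class CoinFlipCounter:
    def __init__(self, dim=64, beta=1.0, seed=0):
        self.dim = dim    # number of independent +/-1 targets per state
        self.beta = beta  # intrinsic-reward scale
        self.rng = np.random.default_rng(seed)
        self.flip_sum = defaultdict(lambda: np.zeros(dim))
        self.visits = defaultdict(int)

    def observe(self, state):
        # Each visit draws a fresh Rademacher (+/-1) vector for the state.
        flips = self.rng.choice([-1.0, 1.0], size=self.dim)
        self.flip_sum[state] += flips
        self.visits[state] += 1

    def intrinsic_reward(self, state):
        # The mean flip vector m satisfies E[||m||^2] = dim / n(state),
        # so ||m|| / sqrt(dim) ~ 1 / sqrt(n): large for novel states and
        # decaying toward zero as a state is revisited.
        if self.visits[state] == 0:
            return self.beta  # never visited: maximal novelty bonus
        m = self.flip_sum[state] / self.visits[state]
        return self.beta * float(np.linalg.norm(m)) / np.sqrt(self.dim)
```

In use, a trainer would call `observe` on each (hashed) reasoning state it visits and add `intrinsic_reward` to the task reward before policy optimization; a state seen once gets a bonus near 1, while a state seen 100 times gets roughly 0.1, which is the novelty-decay behavior count-based exploration relies on.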