Thinking on the Fly: Test-Time Reasoning Enhancement via Latent Thought Policy Optimization

📅 2025-10-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) show brittle implicit (latent) reasoning on out-of-distribution complex reasoning tasks. To address this, the paper proposes Latent Thought Policy Optimization (LTPO), a parameter-free, test-time-only reasoning enhancement framework. LTPO treats intermediate-layer latent thought vectors as optimizable dynamic states and uses the frozen model's own output distribution to compute confidence-based intrinsic rewards, which are maximized via instance-level online policy gradients. The authors present it as the first method to steer reasoning paths in latent space without text generation or external supervision. Evaluated on five reasoning benchmarks, LTPO matches or surpasses strong baselines; most notably, on the highly challenging AIME benchmarks, where existing latent reasoning baselines collapse to near-zero accuracy, LTPO delivers substantial improvements, demonstrating markedly better robustness on complex reasoning.

📝 Abstract
Recent advancements in Large Language Models (LLMs) have shifted from explicit Chain-of-Thought (CoT) reasoning to more efficient latent reasoning, where intermediate thoughts are represented as vectors rather than text. However, latent reasoning can be brittle on challenging, out-of-distribution tasks where robust reasoning is most critical. To overcome these limitations, we introduce Latent Thought Policy Optimization (LTPO), a parameter-free framework that enhances LLM reasoning entirely at test time, without requiring model parameter updates. LTPO treats intermediate latent "thought" vectors as dynamic parameters that are actively optimized for each problem instance. It employs an online policy gradient method guided by an intrinsic, confidence-based reward signal computed directly from the frozen LLM's own output distributions, eliminating the need for external supervision or expensive text generation during optimization. Extensive experiments on five reasoning benchmarks show that LTPO not only matches or surpasses strong baselines on standard tasks but also demonstrates remarkable robustness where others fail. Most notably, on highly challenging AIME benchmarks where existing latent reasoning baselines collapse to near-zero accuracy, LTPO delivers substantial improvements, showcasing a unique capability for complex reasoning.
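The abstract describes the core loop: sample perturbations of a latent "thought" vector, score each with a confidence reward computed from the frozen model's own output distribution, and update the latent via an online policy gradient, leaving all model weights untouched. A minimal toy sketch of that idea, assuming a Gaussian perturbation policy, a REINFORCE-style update, and max-softmax-probability as the confidence reward (the names `frozen_head`, `confidence_reward`, and `ltpo_optimize`, and the linear stand-in for the model, are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy stand-in for a frozen LLM output head: maps an 8-dim latent
# thought vector to logits over 4 candidate answers. Never updated.
W = rng.normal(size=(4, 8))

def frozen_head(z):
    # Frozen model's output logits for latent thought z.
    return W @ z

def confidence_reward(logits):
    # Intrinsic reward: probability mass on the most likely answer,
    # read off the frozen model's own output distribution.
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return p.max()

def ltpo_optimize(mu, steps=200, sigma=0.1, lr=0.5, samples=16):
    # Instance-level online policy gradient (REINFORCE with a mean
    # baseline): sample perturbed thoughts, reward each by confidence,
    # and move the latent mean toward high-reward samples.
    for _ in range(steps):
        eps = rng.normal(size=(samples, mu.size)) * sigma
        rewards = np.array(
            [confidence_reward(frozen_head(mu + e)) for e in eps]
        )
        advantages = rewards - rewards.mean()
        grad = (advantages[:, None] * eps).mean(axis=0) / sigma**2
        mu = mu + lr * grad  # only the latent moves; weights stay frozen
    return mu

z0 = rng.normal(size=8)          # initial latent thought for one instance
z_opt = ltpo_optimize(z0.copy()) # test-time optimization, no training
```

After optimization, `confidence_reward(frozen_head(z_opt))` should be higher than for `z0`: the update sharpens the frozen model's output distribution for this single instance, which is the sense in which the method enhances reasoning "on the fly".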
Problem

Research questions and friction points this paper is trying to address.

Enhancing latent reasoning robustness for out-of-distribution tasks
Optimizing intermediate thought vectors without parameter updates
Improving LLM reasoning using intrinsic confidence-based rewards
Innovation

Methods, ideas, or system contributions that make the work stand out.

Optimizes latent thought vectors dynamically per instance
Uses policy gradient with intrinsic confidence-based reward
Enhances reasoning without model updates or supervision