ECHO: Entropy-Confidence Hybrid Optimization for Test-Time Reinforcement Learning

📅 2026-02-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenges of rollout collapse caused by high-entropy branches and premature policy sharpening due to early pseudo-label noise in test-time reinforcement learning. To mitigate these issues, the authors propose a dynamic branching mechanism that jointly leverages entropy and confidence: during rollouts, branch width is adaptively controlled by combining local entropy and ensemble confidence, complemented by online confidence-based pruning; during policy updates, confidence-aware clipping and an entropy-confidence hybrid advantage shaping are employed. This approach significantly enhances exploration efficiency and training robustness, yielding consistent performance gains across multiple mathematical and visual reasoning benchmarks while demonstrating superior generalization under limited rollout budgets.

📝 Abstract
Test-time reinforcement learning generates multiple candidate answers via repeated rollouts and performs online updates using pseudo-labels constructed by majority voting. To reduce overhead and improve exploration, prior work introduces tree-structured rollouts, which share reasoning prefixes and branch at key nodes to improve sampling efficiency. However, this paradigm still faces two challenges: (1) high-entropy branching can trigger rollout collapse, where the branching budget concentrates on a few trajectories with consecutive high-entropy segments, rapidly reducing the number of effective branches; (2) early pseudo-labels are noisy and biased, which can induce self-reinforcing overfitting, causing the policy to sharpen prematurely and suppress exploration. To address these issues, we propose Entropy-Confidence Hybrid Group Relative Policy Optimization (ECHO). During rollout, ECHO jointly leverages local entropy and group-level confidence to adaptively control branch width, and further introduces online confidence-based pruning to terminate persistently low-confidence branches, avoiding high-entropy traps and mitigating collapse. During policy updates, ECHO employs confidence-adaptive clipping and an entropy-confidence hybrid advantage shaping approach to enhance training robustness and mitigate early-stage bias. Experiments demonstrate that ECHO achieves consistent gains on multiple mathematical and visual reasoning benchmarks, and generalizes more effectively under a limited rollout budget.
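The rollout-time mechanism described in the abstract, blending local entropy with group-level confidence to set branch width and pruning persistently low-confidence branches, can be sketched as follows. This is a minimal illustrative sketch, not the paper's exact formulation: the function names (`branch_width`, `prune`), the linear blending rule with coefficient `alpha`, and all thresholds are assumptions.

```python
import math

def branch_width(token_probs, group_confidences, w_min=1, w_max=4, alpha=0.5):
    """Hypothetical sketch: choose how many branches to spawn at a node.

    High local entropy argues for more exploration, but low group-level
    confidence argues for restraint; blending the two keeps the branching
    budget from concentrating on consecutive high-entropy segments
    (rollout collapse).
    """
    # Local token entropy, normalized to [0, 1] by the max entropy
    # achievable over the distribution's support size.
    entropy = -sum(p * math.log(p) for p in token_probs if p > 0)
    max_entropy = math.log(len(token_probs))
    h = entropy / max_entropy if max_entropy > 0 else 0.0

    # Group-level confidence: mean confidence of rollouts sharing this prefix.
    c = sum(group_confidences) / len(group_confidences)

    # Hybrid score: entropy encourages branching, confidence gates it.
    score = alpha * h + (1 - alpha) * c
    return max(w_min, min(w_max, round(w_min + score * (w_max - w_min))))

def prune(branches, conf_threshold=0.2, patience=3):
    """Online confidence-based pruning: terminate a branch whose confidence
    stayed below the threshold for `patience` consecutive steps."""
    kept = []
    for b in branches:
        history = b["conf_history"]
        if len(history) < patience:
            kept.append(b)  # too early to judge this branch
        elif not all(c < conf_threshold for c in history[-patience:]):
            kept.append(b)  # at least one recent step was confident enough
    return kept
```

Under this sketch, a uniform (high-entropy) next-token distribution backed by a confident group yields the widest branching, while a peaked distribution with low group confidence collapses to a single continuation.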
Problem

Research questions and friction points this paper is trying to address.

test-time reinforcement learning
rollout collapse
high entropy
noisy pseudo-labels
premature policy sharpening
Innovation

Methods, ideas, or system contributions that make the work stand out.

test-time reinforcement learning
entropy-confidence hybrid optimization
adaptive branching
confidence-based pruning
advantage shaping
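The update-time contributions listed above, confidence-adaptive clipping and entropy-confidence hybrid advantage shaping, could look roughly like the sketch below. The multiplicative shaping form, the coefficients `beta` and `gamma`, and the idea of scaling the PPO-style clip range by confidence are illustrative assumptions, not the paper's stated equations.

```python
def shaped_advantage(advantage, entropy, confidence, beta=0.1, gamma=0.1):
    """Hypothetical entropy-confidence hybrid advantage shaping: amplify
    advantages backed by confident groups, and damp them on high-entropy
    steps where early pseudo-labels are most likely noisy."""
    return advantage * (1.0 + gamma * confidence) * (1.0 - beta * entropy)

def confidence_adaptive_clip(ratio, confidence, eps_base=0.2):
    """Sketch of confidence-adaptive clipping: low-confidence groups get a
    tighter clip range, so noisy pseudo-labels move the policy less."""
    eps = eps_base * confidence  # tighter clipping when confidence is low
    return max(1.0 - eps, min(1.0 + eps, ratio))
```

For example, a probability ratio of 1.5 would be clipped to 1.2 under full confidence but to 1.1 when group confidence is only 0.5, shrinking early updates driven by unreliable majority-vote labels.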