No Free Lunch: Rethinking Internal Feedback for LLM Reasoning

📅 2025-06-20
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenge of enhancing large language models’ (LLMs) reasoning capabilities without external supervision. We propose Reinforcement Learning with Intrinsic Feedback (RLIF), a novel paradigm that leverages computable internal signals—such as token/trajectory entropy and self-certainty—as unsupervised reward proxies, augmented by weight interpolation analysis to diagnose model degradation. Theoretically, we establish partial equivalences among diverse intrinsic objectives for the first time. Empirically, RLIF substantially improves mathematical reasoning at the base-model stage—matching or even surpassing supervised RLVR—yet its gains vanish sharply after instruction tuning; intrinsic feedback yields negligible improvement on already-aligned models. This study provides critical empirical evidence and practical guidance on the effectiveness boundaries of intrinsic signals in LLM post-training.

📝 Abstract
Reinforcement learning has emerged as a powerful paradigm for post-training large language models (LLMs) to improve reasoning. Approaches like Reinforcement Learning from Human Feedback (RLHF) and Reinforcement Learning with Verifiable Rewards (RLVR) have shown strong results, but they require extensive external supervision. We investigate an alternative class of methods, Reinforcement Learning from Internal Feedback (RLIF), which relies solely on intrinsic model-derived signals instead of external rewards. In particular, we leverage unsupervised reward proxies such as token-level entropy, trajectory-level entropy, and self-certainty. Our theoretical analysis shows these internal objectives are partially equivalent, and we empirically evaluate various RLIF strategies on challenging math reasoning benchmarks. Experimental results demonstrate that RLIF can boost the reasoning performance of base LLMs in the early phase of training, matching or surpassing RLVR techniques on these tasks. However, as training progresses, performance degrades, eventually falling below that of the model before training. Moreover, we find that RLIF yields little improvement for instruction-tuned models, indicating diminishing returns of intrinsic feedback once an LLM is already instruction-tuned. We further analyze this limitation by mixing model weights and explain the reasons for RLIF's training behavior, providing practical guidelines for integrating internal feedback signals into LLM training. We hope our analysis of internal feedback will inform more principled and effective strategies for LLM post-training.
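The three intrinsic signals named in the abstract can all be read off a model's per-step logits. A minimal sketch of how they might be computed is below; the function names and the exact self-certainty definition (mean log-probability of the sampled tokens) are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the vocabulary axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def token_entropy(logits):
    """Shannon entropy of the next-token distribution at each step; shape (T,)."""
    p = softmax(logits)
    return -(p * np.log(p + 1e-12)).sum(axis=-1)

def trajectory_entropy(logits):
    """One scalar per trajectory: mean of the per-token entropies."""
    return token_entropy(logits).mean()

def self_certainty(logits, chosen_ids):
    """Mean log-probability the model assigns to its own sampled tokens
    (higher = more certain). One plausible definition; the paper's may differ."""
    p = softmax(logits)
    idx = np.arange(len(chosen_ids))
    return np.mean(np.log(p[idx, chosen_ids] + 1e-12))
```

Note that all three proxies are computable from a single forward pass, which is what makes RLIF supervision-free: no verifier or human labels are needed to produce a scalar reward.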
Problem

Research questions and friction points this paper is trying to address.

Exploring internal feedback for LLM reasoning without external supervision
Evaluating unsupervised reward proxies in Reinforcement Learning from Internal Feedback
Analyzing limitations of internal feedback for instruction-tuned LLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reinforcement Learning from Internal Feedback (RLIF)
Unsupervised reward proxies like token entropy
Intrinsic model-derived signals replace external rewards
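To make the "intrinsic signals replace external rewards" idea concrete, here is a toy REINFORCE-style loop where the reward is the model's own self-certainty rather than a verifier score. The stateless logit-table "policy" and all names are illustrative assumptions standing in for an LLM, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
V, T = 6, 8                  # toy vocabulary size and trajectory length
theta = rng.normal(size=V)   # stateless policy logits (stand-in for an LLM)
lr = 0.1

def policy(theta):
    p = np.exp(theta - theta.max())
    return p / p.sum()

for step in range(200):
    p = policy(theta)
    toks = rng.choice(V, size=T, p=p)
    # Intrinsic reward: self-certainty = mean log-prob of the sampled tokens.
    reward = np.log(p[toks] + 1e-12).mean()
    # REINFORCE: grad of log pi(tok) for a softmax policy is one_hot(tok) - p.
    grad = np.zeros(V)
    for t in toks:
        onehot = np.zeros(V)
        onehot[t] = 1.0
        grad += (onehot - p) * reward
    theta += lr * grad / T

final_p = policy(theta)
```

Maximizing self-certainty rewards the policy for concentrating probability mass, which illustrates why such objectives can sharpen a base model early on and then degrade it as the distribution collapses, consistent with the training dynamics the paper reports.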
👥 Authors
Yanzhi Zhang
Zhongguancun Academy, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, University of Chinese Academy of Sciences
Zhaoxi Zhang
Peking University, Zhongguancun Institute of Artificial Intelligence
Haoxiang Guan
Zhongguancun Academy
Yilin Cheng
Zhongguancun Academy
Yitong Duan
Institute for Interdisciplinary Information Sciences, Tsinghua University
Chen Wang
Zhongguancun Academy
Yue Wang
Zhongguancun Academy, Zhongguancun Institute of Artificial Intelligence
Shuxin Zheng
Deputy Director, Zhongguancun Institute of Artificial Intelligence
Jiyan He
University of Science and Technology of China