What Makes Reasoning Invalid: Echo Reflection Mitigation for Large Language Models

📅 2025-11-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work identifies and names a novel failure mode—“echo reflection”—in large language models (LLMs) during complex domain reasoning, wherein models mechanically reproduce earlier reasoning steps during reflection without incorporating new information or achieving deeper insight. To address this, we propose the Adaptive Entropy Policy Optimization (AEPO) framework, introducing (i) a reflection-aware information filtering mechanism and (ii) a cognitive-entropy-driven dynamic exploration strategy to suppress erroneous cognitive propagation and stimulate deep knowledge retrieval. AEPO is grounded in reinforcement learning and integrates quantified information flow modeling, a learnable filtering module, and an adaptive exploration–exploitation trade-off mechanism. Evaluated on multiple challenging reasoning benchmarks, AEPO significantly outperforms state-of-the-art RL-based methods, achieves new SOTA performance, and—crucially—systematically mitigates echo reflection for the first time, thereby substantially enhancing LLMs’ reflection quality and reasoning depth.

📝 Abstract
Large Language Models (LLMs) have demonstrated remarkable performance across a wide range of reasoning tasks. Recent methods have further improved LLM performance in complex mathematical reasoning. However, when extending these methods beyond mathematical reasoning to tasks involving complex domain-specific knowledge, we observe a consistent failure of LLMs to generate novel insights during the reflection stage. Instead of conducting genuine cognitive refinement, the model tends to mechanically reiterate earlier reasoning steps without introducing new information or perspectives, a phenomenon we refer to as "Echo Reflection". We attribute this behavior to two key defects: (1) uncontrollable information flow during response generation, which allows premature intermediate thoughts to propagate unchecked and distort final decisions; (2) insufficient exploration of internal knowledge during reflection, leading the model to repeat earlier findings rather than generate new cognitive insights. Building on these findings, we propose a novel reinforcement learning method termed Adaptive Entropy Policy Optimization (AEPO). The AEPO framework consists of two major components: (1) Reflection-aware Information Filtration, which quantifies the cognitive information flow and prevents the final answer from being affected by earlier flawed cognitive information; (2) Adaptive-Entropy Optimization, which dynamically balances exploration and exploitation across different reasoning stages, promoting both reflective diversity and answer correctness. Extensive experiments demonstrate that AEPO consistently achieves state-of-the-art performance over mainstream reinforcement learning baselines across diverse benchmarks.
Problem

Research questions and friction points this paper is trying to address.

Mitigating Echo Reflection where LLMs repeat reasoning without new insights
Addressing uncontrolled information flow that distorts final decisions
Solving insufficient knowledge exploration during reflection stages in LLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive Entropy Policy Optimization for reasoning enhancement
Reflection-aware Information Filtration to control cognitive flow
Adaptive-Entropy Optimization balances exploration and exploitation
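The stage-dependent exploration idea behind Adaptive-Entropy Optimization can be illustrated with a minimal sketch: weight a policy's entropy bonus more heavily during reflection tokens (to encourage diverse retrieval) than during final-answer tokens (to favor exploitation). The function names and coefficient values below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def token_entropy(probs):
    """Shannon entropy (in nats) of a token probability distribution."""
    p = np.clip(np.asarray(probs, dtype=float), 1e-12, 1.0)
    return float(-np.sum(p * np.log(p)))

def adaptive_entropy_bonus(probs, stage, reflect_coef=0.02, answer_coef=0.001):
    """Stage-dependent entropy bonus (hypothetical coefficients):
    larger during 'reflection' to push exploration, smaller during
    'answer' to keep the final decision sharp."""
    coef = reflect_coef if stage == "reflection" else answer_coef
    return coef * token_entropy(probs)

# Toy distributions over a 4-token vocabulary.
uniform = np.ones(4) / 4            # maximally uncertain: entropy = ln(4)
peaked = np.array([0.97, 0.01, 0.01, 0.01])  # confident: low entropy
```

In an RL fine-tuning loop, such a bonus would be added to the policy-gradient objective per token, so reflection segments are rewarded for staying diverse while answer segments are not, which is one plausible way to discourage the rote repetition the paper labels echo reflection.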
Chen He
University of Electronic Science and Technology of China
Xun Jiang
University of Electronic Science and Technology of China
Lei Wang
Salesforce AI Research
Hao Yang
Meituan
Chong Peng
Qingdao University
Machine Learning, Computer Vision
Peng Yan
Research Assistant of ZHAW, PhD student of UZH
Deep Learning, Transfer Learning, Intelligent Algorithms
Fumin Shen
University of Electronic Science and Technology of China
Xing Xu
School of Computer Science and Technology, Tongji University