🤖 AI Summary
This work identifies and names a novel LLM failure mode, "echo reflection," in which models performing complex domain-specific reasoning mechanically reproduce earlier reasoning steps during the reflection stage instead of incorporating new information or reaching deeper insight. To address it, the authors propose the Adaptive Entropy Policy Optimization (AEPO) framework, which introduces (i) a reflection-aware information filtering mechanism and (ii) a cognitive-entropy-driven dynamic exploration strategy, jointly suppressing the propagation of erroneous cognitive information and stimulating deeper knowledge retrieval. AEPO is grounded in reinforcement learning and integrates quantified information-flow modeling, a learnable filtering module, and an adaptive exploration–exploitation trade-off mechanism. Evaluated on multiple challenging reasoning benchmarks, AEPO significantly outperforms state-of-the-art RL-based methods, achieves new SOTA performance, and, crucially, systematically mitigates echo reflection for the first time, substantially enhancing LLMs' reflection quality and reasoning depth.
📝 Abstract
Large Language Models (LLMs) have demonstrated remarkable performance across a wide range of reasoning tasks, and recent methods have further improved their performance on complex mathematical reasoning. However, when these methods are extended beyond mathematical reasoning to tasks involving complex domain-specific knowledge, we observe a consistent failure of LLMs to generate novel insights during the reflection stage. Instead of conducting genuine cognitive refinement, the model tends to mechanically reiterate earlier reasoning steps without introducing new information or perspectives, a phenomenon we refer to as "Echo Reflection". We attribute this behavior to two key defects: (1) uncontrollable information flow during response generation, which allows premature intermediate thoughts to propagate unchecked and distort final decisions; and (2) insufficient exploration of internal knowledge during reflection, which leads the model to repeat earlier findings rather than generate new cognitive insights. Building on these findings, we propose a novel reinforcement learning method termed Adaptive Entropy Policy Optimization (AEPO). The AEPO framework consists of two major components: (1) Reflection-aware Information Filtration, which quantifies the cognitive information flow and prevents erroneous early cognitive information from contaminating the final answer; and (2) Adaptive-Entropy Optimization, which dynamically balances exploration and exploitation across different reasoning stages, promoting both reflective diversity and answer correctness. Extensive experiments demonstrate that AEPO consistently achieves state-of-the-art performance over mainstream reinforcement learning baselines across diverse benchmarks.
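The abstract's core Adaptive-Entropy idea, dynamically trading off exploration and exploitation based on policy entropy, can be sketched with a toy adaptive entropy-bonus rule. This is a minimal illustration under our own assumptions, not the paper's actual algorithm: the helper names (`policy_entropy`, `adapt_entropy_coef`), the target-entropy formulation, and all constants are hypothetical.

```python
import math

def policy_entropy(probs):
    """Shannon entropy (in nats) of a token probability distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def adapt_entropy_coef(coef, current_entropy, target_entropy, lr=0.1):
    """Toy adaptive rule: raise the exploration bonus when entropy falls
    below a target (encouraging diverse reflection), and lower it when
    entropy is above the target (favoring exploitation and answer
    correctness). Clamped at zero."""
    return max(0.0, coef + lr * (target_entropy - current_entropy))

# A peaked (low-entropy) distribution, as in repetitive "echo" reflection,
# pushes the coefficient up, strengthening the exploration incentive.
coef = 0.01
peaked = [0.97, 0.01, 0.01, 0.01]  # entropy ~0.17 nats
coef = adapt_entropy_coef(coef, policy_entropy(peaked), target_entropy=1.0)
```

In an actual RL training loop this coefficient would scale an entropy bonus added to the policy-gradient objective; the paper's method presumably conditions the trade-off on the reasoning stage as well, which this sketch omits.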