🤖 AI Summary
Retrieval-augmented generation (RAG) suffers from performance degradation in long-context settings due to unconstrained entropy growth and attention dilution. Method: This paper introduces an entropy-engineering perspective for modeling contextual uncertainty in RAG, proposing the balanced entropy-engineered RAG (BEE-RAG) framework grounded in the entropy-invariance principle. BEE-RAG decouples attention sensitivity from context length, introduces a zero-shot multi-importance estimation strategy, and dynamically optimizes the entropy-balancing factor via parameter-efficient fine-tuning. It reconstructs the attention mechanism to enable adaptive contextual entropy balancing. Contribution/Results: Experiments across multiple RAG benchmarks demonstrate that BEE-RAG effectively mitigates performance decay under long contexts, achieving significant improvements in generation quality and stability. The method exhibits strong generalization capability and deployment efficiency, offering a principled, scalable solution to entropy-related challenges in RAG.
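The "unconstrained entropy growth" claim can be illustrated numerically: for a fixed score distribution, the Shannon entropy of the softmax attention weights grows with the number of attended tokens, so each retrieved token receives a shrinking share of attention mass. The sketch below (not from the paper; it uses i.i.d. Gaussian scores purely as a stand-in for real attention logits) shows this effect:

```python
import numpy as np

def attention_entropy(logits):
    """Shannon entropy (nats) of the softmax distribution over `logits`."""
    p = np.exp(logits - logits.max())  # subtract max for numerical stability
    p /= p.sum()
    return float(-(p * np.log(p)).sum())

rng = np.random.default_rng(0)
entropies = {}
for n in (128, 1024, 8192):
    # i.i.d. Gaussian scores as a proxy for pre-softmax attention logits
    logits = rng.normal(0.0, 1.0, size=n)
    entropies[n] = attention_entropy(logits)

# For i.i.d. scores the entropy grows roughly like log(n) minus a constant,
# so longer retrieval contexts dilute attention over more tokens.
```

This is the "attention dilution" the summary refers to: with 64x more context, the attention distribution is roughly log(64) ≈ 4.2 nats flatter.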
📄 Abstract
With the rapid advancement of large language models (LLMs), retrieval-augmented generation (RAG) has emerged as a critical approach to supplement the inherent knowledge limitations of LLMs. However, due to the typically large volume of retrieved information, RAG tends to operate with long context lengths. From the perspective of entropy engineering, we identify unconstrained entropy growth and attention dilution caused by long retrieval contexts as significant factors degrading RAG performance. In this paper, we propose the balanced entropy-engineered RAG (BEE-RAG) framework, which improves the adaptability of RAG systems to varying context lengths through the principle of entropy invariance. By leveraging balanced context entropy to reformulate attention dynamics, BEE-RAG separates attention sensitivity from context length, ensuring a stable entropy level. Building upon this, we introduce a zero-shot inference strategy for multi-importance estimation and a parameter-efficient adaptive fine-tuning mechanism to obtain the optimal balancing factor for different settings. Extensive experiments across multiple RAG tasks demonstrate the effectiveness of BEE-RAG.