Mindscape-Aware Retrieval Augmented Generation for Improved Long Context Understanding

📅 2025-12-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing RAG systems lack global semantic modeling, which hinders the integration of evidence dispersed across long documents and impedes cross-paragraph reasoning. To address this, we propose the first RAG framework explicitly endowed with "mindscape" awareness, importing the psychological concept of mental maps into retrieval-augmented generation. Our approach constructs a document-level semantic representation via hierarchical summarization and uses it to jointly guide retrieval and generation through (i) mindscape-conditioned query embeddings and (ii) a global-local hybrid attention mechanism; the paper further contributes (iii) a bilingual long-context evaluation framework. Extensive experiments on diverse long-text and bilingual benchmarks demonstrate consistent gains over state-of-the-art RAG methods. Ablation studies and qualitative analysis confirm that the framework unifies fine-grained local details with holistic semantic structure, substantially improving evidence aggregation and human-like multi-hop reasoning.


📝 Abstract
Humans understand long and complex texts by relying on a holistic semantic representation of the content. This global view helps organize prior knowledge, interpret new information, and integrate evidence dispersed across a document, an ability studied in psychology as humans' mindscape-aware capability. Current Retrieval-Augmented Generation (RAG) systems lack such guidance and therefore struggle with long-context tasks. In this paper, we propose Mindscape-Aware RAG (MiA-RAG), the first approach that equips LLM-based RAG systems with explicit global context awareness. MiA-RAG builds a mindscape through hierarchical summarization and conditions both retrieval and generation on this global semantic representation. This enables the retriever to form enriched query embeddings and the generator to reason over retrieved evidence within a coherent global context. We evaluate MiA-RAG across diverse long-context and bilingual benchmarks for evidence-based understanding and global sense-making. It consistently surpasses baselines, and further analysis shows that it aligns local details with a coherent global representation, enabling more human-like long-context retrieval and reasoning.
Problem

Research questions and friction points this paper is trying to address.

Enhancing long-context understanding in RAG systems
Providing global semantic guidance for retrieval and generation
Aligning local details with coherent holistic representation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Builds mindscape via hierarchical summarization
Conditions retrieval and generation on global semantic representation
Enables enriched query embeddings and coherent reasoning
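The two core steps above can be sketched in a few lines. This is a hypothetical illustration, not the paper's implementation: `summarize` is a toy stand-in (first sentence) for the LLM summarizer, and the `[MINDSCAPE]`/`[QUERY]` prompt format is an assumed way of conditioning the retriever's query embedding on the global summary.

```python
def summarize(text: str) -> str:
    # Toy stand-in: keep the first sentence; a real system would call an LLM.
    return text.split(". ")[0].strip().rstrip(".") + "."

def build_mindscape(chunks: list[str], fanout: int = 2) -> str:
    # Hierarchical summarization: summarize each chunk, then recursively
    # summarize groups of summaries until one document-level
    # representation (the "mindscape") remains.
    level = [summarize(c) for c in chunks]
    while len(level) > 1:
        level = [summarize(" ".join(level[i:i + fanout]))
                 for i in range(0, len(level), fanout)]
    return level[0]

def enriched_query(query: str, mindscape: str) -> str:
    # Mindscape-conditioned query: prepend the global summary so the
    # retriever embeds the query within document-level context.
    return f"[MINDSCAPE] {mindscape} [QUERY] {query}"

chunks = [
    "Alice founded the lab in 2010. She studied retrieval.",
    "The lab later moved to Berlin. Funding grew quickly.",
    "Bob joined as a co-director. He focused on generation.",
]
mindscape = build_mindscape(chunks)
query = enriched_query("Who founded the lab?", mindscape)
```

The same mindscape string would also be prepended to the generator's context, so retrieved passages are interpreted against the global summary rather than in isolation.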