ComoRAG: A Cognitive-Inspired Memory-Organized RAG for Stateful Long Narrative Reasoning

📅 2025-08-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
Problem: Traditional RAG systems suffer from stateless, single-step retrieval, limiting their ability to model dynamic plot progression and evolving character relationships in long-form narratives.
Method: We propose a state-aware iterative reasoning framework that introduces a dynamic working memory and a global memory pool, enabling cyclical updating through exploratory evidence acquisition and historical knowledge integration. Our approach incorporates LLM-driven exploratory query generation, multi-turn retrieval, and explicit runtime narrative state modeling.
Contribution/Results: Evaluated on four long-narrative benchmarks exceeding 200K tokens, our method significantly outperforms strong baselines, achieving up to an 11% relative improvement, particularly excelling on complex questions requiring holistic narrative understanding. The framework endows RAG with human-like sustained reasoning capabilities.
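The cyclical update described above can be illustrated with a minimal, runnable sketch. The retriever is a toy keyword matcher and all function names (`generate_probing_queries`, `comorag_cycle`) are illustrative placeholders, not the authors' actual API; a real system would use an LLM for query generation and consolidation.

```python
class StubRetriever:
    """Toy keyword retriever standing in for a real dense/sparse retriever."""
    def __init__(self, passages):
        self.passages = passages

    def retrieve(self, query, k=2):
        # Rank passages by naive keyword overlap with the query.
        def score(p):
            return sum(w in p.lower() for w in query.lower().split())
        return sorted(self.passages, key=score, reverse=True)[:k]

def generate_probing_queries(question, memory_pool):
    # Placeholder for LLM-driven exploratory query generation:
    # here we simply re-probe with the question plus the latest memory entry.
    return [question] + [f"more context on: {m}" for m in memory_pool[-1:]]

def comorag_cycle(question, retriever, max_cycles=3):
    memory_pool = []  # global memory pool of consolidated evidence
    for _ in range(max_cycles):
        # 1. Devise new exploratory paths from the current memory state.
        probes = generate_probing_queries(question, memory_pool)
        # 2. Multi-turn retrieval along those paths.
        evidence = [p for q in probes for p in retriever.retrieve(q)]
        # 3. Consolidate: deduplicate new evidence into the pool.
        for p in evidence:
            if p not in memory_pool:
                memory_pool.append(p)
        # 4. Toy stopping rule: halt once the pool covers the question.
        pool_text = " ".join(memory_pool).lower()
        if all(w in pool_text for w in question.lower().split()):
            break
    return memory_pool

passages = [
    "Ann meets Bob in Paris.",
    "Bob later betrays Ann over the inheritance.",
    "A weather report for Tuesday.",
]
pool = comorag_cycle("why does bob betray ann", StubRetriever(passages))
```

The key difference from single-step RAG is that the memory pool persists across cycles, so each new probe is conditioned on what earlier retrieval already established.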

📝 Abstract
Narrative comprehension on long stories and novels has been a challenging domain, attributed to their intricate plotlines and entangled, often evolving relations among characters and entities. Given LLMs' diminished reasoning over extended context and high computational cost, retrieval-based approaches remain pivotal in practice. However, traditional RAG methods can fall short due to their stateless, single-step retrieval process, which often overlooks the dynamic nature of interconnected relations within long-range context. In this work, we propose ComoRAG, holding the principle that narrative reasoning is not a one-shot process, but a dynamic, evolving interplay between new evidence acquisition and past knowledge consolidation, analogous to human cognition when reasoning with memory-related signals in the brain. Specifically, when encountering a reasoning impasse, ComoRAG undergoes iterative reasoning cycles while interacting with a dynamic memory workspace. In each cycle, it generates probing queries to devise new exploratory paths, then integrates the retrieved evidence of new aspects into a global memory pool, thereby supporting the emergence of a coherent context for query resolution. Across four challenging long-context narrative benchmarks (200K+ tokens), ComoRAG outperforms strong RAG baselines with consistent relative gains of up to 11% over the strongest baseline. Further analysis reveals that ComoRAG is particularly advantageous for complex queries requiring global comprehension, offering a principled, cognitively motivated paradigm for retrieval-based long-context comprehension towards stateful reasoning. Our code is publicly released at https://github.com/EternityJune25/ComoRAG
Problem

Research questions and friction points this paper is trying to address.

Addresses narrative comprehension in long stories with complex plots
Overcomes stateless retrieval limitations in traditional RAG methods
Enables dynamic reasoning via iterative memory consolidation cycles
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic memory workspace for iterative reasoning cycles
Generates probing queries for exploratory paths
Integrates retrieved evidence into global memory pool
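The innovations above center on the global memory pool. A hedged sketch of what such a pool might look like is below: each cycle's probe and its evidence are consolidated into a memory unit that later cycles can read back as a fused context. The class and field names are assumptions for illustration, and the consolidation step (which the paper attributes to an LLM) is stubbed with simple string joining.

```python
from dataclasses import dataclass, field

@dataclass
class MemoryUnit:
    probe: str      # the exploratory query that produced this unit
    evidence: list  # raw retrieved passages
    summary: str    # consolidated gist reused by later cycles

@dataclass
class GlobalMemoryPool:
    units: list = field(default_factory=list)

    def consolidate(self, probe, evidence):
        # Stub consolidation; a real system would use an LLM summary here.
        summary = " / ".join(evidence)[:200]
        self.units.append(MemoryUnit(probe, evidence, summary))

    def fused_context(self):
        # Coherent context assembled from all past reasoning cycles.
        return "\n".join(u.summary for u in self.units)

pool = GlobalMemoryPool()
pool.consolidate("who betrays Ann?", ["Bob later betrays Ann."])
pool.consolidate("why does Bob do it?", ["Bob resents Ann's success."])
ctx = pool.fused_context()
```

Keeping the probe alongside its evidence is what makes the workspace stateful: the next round of query generation can see not just what was found, but which exploratory path found it.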
Juyuan Wang
School of Future Technology, South China University of Technology
Rongchen Zhao
School of Future Technology, South China University of Technology
Wei Wei
Independent Researcher
Yufeng Wang
School of Future Technology, South China University of Technology
Mo Yu
WeChat AI, Tencent
Jie Zhou
Pattern Recognition Center, WeChat AI, Tencent
Jin Xu
School of Future Technology, South China University of Technology, Pazhou Lab, Guangzhou
Liyan Xu
WeChat AI, Tencent