🤖 AI Summary
Static retrieval indices incur redundant multi-hop traversal on repeated queries, increasing latency and computational overhead. This work proposes a training-free, dynamic memory-augmented framework that accumulates retrieval experience from semantically similar queries, enabling sentence-level memory updates within a lightweight, relation-agnostic index structure. Inspired by cognitive neuroscience, the approach incorporates an uncertainty-aware, Kalman-style gain mechanism to balance stability and adaptability during memory refinement. Experimental results demonstrate an average performance improvement of 3.95%, rising to 8.19% after five rounds of memory accumulation, while reducing inference costs by 61%.
📝 Abstract
Retrieval-Augmented Generation (RAG) grounds large language models with external evidence, but many implementations rely on pre-built indices that remain static after construction. Related queries therefore repeat similar multi-hop traversals, increasing latency and compute. Motivated by schema-based learning in cognitive neuroscience, we propose GAM-RAG, a training-free framework that accumulates retrieval experience from recurring or related queries and updates retrieval memory over time. GAM-RAG builds a lightweight, relation-free hierarchical index whose links capture potential co-occurrence rather than fixed semantic relations. During inference, successful retrieval episodes provide sentence-level feedback, updating sentence memories so that evidence useful for similar reasoning types becomes easier to activate later. To balance stability and adaptability under noisy feedback, we introduce an uncertainty-aware, Kalman-inspired gain rule that jointly updates memory states and perplexity-based uncertainty estimates. It applies fast updates for reliable novel signals and conservative refinement for stable or noisy memories. We provide a theoretical analysis of the update dynamics, and empirically show that GAM-RAG improves average performance by 3.95% over the strongest baseline and by 8.19% after five turns of memory accumulation, while reducing inference cost by 61%. Our code and datasets are available at: https://anonymous.4open.science/r/GAM_RAG-2EF6.
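The paper's exact update rule is not reproduced here, but the Kalman-inspired gain mechanism described above can be illustrated with a minimal scalar sketch. All names (`memory`, `variance`, `observation`, `obs_noise`) are hypothetical; the key property shown is that a high-uncertainty memory receives a large gain (fast update) while a low-uncertainty, stable memory is refined conservatively:

```python
def kalman_style_update(memory, variance, observation, obs_noise):
    """One Kalman-style gain update for a sentence-memory score (illustrative sketch).

    memory      -- current memory state (e.g. an activation score)
    variance    -- current uncertainty of that state (the paper derives
                   uncertainty from perplexity; here it is just a scalar)
    observation -- feedback signal from a successful retrieval episode
    obs_noise   -- assumed noise of the feedback signal
    """
    # Gain is large when the memory is uncertain relative to the feedback noise.
    gain = variance / (variance + obs_noise)
    # Move the memory toward the observation, proportionally to the gain.
    memory = memory + gain * (observation - memory)
    # Uncertainty shrinks after incorporating the observation.
    variance = (1.0 - gain) * variance
    return memory, variance


# An uncertain memory moves quickly toward the feedback signal...
fast_m, fast_v = kalman_style_update(memory=0.0, variance=1.0,
                                     observation=1.0, obs_noise=0.1)
# ...while a stable (low-variance) memory is updated conservatively.
slow_m, slow_v = kalman_style_update(memory=0.0, variance=0.01,
                                     observation=1.0, obs_noise=0.1)
```

In this sketch the first call yields a gain of about 0.91 and the second about 0.09, capturing the stability/adaptability trade-off the abstract describes; the actual GAM-RAG rule additionally couples the update to perplexity-based uncertainty estimates.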