🤖 AI Summary
To address low recall accuracy in large-scale memory-augmented question answering, particularly in similarity-dense scenarios, this paper proposes AssoMem, a framework built around an associative memory graph. AssoMem automatically extracts salient clues from dialogue utterances and constructs a graph anchored on these clues, providing a rich organizational view of the conversational context. It then fuses multi-dimensional retrieval signals (relevance, importance, and temporal alignment) through an adaptive, mutual-information-driven strategy, enabling context-aware memory retrieval and importance-aware ranking. Unlike conventional approaches that rely solely on semantic distance to the query, AssoMem substantially improves recall precision in dense semantic spaces. Evaluated on three public benchmarks and a newly constructed MeetingQA dataset, AssoMem consistently outperforms state-of-the-art baselines, demonstrating both effectiveness and strong generalization in complex conversational memory retrieval.
📝 Abstract
Accurate recall from large-scale memories remains a core challenge for memory-augmented AI assistants performing question answering (QA), especially in similarity-dense scenarios where existing methods rely mainly on semantic distance to the query for retrieval. Inspired by how humans link information associatively, we propose AssoMem, a novel framework that constructs an associative memory graph anchoring dialogue utterances to automatically extracted clues. This structure provides a rich organizational view of the conversational context and facilitates importance-aware ranking. Further, AssoMem integrates multi-dimensional retrieval signals (relevance, importance, and temporal alignment) using an adaptive mutual-information (MI) driven fusion strategy. Extensive experiments across three benchmarks and a newly introduced dataset, MeetingQA, demonstrate that AssoMem consistently outperforms SOTA baselines, verifying its superiority in context-aware memory recall.
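To make the MI-driven fusion idea concrete, here is a minimal sketch (not the authors' implementation; the binning scheme, the binary usefulness labels, and the MI-proportional weighting are all illustrative assumptions): each retrieval signal is weighted by its estimated mutual information with ground-truth usefulness labels, and the fused score is a weighted sum of the per-item signal scores.

```python
# Hedged sketch of adaptive MI-driven score fusion.
# Assumptions (not from the paper): scores are discretized into equal-width
# bins, labels are binary, and fusion weights are MI values normalized to 1.
import math

def mutual_info(signal, labels, bins=4):
    """Estimate MI (nats) between a discretized score and binary labels."""
    n = len(signal)
    lo, hi = min(signal), max(signal)
    width = (hi - lo) / bins or 1.0  # guard against a constant signal
    joint, sig_marg, lab_marg = {}, {}, {}
    for s, y in zip(signal, labels):
        b = min(int((s - lo) / width), bins - 1)
        joint[(b, y)] = joint.get((b, y), 0) + 1
        sig_marg[b] = sig_marg.get(b, 0) + 1
        lab_marg[y] = lab_marg.get(y, 0) + 1
    mi = 0.0
    for (b, y), c in joint.items():
        p_xy = c / n
        mi += p_xy * math.log(p_xy / ((sig_marg[b] / n) * (lab_marg[y] / n)))
    return max(mi, 0.0)

def fuse_scores(signals, labels):
    """Return MI-proportional weights and the fused score per memory item."""
    mis = {name: mutual_info(vals, labels) for name, vals in signals.items()}
    total = sum(mis.values()) or 1.0
    weights = {name: mi / total for name, mi in mis.items()}
    n_items = len(next(iter(signals.values())))
    fused = [sum(weights[name] * signals[name][i] for name in signals)
             for i in range(n_items)]
    return weights, fused
```

A signal that carries no information about usefulness (constant or label-independent) receives near-zero weight, so the fused ranking adapts to whichever of relevance, importance, or temporal alignment is actually predictive on the data at hand.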