🤖 AI Summary
To address hallucination and performance degradation in streaming video understanding caused by multimodal large language models' (MLLMs) reliance on predictive historical memory, this paper first identifies and formalizes the memory-driven hallucination phenomenon. The authors propose a hallucination-aware memory correction framework comprising streaming visual event modeling, predictive memory injection, online hallucination detection, and bias-mitigating memory refinement. Evaluated on multiple streaming video understanding benchmarks, their method significantly improves event reasoning accuracy while reducing memory-induced hallucination rates by over 40%, enabling more robust temporal event understanding. The core contributions are threefold: (1) the first formal definition of memory-induced hallucination in MLLMs; (2) the design of a learnable, end-to-end memory correction paradigm; and (3) empirical validation of its effectiveness and generalizability in realistic streaming scenarios.
📝 Abstract
Multimodal large language models (MLLMs) have demonstrated strong performance in understanding videos holistically, yet their ability to process streaming videos, in which a video is treated as a sequence of visual events, remains underexplored. Intuitively, leveraging past events as memory can enrich contextual and temporal understanding of the current event. In this paper, we show that leveraging memories as contexts helps MLLMs better understand video events. However, because such memories rely on predictions of preceding events, they may contain misinformation, leading to confabulation and degraded performance. To address this, we propose a confabulation-aware memory modification method that mitigates confabulated memory for memory-enhanced event understanding.
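The control flow described above (predict each event with past memories as context, screen new predictions for likely confabulation, and keep only vetted entries in memory) can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's implementation: the `predict` callable stands in for the MLLM, and the confidence-threshold test is a hypothetical placeholder for the paper's online hallucination detection and memory refinement steps.

```python
from dataclasses import dataclass, field

@dataclass
class MemoryEntry:
    event_id: int
    summary: str
    confidence: float  # stand-in score for the detector's trust in this prediction

@dataclass
class MemoryBank:
    threshold: float = 0.5  # hypothetical cutoff; the paper uses a learned detector
    entries: list = field(default_factory=list)

    def context(self):
        # Predictive memory injection: expose vetted past-event summaries
        # as context for the next prediction.
        return [e.summary for e in self.entries]

    def update(self, entry: MemoryEntry):
        # Online screening: low-confidence predictions are treated as likely
        # confabulations and never enter memory (refinement by filtering).
        if entry.confidence >= self.threshold:
            self.entries.append(entry)

def process_stream(events, predict):
    """events: iterable of raw visual events.
    predict(event, context) -> (summary, confidence), standing in for the MLLM."""
    bank = MemoryBank()
    outputs = []
    for i, event in enumerate(events):
        summary, conf = predict(event, bank.context())
        outputs.append(summary)  # every event is still answered...
        bank.update(MemoryEntry(i, summary, conf))  # ...but only trusted ones persist
    return outputs, bank
```

A toy run makes the point: a low-confidence ("blurry") event is answered but excluded from memory, so it cannot contaminate later predictions.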