🤖 AI Summary
This work addresses a critical limitation in existing video reasoning approaches, which often force heterogeneous external knowledge into a unified attention space, leading to attention dilution and increased cognitive load. To overcome this, the authors propose a training-free, auditable video reasoning paradigm that first constructs a question-agnostic video knowledge graph. During inference, a hierarchical multi-agent mechanism retrieves the minimal sufficient subgraph, which is then rendered as visual frames and jointly processed with the original video through multimodal reasoning. This approach pioneers the integration of external knowledge in visual space, fundamentally reshaping knowledge representation and delivery while preserving an interpretable and traceable evidence chain. Extensive experiments on multiple public benchmarks—particularly on knowledge-intensive tasks—demonstrate consistent and significant performance gains, underscoring the pivotal role of knowledge presentation format in reasoning quality.
📝 Abstract
When video reasoning requires external knowledge, many systems built on large multimodal models (LMMs) adopt retrieval augmentation to supply the missing context. Appending textual or multi-clip evidence, however, forces heterogeneous signals into a single attention space. We observe diluted attention and higher cognitive load even on videos of moderate length. The bottleneck is not only what to retrieve but how to represent and fuse external knowledge with the video backbone.

We present Graph-to-Frame RAG (G2F-RAG), a training-free and auditable paradigm that delivers knowledge in the visual space. In the offline stage, an agent builds a question-agnostic video knowledge graph that integrates entities, events, spatial relations, and linked world knowledge. In the online stage, a hierarchical multi-agent controller decides whether external knowledge is needed, retrieves a minimal sufficient subgraph, and renders it as a single reasoning frame appended to the video. LMMs then perform joint reasoning in a unified visual domain. This design reduces cognitive load and leaves an explicit, inspectable evidence trail.

G2F-RAG is plug-and-play across backbones and scales. It yields consistent gains on diverse public benchmarks, with larger improvements in knowledge-intensive settings. Ablations further confirm that knowledge representation and delivery matter. G2F-RAG reframes retrieval as visual-space knowledge fusion for robust and interpretable video reasoning.
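The online stage described above can be sketched in a few lines. This is a hypothetical illustration, not the authors' implementation: the dict-based graph, the `retrieve_subgraph` BFS, and the text-based `render_frame` stand-in are all assumptions made for clarity (the paper renders the subgraph as an actual image frame).

```python
# Minimal sketch of the G2F-RAG online stage: retrieve a small
# subgraph around query-relevant entities, then "render" it as a
# single reasoning frame. All names and the toy graph are illustrative.
from collections import deque

# Question-agnostic video knowledge graph: node -> list of (relation, node).
graph = {
    "chef": [("performs", "chopping"), ("located_in", "kitchen")],
    "chopping": [("acts_on", "onion")],
    "onion": [("world_knowledge", "causes tearing when cut")],
    "kitchen": [],
    "causes tearing when cut": [],
}

def retrieve_subgraph(graph, seeds, max_hops=2):
    """BFS from seed entities, collecting edges up to max_hops away,
    as a stand-in for the 'minimal sufficient subgraph' retrieval."""
    visited, edges = set(seeds), []
    frontier = deque((s, 0) for s in seeds)
    while frontier:
        node, depth = frontier.popleft()
        if depth == max_hops:
            continue  # do not expand beyond the hop budget
        for rel, nbr in graph.get(node, []):
            edges.append((node, rel, nbr))
            if nbr not in visited:
                visited.add(nbr)
                frontier.append((nbr, depth + 1))
    return edges

def render_frame(edges):
    """Stand-in for rendering the subgraph as one visual reasoning
    frame: here the triples are simply laid out as lines of text."""
    return "\n".join(f"{h} --{r}--> {t}" for h, r, t in edges)

frame = render_frame(retrieve_subgraph(graph, ["chef"]))
print(frame)
```

In the real system this frame would be rasterized and appended to the sampled video frames, so the LMM attends to evidence and footage in one visual space rather than mixing text tokens with vision tokens.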