🤖 AI Summary
This work addresses the semantic fragmentation and limited reasoning ability of existing approaches to ultra-long (multi-day) first-person videos, which stem from localized processing and constrained temporal modeling. To overcome these limitations, we propose EgoGraph, a training-free framework that dynamically constructs temporal knowledge graphs: it unifies the representation of core entities such as people, objects, locations, and events, and explicitly models their attributes, interactions, and temporal relationships in a structured form. This enables long-term memory accumulation and complex cross-entity reasoning over extended time horizons. As the first training-free paradigm for long-term semantic integration in first-person video understanding, our method achieves state-of-the-art performance on the EgoLifeQA and EgoR1-bench benchmarks, substantially outperforming conventional clip-level models.
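As an informal illustration of the unified entity representation described above, the sketch below shows one possible way such a temporal knowledge graph could be laid out in code. The class names, entity types, and fields are assumptions made for exposition only, not the schema actually used in the paper.

```python
# Hypothetical sketch of a temporal knowledge-graph schema for egocentric video.
# Entity types, field names, and relation labels are illustrative assumptions,
# not the schema defined in the paper.
from dataclasses import dataclass, field
from enum import Enum
from typing import Dict, List, Tuple


class EntityType(Enum):
    PERSON = "person"
    OBJECT = "object"
    LOCATION = "location"
    EVENT = "event"


@dataclass
class EntityNode:
    """A core entity extracted from the video stream."""
    entity_id: str
    entity_type: EntityType
    attributes: Dict[str, str] = field(default_factory=dict)  # e.g. {"color": "red"}


@dataclass
class TemporalEdge:
    """A timestamped relation between two entities (interaction or temporal link)."""
    source_id: str
    target_id: str
    relation: str                    # e.g. "holds", "located_in", "before"
    time_span: Tuple[float, float]   # (start, end) in seconds since recording start


@dataclass
class GraphMemory:
    """Accumulates nodes and edges as new clips are processed (long-term memory)."""
    nodes: Dict[str, EntityNode] = field(default_factory=dict)
    edges: List[TemporalEdge] = field(default_factory=list)

    def add_observation(self, node: EntityNode, new_edges: List[TemporalEdge]) -> None:
        # Merge new attributes into an existing node instead of duplicating it,
        # so information about an entity accumulates across days rather than
        # fragmenting across clips.
        if node.entity_id in self.nodes:
            self.nodes[node.entity_id].attributes.update(node.attributes)
        else:
            self.nodes[node.entity_id] = node
        self.edges.extend(new_edges)
```

The merge step in `add_observation` is what lets attributes observed on different days accumulate on the same node, which is the intuition behind long-term memory accumulation in this setting.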
📝 Abstract
Ultra-long egocentric videos spanning multiple days present significant challenges for video understanding. Existing approaches still rely on fragmented local processing and limited temporal modeling, restricting their ability to reason over such extended sequences. To address these limitations, we introduce EgoGraph, a training-free and dynamic knowledge-graph construction framework that explicitly encodes long-term, cross-entity dependencies in egocentric video streams. EgoGraph employs a novel egocentric schema that unifies the extraction and abstraction of core entities, such as people, objects, locations, and events, and structurally reasons about their attributes and interactions, yielding a significantly richer and more coherent semantic representation than traditional clip-based video models. Crucially, we develop a temporal relational modeling strategy that captures temporal dependencies across entities and accumulates stable long-term memory over multiple days, enabling complex temporal reasoning. Extensive experiments on the EgoLifeQA and EgoR1-bench benchmarks demonstrate that EgoGraph achieves state-of-the-art performance on long-term video question answering, validating its effectiveness as a new paradigm for ultra-long egocentric video understanding.
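To make the temporal relational modeling concrete, here is a minimal, hedged sketch of how a time-windowed, cross-entity query over such an accumulated graph might be answered. It uses networkx purely for illustration; the relation labels, node names, and example query are hypothetical and are not taken from EgoGraph.

```python
# Illustrative sketch (not the paper's implementation): answering a long-horizon
# question by filtering timestamped relations in an accumulated knowledge graph.
# Relation labels, node names, and the query itself are hypothetical.
import networkx as nx

# A multigraph, so the same entity pair can interact many times across days.
G = nx.MultiDiGraph()

# Nodes carry a type; edges carry a relation label and a (start, end) time span
# in seconds since the first recording day.
G.add_node("user", type="person")
G.add_node("coffee_mug", type="object")
G.add_node("kitchen", type="location")

G.add_edge("user", "coffee_mug", relation="holds", span=(3_600.0, 3_660.0))         # day 1
G.add_edge("coffee_mug", "kitchen", relation="located_in", span=(3_660.0, 90_000.0))
G.add_edge("user", "coffee_mug", relation="holds", span=(180_000.0, 180_120.0))     # day 3


def events_between(graph: nx.MultiDiGraph, subject: str, relation: str,
                   t_start: float, t_end: float):
    """Return the subject's interactions with the given relation whose time span
    overlaps the query window [t_start, t_end], ordered by start time."""
    hits = []
    for _, target, data in graph.out_edges(subject, data=True):
        start, end = data["span"]
        if data["relation"] == relation and start < t_end and end > t_start:
            hits.append((target, data["span"]))
    return sorted(hits, key=lambda h: h[1][0])


# "When did the user last hold the coffee mug during the first three days?"
print(events_between(G, "user", "holds", 0.0, 3 * 86_400.0)[-1])
```

Representing interactions as timestamped edges in a multigraph lets the same entity pair be linked repeatedly across days, so a question such as "when did this last happen" reduces to filtering and sorting edge spans rather than re-reading the raw video.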