🤖 AI Summary
To address the challenge of fine-grained keystep recognition in first-person videos, which is complicated by dynamic backgrounds, frequent motion, and object occlusions, this paper proposes a cross-view aligned heterogeneous graph learning framework. It constructs a sparse heterogeneous graph in which video clips serve as nodes, integrating multimodal features from first-person and third-person videos, textual narrations, depth maps, and object detection labels; keystep recognition is then formulated as a node classification task. The key innovation is a flexible graph-based alignment across viewing perspectives, enabling efficient sparse graph construction and joint multi-view, multimodal reasoning. On the Ego-Exo4D benchmark, the method outperforms state-of-the-art approaches by more than 12 points in accuracy. Ablation studies quantify the contribution of each modality, and the graph structure proves both expressive and computationally efficient.
📝 Abstract
Egocentric videos capture scenes from a wearer's viewpoint, resulting in dynamic backgrounds, frequent motion, and occlusions that pose challenges to accurate keystep recognition. We propose a flexible graph-learning framework for fine-grained keystep recognition that effectively leverages long-term dependencies in egocentric videos and exploits alignment between egocentric and exocentric videos during training for improved inference on egocentric videos. Our approach constructs a graph where each clip of the egocentric video corresponds to a node. During training, we treat each clip of each exocentric video (if available) as an additional node. We examine several strategies for defining connections across these nodes and pose keystep recognition as a node classification task on the constructed graphs. We perform extensive experiments on the Ego-Exo4D dataset and show that our proposed flexible graph-based framework notably outperforms existing methods by more than 12 points in accuracy. Furthermore, the constructed graphs are sparse and compute-efficient. We also present a study on harnessing several multimodal features, including narrations, depth, and object class labels, on a heterogeneous graph, and discuss their respective contributions to keystep recognition performance.
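The graph construction described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the edge strategies (temporal links between consecutive egocentric clips, cross-view links between time-aligned egocentric and exocentric clips) follow the abstract, but the toy neighbor-vote propagation merely stands in for the trained graph neural network that performs node classification, and all function names here are hypothetical.

```python
# Sketch of clip-as-node graph construction with temporal and cross-view
# edges, plus a toy label-propagation stand-in for GNN node classification.
# Assumptions: clips are indexed by time step; exo views are time-aligned
# with the ego view. Names (build_graph, propagate) are illustrative only.
from collections import defaultdict, Counter

def build_graph(n_ego, n_exo_views):
    """Return a sparse edge list. Ego clip t is node t; exo clip t of
    view v is node n_ego*(v+1) + t (a simple hypothetical indexing)."""
    edges = []
    for t in range(n_ego - 1):            # temporal edges between ego clips
        edges.append((t, t + 1))
    for v in range(n_exo_views):          # cross-view alignment edges
        offset = n_ego * (v + 1)
        for t in range(n_ego):
            edges.append((t, offset + t))
    return edges

def propagate(labels, edges, n_rounds=2):
    """Toy node classification: each unlabeled node takes the majority
    label among its labeled neighbors, repeated for a few rounds."""
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    labels = dict(labels)                 # node -> keystep label (or absent)
    for _ in range(n_rounds):
        for node in list(adj):
            if labels.get(node) is None:
                votes = Counter(l for nb in adj[node]
                                if (l := labels.get(nb)) is not None)
                if votes:
                    labels[node] = votes.most_common(1)[0][0]
    return labels

edges = build_graph(n_ego=3, n_exo_views=1)
print(edges)                              # 2 temporal + 3 cross-view edges
print(propagate({0: "whisk", 1: "whisk"}, edges))
```

Note that the edge count grows linearly in the number of clips and views, which is what keeps the graph sparse; a dense clip-to-clip graph would instead grow quadratically.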