Bootstrap Your Own Views: Masked Ego-Exo Modeling for Fine-grained View-invariant Video Representations

📅 2025-03-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the weak generalization caused by view bias in cross-perspective understanding between egocentric and exocentric videos, this paper proposes BYOV, a self-supervised framework that requires no paired data. BYOV jointly models causal temporal dynamics and cross-view feature alignment through two masked-prediction objectives: self-view masking (within the same perspective) and cross-view masking (across perspectives). It introduces an action-compositional representation design to enable fine-grained, view-invariant video representation learning. The method integrates masked spatiotemporal modeling, contrastive learning, and explicit cross-view alignment. Evaluated on four downstream ego-exo tasks—action recognition, viewpoint estimation, cross-view retrieval, and egocentric action anticipation—BYOV consistently outperforms state-of-the-art methods, achieving significant improvements across all metrics. The code is publicly available.

📝 Abstract
View-invariant representation learning from egocentric (first-person, ego) and exocentric (third-person, exo) videos is a promising approach toward generalizing video understanding systems across multiple viewpoints. However, this area has been underexplored due to the substantial differences in perspective, motion patterns, and context between ego and exo views. In this paper, we propose a novel masked ego-exo modeling that promotes both causal temporal dynamics and cross-view alignment, called Bootstrap Your Own Views (BYOV), for fine-grained view-invariant video representation learning from unpaired ego-exo videos. We highlight the importance of capturing the compositional nature of human actions as a basis for robust cross-view understanding. Specifically, self-view masking and cross-view masking predictions are designed to learn view-invariant and powerful representations concurrently. Experimental results demonstrate that our BYOV significantly surpasses existing approaches with notable gains across all metrics in four downstream ego-exo video tasks. The code is available at https://github.com/park-jungin/byov.
Problem

Research questions and friction points this paper is trying to address.

Learn view-invariant representations from unpaired ego-exo videos
Address perspective and motion differences between ego-exo views
Capture compositional human actions for cross-view understanding
Innovation

Methods, ideas, or system contributions that make the work stand out.

Masked ego-exo modeling for view-invariant learning
Self-view and cross-view masking predictions
Fine-grained video representation from unpaired videos
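The dual masked-prediction idea above can be illustrated with a toy sketch. This is a minimal, hypothetical NumPy illustration of the two objectives (self-view and cross-view masked prediction over the same masked ego tokens), not the authors' actual architecture; all shapes, weights, and names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def masked_mse(pred, target, mask):
    """Mean squared error computed only over masked positions."""
    return float(((pred - target) ** 2)[mask].mean())

# Toy setup: T tokens of dimension D per view (stand-ins for video features).
T, D = 8, 4
ego_tokens = rng.normal(size=(T, D))
exo_tokens = rng.normal(size=(T, D))
mask = np.array([True, False, True, False, True, False, True, False])

# Stand-in linear "predictors" (random weights; a real model learns these).
W_self = rng.normal(size=(D, D))
W_cross = rng.normal(size=(D, D))

# Self-view objective: reconstruct masked ego tokens from ego features.
self_pred = ego_tokens @ W_self
# Cross-view objective: reconstruct the same masked ego tokens from exo features,
# which encourages view-invariant features.
cross_pred = exo_tokens @ W_cross

loss = (masked_mse(self_pred, ego_tokens, mask)
        + masked_mse(cross_pred, ego_tokens, mask))
```

In the real framework both objectives are optimized jointly, so the encoder must produce features that are predictive both within a view and across views.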