🤖 AI Summary
This work addresses the limitation of single-perspective (egocentric or exocentric) modeling in video understanding by proposing a dual-perspective collaborative paradigm that approximates human multimodal perception. Methodologically, it systematically introduces three cross-perspective collaboration frameworks: egocentric-enhanced macro-understanding, exocentric-guided egocentric analysis, and joint temporal modeling of both perspectives; it further unifies evaluation dimensions for cross-perspective representation learning, alignment, and fusion. Contributions include: (1) a comprehensive survey of over 100 state-of-the-art works and rigorous benchmarking across major datasets; (2) the first open-source dual-perspective collaborative resource repository (hosted on GitHub); and (3) a clear identification of technical bottlenecks and evolutionary pathways. The work establishes both theoretical foundations and practical benchmarks for embodied intelligence and human-machine collaborative perception.
📝 Abstract
Perceiving the world from both egocentric (first-person) and exocentric (third-person) perspectives is fundamental to human cognition, enabling rich and complementary understanding of dynamic environments. In recent years, enabling machines to leverage the synergistic potential of these dual perspectives has emerged as a compelling research direction in video understanding. In this survey, we provide a comprehensive review of video understanding from both exocentric and egocentric viewpoints. We begin by highlighting the practical applications of integrating egocentric and exocentric techniques, envisioning their potential collaboration across domains. We then identify key research tasks to realize these applications. Next, we systematically organize and review recent advancements along three main research directions: (1) leveraging egocentric data to enhance exocentric understanding, (2) utilizing exocentric data to improve egocentric analysis, and (3) joint learning frameworks that unify both perspectives. For each direction, we analyze a diverse set of tasks and relevant works. Additionally, we discuss benchmark datasets that support research in both perspectives, evaluating their scope, diversity, and applicability. Finally, we discuss limitations of current works and propose promising future research directions. By synthesizing insights from both perspectives, our goal is to inspire advancements in video understanding and artificial intelligence, bringing machines closer to perceiving the world in a human-like manner. A GitHub repo of related works can be found at https://github.com/ayiyayi/Awesome-Egocentric-and-Exocentric-Vision.