Bridging Perspectives: A Survey on Cross-view Collaborative Intelligence with Egocentric-Exocentric Vision

📅 2025-06-06
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the limitation of single-perspective (egocentric or exocentric) modeling in video understanding by proposing a dual-perspective collaborative paradigm that approximates human multi-perspective perception. Methodologically, it systematically introduces three cross-perspective collaboration frameworks: egocentric-enhanced exocentric understanding, exocentric-guided egocentric analysis, and joint learning over both perspectives; it further unifies evaluation dimensions for cross-perspective representation learning, alignment, and fusion. Contributions include: (1) a comprehensive survey of over 100 recent works and an analysis of benchmarks across major datasets; (2) an open-source dual-perspective collaborative resource repository (hosted on GitHub); and (3) a clear identification of technical bottlenecks and promising future directions. The work establishes both theoretical foundations and practical benchmarks for embodied intelligence and human-machine collaborative perception.

๐Ÿ“ Abstract
Perceiving the world from both egocentric (first-person) and exocentric (third-person) perspectives is fundamental to human cognition, enabling rich and complementary understanding of dynamic environments. In recent years, enabling machines to leverage the synergistic potential of these dual perspectives has emerged as a compelling research direction in video understanding. In this survey, we provide a comprehensive review of video understanding from both exocentric and egocentric viewpoints. We begin by highlighting the practical applications of integrating egocentric and exocentric techniques, envisioning their potential collaboration across domains. We then identify key research tasks to realize these applications. Next, we systematically organize and review recent advancements into three main research directions: (1) leveraging egocentric data to enhance exocentric understanding, (2) utilizing exocentric data to improve egocentric analysis, and (3) joint learning frameworks that unify both perspectives. For each direction, we analyze a diverse set of tasks and relevant works. Additionally, we discuss benchmark datasets that support research in both perspectives, evaluating their scope, diversity, and applicability. Finally, we discuss limitations in current works and propose promising future research directions. By synthesizing insights from both perspectives, our goal is to inspire advancements in video understanding and artificial intelligence, bringing machines closer to perceiving the world in a human-like manner. A GitHub repo of related works can be found at https://github.com/ayiyayi/Awesome-Egocentric-and-Exocentric-Vision.
Problem

Research questions and friction points this paper is trying to address.

Integrating first-person and third-person views for video understanding
Enhancing exocentric analysis using egocentric data and vice versa
Developing joint learning frameworks for unified perspective perception
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leveraging egocentric data enhances exocentric understanding
Utilizing exocentric data improves egocentric analysis
Joint learning frameworks unify both perspectives
Yuping He
State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210023, China
Yifei Huang
University of Tokyo, Tokyo, Japan
Guo Chen
State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210023, China
Lidong Lu
Nanjing University
Multimodal Large Language Models
Baoqi Pei
Zhejiang University
Computer Vision, Multimodal Learning
Jilan Xu
Fudan University
Computer Vision, Multimodal Learning, Medical Image Analysis
Tong Lu
State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210023, China
Yoichi Sato
Professor, Institute of Industrial Science, The University of Tokyo
Computer Vision, Human-Computer Interaction