EgoCross: Benchmarking Multimodal Large Language Models for Cross-Domain Egocentric Video Question Answering

📅 2025-08-14
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
While existing multimodal large language models (MLLMs) perform well on egocentric video question answering (EgocentricQA) in everyday scenarios, they generalize poorly to domains with substantial visual and semantic distribution shifts, such as surgery, industrial inspection, extreme sports, and animal vision. Method: We propose EgoCross, the first multimodal benchmark dedicated to cross-domain egocentric video question answering. It comprises approximately 1,000 QA pairs across four challenging domains, supports both open- and closed-ended evaluation, and covers four fine-grained tasks: prediction, recognition, localization, and counting. We further conduct pilot adaptation studies via fine-tuning and reinforcement learning. Contribution/Results: Experiments reveal significant performance degradation of mainstream MLLMs on EgoCross, validating its rigor and utility as a benchmark. EgoCross establishes a new standard and an actionable pathway for advancing generalization-aware MLLM research.

📝 Abstract
Recent advances in Multimodal Large Language Models (MLLMs) have significantly pushed the frontier of egocentric video question answering (EgocentricQA). However, existing benchmarks and studies are mainly limited to common daily activities such as cooking and cleaning. In contrast, real-world deployment inevitably encounters domain shifts, where target domains differ substantially in both visual style and semantic content. To bridge this gap, we introduce EgoCross, a comprehensive benchmark designed to evaluate the cross-domain generalization of MLLMs in EgocentricQA. EgoCross covers four diverse and challenging domains, including surgery, industry, extreme sports, and animal perspective, representing realistic and high-impact application scenarios. It comprises approximately 1,000 QA pairs across 798 video clips, spanning four key QA tasks: prediction, recognition, localization, and counting. Each QA pair provides both OpenQA and CloseQA formats to support fine-grained evaluation. Extensive experiments show that most existing MLLMs, whether general-purpose or egocentric-specialized, struggle to generalize to domains beyond daily life, highlighting the limitations of current models. Furthermore, we conduct several pilot studies, e.g., fine-tuning and reinforcement learning, to explore potential improvements. We hope EgoCross and our accompanying analysis will serve as a foundation for advancing domain-adaptive, robust egocentric video understanding. Data and code will be released at: https://github.com/MyUniverse0726/EgoCross
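To make the dual OpenQA/CloseQA setup concrete, here is a minimal sketch of what one benchmark record and a closed-ended accuracy metric might look like. The field names and values are illustrative assumptions, not the released EgoCross schema:

```python
# Hypothetical sketch of an EgoCross-style QA record; field names are
# illustrative assumptions, not the benchmark's actual released schema.
from dataclasses import dataclass, field


@dataclass
class EgoCrossQA:
    video_clip: str   # ID of the egocentric clip
    domain: str       # one of: surgery, industry, extreme_sports, animal
    task: str         # one of: prediction, recognition, localization, counting
    question: str
    answer: str                                   # ground truth, used for OpenQA
    choices: list = field(default_factory=list)   # options for the CloseQA variant


def closeqa_accuracy(predictions, records):
    """Fraction of multiple-choice predictions that match the ground truth."""
    if not records:
        return 0.0
    correct = sum(p == r.answer for p, r in zip(predictions, records))
    return correct / len(records)


record = EgoCrossQA(
    video_clip="clip_0042",
    domain="surgery",
    task="counting",
    question="How many sutures are placed in the clip?",
    answer="3",
    choices=["1", "2", "3", "4"],
)
print(closeqa_accuracy(["3"], [record]))  # → 1.0
```

The open-ended variant would instead compare the model's free-form answer against `answer` with a softer metric (e.g., LLM-based or string-similarity scoring), which is why keeping both formats per QA pair enables the fine-grained evaluation the abstract describes.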
Problem

Research questions and friction points this paper is trying to address.

Evaluating MLLMs' cross-domain generalization for egocentric video QA
Testing model performance beyond daily activities to specialized domains
Assessing robustness across surgery, industry, sports, and animal perspectives
Innovation

Methods, ideas, or system contributions that make the work stand out.

Cross-domain benchmark for egocentric video QA
Evaluates MLLMs on diverse real-world domains
Includes fine-tuning and reinforcement learning studies
Authors
Yanjun Li
School of Computer Science and Technology, East China Normal University
Yuqian Fu
INSAIT, Institute for Computer Science, Artificial Intelligence and Technology
Tianwen Qian
East China Normal University
Qi'ao Xu
East China Normal University
Silong Dai
School of Computer Science and Technology, East China Normal University
Danda Pani Paudel
INSAIT, Sofia University
Luc Van Gool
Professor of computer vision, INSAIT, Sofia University; em. KU Leuven; em. ETHZ; Toyota Lab TRACE
Xiaoling Wang
School of Computer Science and Technology, East China Normal University