🤖 AI Summary
While existing multimodal large language models (MLLMs) perform well on egocentric question answering (EgocentricQA) in everyday scenarios, they suffer from severe cross-domain generalization bottlenecks, particularly when deployed in domains with substantial visual and semantic distribution shifts, such as surgery, industrial inspection, extreme sports, and animal vision.
Method: We propose EgoCross, the first multimodal benchmark dedicated to cross-domain egocentric video question answering. It comprises approximately 1,000 QA pairs across four highly challenging domains and supports both open-ended and closed-ended evaluation. The benchmark enables fine-grained, multi-task assessment spanning prediction, recognition, localization, and counting, and we further explore domain adaptation via supervised fine-tuning and reinforcement learning.
Contribution/Results: Experiments reveal significant performance degradation of mainstream MLLMs on EgoCross, validating its rigor and utility as a benchmark. EgoCross establishes a new testbed and an actionable pathway for advancing generalization-aware MLLM research.
📝 Abstract
Recent advances in Multimodal Large Language Models (MLLMs) have significantly pushed the frontier of egocentric video question answering (EgocentricQA). However, existing benchmarks and studies are mainly limited to common daily activities such as cooking and cleaning. In contrast, real-world deployment inevitably encounters domain shifts, where target domains differ substantially in both visual style and semantic content. To bridge this gap, we introduce **EgoCross**, a comprehensive benchmark designed to evaluate the cross-domain generalization of MLLMs in EgocentricQA. EgoCross covers four diverse and challenging domains, including surgery, industry, extreme sports, and animal perspective, representing realistic and high-impact application scenarios. It comprises approximately 1,000 QA pairs across 798 video clips, spanning four key QA tasks: prediction, recognition, localization, and counting. Each QA pair provides both OpenQA and CloseQA formats to support fine-grained evaluation. Extensive experiments show that most existing MLLMs, whether general-purpose or egocentric-specialized, struggle to generalize to domains beyond daily life, highlighting the limitations of current models. Furthermore, we conduct several pilot studies, e.g., fine-tuning and reinforcement learning, to explore potential improvements. We hope EgoCross and our accompanying analysis will serve as a foundation for advancing domain-adaptive, robust egocentric video understanding. Data and code will be released at: [https://github.com/MyUniverse0726/EgoCross](https://github.com/MyUniverse0726/EgoCross).
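To make the dual-format evaluation concrete, below is a minimal sketch of what an EgoCross-style QA record and CloseQA scoring loop might look like. The field names, schema, and example values are assumptions for illustration only; the released dataset and official evaluation code may differ.

```python
# Hypothetical EgoCross-style QA record. Each pair carries both an
# open-ended (OpenQA) and a multiple-choice (CloseQA) form, and is
# tagged with one of the four domains and four task types.
qa_pair = {
    "video_id": "surgery_0042",       # one of the 798 clips (example ID)
    "domain": "surgery",              # surgery | industry | extreme_sports | animal
    "task": "counting",               # prediction | recognition | localization | counting
    "question": "How many sutures does the surgeon place?",
    "open_answer": "three",           # free-form reference for OpenQA
    "close_options": ["two", "three", "four", "five"],
    "close_answer": "B",              # reference option letter for CloseQA
}

def closeqa_accuracy(predictions, references):
    """Fraction of multiple-choice predictions matching the reference letters."""
    correct = sum(p.strip().upper() == r.strip().upper()
                  for p, r in zip(predictions, references))
    return correct / len(references)

# Toy example: two of three model answers match the references.
print(closeqa_accuracy(["B", "A", "C"], ["B", "B", "C"]))
```

CloseQA reduces evaluation to exact option matching, which is why benchmarks typically pair it with OpenQA: the former gives a cheap, unambiguous accuracy signal, while the latter probes free-form answer quality.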