X-LeBench: A Benchmark for Extremely Long Egocentric Video Understanding

📅 2025-01-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current multimodal large language models (MLLMs) suffer severe performance degradation, with average accuracy below 35%, on extremely long egocentric videos ranging from 23 minutes to 16.4 hours, exposing fundamental bottlenecks in temporal modeling, memory compression, and cross-clip reasoning. To address this, we introduce X-LeBench, the first benchmark designed specifically for extremely long egocentric video understanding. Our approach leverages LLM-driven life-log synthesis: synthetically generated daily plans are aligned with spatiotemporal segments of real Ego4D videos to produce 432 context-rich, hour-scale video life logs. On top of these life logs, we build an hour-scale evaluation protocol that combines multimodal prompting with synthetic annotations. Extensive experiments systematically expose critical limitations of state-of-the-art models, establishing X-LeBench as a reproducible benchmark and methodology for advancing extremely long video understanding.
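One way to see why hour-scale inputs are hard: a 16.4-hour recording far exceeds typical MLLM context windows, so any evaluation pipeline must window the video and aggregate per-window results. The sketch below is a minimal illustration of that scale arithmetic, not the paper's actual evaluation code; `chunk_video` and its parameters are hypothetical.

```python
import math

def chunk_video(duration_s: float, chunk_s: float = 600.0) -> list[tuple[float, float]]:
    """Split an hour-scale recording into fixed-length (start_s, end_s)
    windows so each window fits a typical MLLM context."""
    n = math.ceil(duration_s / chunk_s)
    return [(i * chunk_s, min((i + 1) * chunk_s, duration_s)) for i in range(n)]

# The longest X-LeBench life log (16.4 h) becomes ~99 ten-minute windows,
# each of which must still be subsampled to a handful of frames for most MLLMs.
windows = chunk_video(16.4 * 3600)
print(len(windows))  # 99
```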

📝 Abstract
Long-form egocentric video understanding provides rich contextual information and unique insights into long-term human behaviors, holding significant potential for applications in embodied intelligence, long-term activity analysis, and personalized assistive technologies. However, existing benchmark datasets primarily focus on single, short-duration videos or moderately long videos up to dozens of minutes, leaving a substantial gap in evaluating extensive, ultra-long egocentric video recordings. To address this, we introduce X-LeBench, a novel benchmark dataset specifically crafted for evaluating tasks on extremely long egocentric video recordings. Leveraging the advanced text processing capabilities of large language models (LLMs), X-LeBench develops a life-logging simulation pipeline that produces realistic, coherent daily plans aligned with real-world video data. This approach enables the flexible integration of synthetic daily plans with real-world footage from Ego4D, a massive-scale egocentric video dataset covering a wide range of daily life scenarios, resulting in 432 simulated video life logs that mirror realistic daily activities in contextually rich scenarios. The video life-log durations span from 23 minutes to 16.4 hours. The evaluation of several baseline systems and multimodal large language models (MLLMs) reveals their poor performance across the board, highlighting the inherent challenges of long-form egocentric video understanding and underscoring the need for more advanced models.
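The abstract describes the simulation pipeline only at a high level: an LLM drafts a coherent daily plan, and each planned activity is then matched to real Ego4D footage to assemble one simulated life log. Below is a minimal sketch of that matching step, assuming hypothetical data structures (`PlanEntry`, `Ego4DClip`) and a simple scenario-label match; the paper's actual pipeline is more involved.

```python
from dataclasses import dataclass

@dataclass
class PlanEntry:
    start_hour: float   # e.g. 8.5 means 08:30
    activity: str       # e.g. "cooking breakfast"
    scenario: str       # scenario label used for matching, e.g. "cooking"

@dataclass
class Ego4DClip:
    video_uid: str
    scenario: str
    duration_s: float

def compose_life_log(plan: list[PlanEntry],
                     clip_pool: list[Ego4DClip]) -> list[Ego4DClip]:
    """Greedily match each planned activity to an unused real clip with
    the same scenario label, yielding one simulated video life log."""
    log: list[Ego4DClip] = []
    used: set[str] = set()
    for entry in plan:
        match = next((c for c in clip_pool
                      if c.scenario == entry.scenario
                      and c.video_uid not in used), None)
        if match is None:
            continue  # no real footage for this activity; drop the slot
        used.add(match.video_uid)
        log.append(match)
    return log
```

Concatenating the matched clips in plan order would yield one of the 432 life logs, with total duration anywhere from 23 minutes to 16.4 hours depending on the plan.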
Problem

Research questions and friction points this paper is trying to address.

Long Video Analysis
Model Enhancement
Egocentric Video Processing
Innovation

Methods, ideas, or system contributions that make the work stand out.

X-LeBench
Long-duration Video Understanding
Simulated Video Life Logs from Real Footage
👥 Authors
Wenqi Zhou
Associate Professor of Information Systems Management, Duquesne University
Business, Information Systems, Marketing, E-commerce, Bayesian Modeling
Kai Cao
University of Manchester
Hao Zheng
X-Intelligence Labs
Xinyi Zheng
PhD in Computer Science, University of Bristol
Miao Liu
Meta
P. O. Kristensson
University of Cambridge
Walterio W. Mayol-Cuevas
University of Bristol
Fan Zhang
University of Bristol
Weizhe Lin
University of Cambridge
Natural Language Processing, Affective Computing, Computer Vision
Junxiao Shen
University of Bristol, X-Intelligence Labs