Ego-EXTRA: video-language Egocentric Dataset for EXpert-TRAinee assistance

📅 2025-12-15
🤖 AI Summary
Current multimodal large language models (MLLMs) struggle to support high-quality video-language collaboration between experts and trainees, particularly in real-time feedback and proactive assistance. To address this, the authors introduce the first expert-trainee collaborative egocentric video-language dataset, comprising 50 hours of authentic procedural videos paired with bidirectional natural language dialogues, collected via a "Wizard of Oz" paradigm in which an expert simulates a wearable intelligent assistant while observing the task exclusively from the trainee's egocentric viewpoint. Leveraging this dataset, they construct over 15,000 high-quality visual question-answer pairs and define and release the first benchmark for evaluating expert-level assistive capabilities. Comprehensive evaluation reveals severe performance deficiencies of state-of-the-art MLLMs on this task. The dataset is publicly released as a foundational resource for research on embodied intelligence and AI-assisted vocational training.

📝 Abstract
We present Ego-EXTRA, a video-language Egocentric Dataset for EXpert-TRAinee assistance. Ego-EXTRA features 50 hours of unscripted egocentric videos of subjects (the trainees) performing procedural activities while guided by real-world experts, who provide guidance and answer specific questions using natural language. Following a "Wizard of Oz" data collection paradigm, the expert enacts a wearable intelligent assistant, observing the activities performed by the trainee exclusively from their egocentric point of view, answering questions when asked by the trainee, and proactively offering suggestions during the procedures. This unique data collection protocol enables Ego-EXTRA to capture high-quality dialogues in which expert-level feedback is provided to the trainee. Two-way dialogues between experts and trainees are recorded, transcribed, and used to create a novel benchmark comprising more than 15k high-quality Visual Question-Answer sets, which we use to evaluate Multimodal Large Language Models. The results show that Ego-EXTRA is challenging and highlight the limitations of current models when used to provide expert-level assistance to the user. The Ego-EXTRA dataset is publicly available to support the benchmarking of egocentric video-language assistants: https://fpv-iplab.github.io/Ego-EXTRA/.
Problem

Research questions and friction points this paper is trying to address.

Creating a video-language dataset for expert-trainee assistance
Evaluating multimodal models on expert-level procedural guidance tasks
Capturing high-quality dialogues from egocentric viewpoints for AI training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Egocentric video dataset with expert-trainee dialogues
Wizard of Oz protocol for natural language assistance simulation
Multimodal benchmark for evaluating expert-level AI assistants
Francesco Ragusa
Department of Mathematics and Computer Science - University of Catania, Italy
Michele Mazzamuto
Università degli Studi di Catania
Artificial Intelligence
Rosario Forte
Department of Mathematics and Computer Science - University of Catania, Italy
Irene D'Ambra
Department of Mathematics and Computer Science - University of Catania, Italy
James Fort
Meta Reality Labs Research, USA
Jakob Engel
Research Director, Meta Reality Labs Research
Computer Vision (egocentric), Machine Perception, SLAM, Reconstruction
Antonino Furnari
Assistant Professor at the University of Catania
Computer Vision
Giovanni Maria Farinella
University of Catania
Computer Vision, Machine Learning