Can Multi-Modal LLMs Provide Live Step-by-Step Task Guidance?

📅 2025-11-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Multimodal large language models (MLLMs) face challenges in real-time, interactive step-by-step task guidance—particularly in synchronizing instruction generation, execution-state awareness, and error detection. To address this, we introduce Qualcomm Interactive Cooking, the first benchmark explicitly designed for real-time instructional guidance, featuring temporally aligned fine-grained instructions and precisely timestamped error-event annotations. We further propose LiveMamba, a streaming multimodal model supporting asynchronous video input and low-latency response generation. Leveraging an extended version of CaptainCook4D, we construct a cooking video dataset with comprehensive error annotations. Systematic evaluation of state-of-the-art MLLMs on this benchmark demonstrates LiveMamba’s significant advantages in instruction accuracy, response timeliness, and error feedback efficacy. This work establishes a reproducible, quantitative evaluation paradigm and a technical baseline for embodied intelligent assistants in real-time guidance scenarios.

📝 Abstract
Multi-modal Large Language Models (LLMs) have advanced conversational abilities but struggle to provide live, interactive step-by-step guidance, a key capability for future AI assistants. Effective guidance requires not only delivering instructions but also detecting their successful execution, as well as identifying and alerting users to mistakes, all of which have to happen in real time. This requires models that are not turn-based but can react asynchronously to a video stream, as well as video data showing users performing tasks, including mistakes and their corrections. To this end, we introduce Qualcomm Interactive Cooking, a new benchmark and dataset built upon CaptainCook4D, which contains user mistakes during task execution. Our dataset and benchmark feature densely annotated, timed instructions and feedback messages, specifically including mistake alerts precisely timestamped to their visual occurrence in the video. We evaluate state-of-the-art multi-modal LLMs on the Qualcomm Interactive Cooking benchmark and introduce LiveMamba, a streaming multi-modal LLM designed for interactive instructional guidance. This work provides the first dedicated benchmark and a strong baseline for developing and evaluating live, situated coaching.
Problem

Research questions and friction points this paper is trying to address.

Provide live step-by-step interactive task guidance
Detect successful execution and identify user mistakes in real-time
React asynchronously to video streams for situated coaching
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces LiveMamba streaming multimodal LLM for real-time guidance
Uses Qualcomm Interactive Cooking benchmark with mistake-timestamped video data
Enables asynchronous video stream reaction for step-by-step coaching
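The key difference from turn-based chat, as described above, is that the model is driven by the video stream itself: it is queried on every incoming frame and may emit an instruction, raise a mistake alert, or stay silent, without waiting for a user turn. A minimal sketch of such a streaming guidance loop is below; the `Event` type, `toy_policy_factory`, and the frame dictionaries are illustrative placeholders, not LiveMamba's actual interface.

```python
from dataclasses import dataclass

@dataclass
class Event:
    t: float        # video timestamp in seconds
    kind: str       # "instruction", "mistake_alert", or "silence"
    text: str = ""

def run_guidance(frames, policy):
    """Feed a video stream frame-by-frame to a streaming policy.

    Unlike a turn-based chat loop, the policy is invoked on every
    frame and decides on its own when to speak and when to stay silent.
    """
    events = []
    for t, frame in frames:
        out = policy(t, frame)          # per-frame decision, no user turn needed
        if out.kind != "silence":
            events.append(out)
    return events

def toy_policy_factory(steps):
    """Toy stand-in for a streaming model: advance to the next
    instruction when the current step is seen completed, and raise
    an alert when a mistake is visible in the frame."""
    state = {"i": 0}
    def policy(t, frame):
        if frame.get("mistake"):
            return Event(t, "mistake_alert", "Check the last step")
        if frame.get("step_done") and state["i"] < len(steps):
            instr = steps[state["i"]]
            state["i"] += 1
            return Event(t, "instruction", instr)
        return Event(t, "silence")
    return policy

# Usage: four frames, two of which trigger instructions and one a mistake alert.
frames = [
    (0.0, {}),
    (1.0, {"step_done": True}),
    (2.0, {"mistake": True}),
    (3.0, {"step_done": True}),
]
events = run_guidance(frames, toy_policy_factory(["Chop onions", "Heat pan"]))
```

Here `events` contains an instruction at t=1.0, a mistake alert at t=2.0, and the second instruction at t=3.0; the timestamped event stream mirrors how the benchmark's timed instructions and mistake alerts are aligned to the video.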