🤖 AI Summary
Existing evaluations of multimodal systems for procedural activities (e.g., cooking) focus on traditional classification tasks such as action recognition, lacking systematic, application-oriented assessment. To address this gap, we propose ProMQA, the first multimodal question-answering benchmark dedicated to procedural activity understanding, comprising 401 LLM-augmented, human-verified QA pairs grounded in recordings of procedural activities paired with their corresponding instructions. We introduce a QA evaluation paradigm oriented toward procedural activities and design a cost-effective, human-in-the-loop annotation framework that combines LLM generation with human verification. Extensive zero-shot and fine-tuned evaluations of state-of-the-art multimodal models, including proprietary ones, reveal substantial gaps relative to human performance, confirming the task's difficulty and the benchmark's diagnostic value.
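To make the evaluation setting concrete, below is a rough sketch of what a zero-shot QA evaluation against reference answers could look like for a benchmark of this kind. The model interface and the crude string-containment scoring rule are placeholder assumptions for illustration, not the paper's actual protocol.

```python
# Hypothetical sketch of zero-shot QA evaluation on a ProMQA-style dataset.
# The scoring rule below is a crude placeholder; a real setup would use
# human or LLM-based answer judging.

def answer_matches(prediction: str, reference: str) -> bool:
    """Placeholder correctness check: does the prediction contain the reference?"""
    return reference.lower() in prediction.lower()

def evaluate(model, examples) -> float:
    """Compute accuracy of a multimodal model on QA examples.

    model: callable(video_path, recipe_text, question) -> answer string (assumed interface).
    examples: iterable of dicts with 'video', 'recipe', 'question', 'answer' keys (assumed layout).
    """
    correct, total = 0, 0
    for ex in examples:
        pred = model(ex["video"], ex["recipe"], ex["question"])
        correct += answer_matches(pred, ex["answer"])
        total += 1
    return correct / max(total, 1)
```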
📝 Abstract
Multimodal systems have great potential to assist humans in procedural activities, where people follow instructions to achieve their goals. Despite diverse application scenarios, such systems are typically evaluated on traditional classification tasks, e.g., action recognition or temporal action segmentation. In this paper, we present a novel evaluation dataset, ProMQA, to measure system advancements in application-oriented scenarios. ProMQA consists of 401 multimodal procedural QA pairs on user recordings of procedural activities coupled with their corresponding instructions. For QA annotation, we take a cost-effective human-LLM collaborative approach, in which existing annotations are augmented with LLM-generated QA pairs that are later verified by humans. We then provide benchmark results to establish baseline performance on ProMQA. Our experiments reveal a significant gap between human performance and that of current systems, including competitive proprietary multimodal models. We hope our dataset sheds light on new aspects of models' multimodal understanding capabilities.
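The human-LLM collaborative annotation described above can be pictured as a two-stage loop: an LLM proposes candidate QA pairs from an instruction and the annotation of a user's recording, and a human keeps only the valid ones. The sketch below is a minimal illustration under that reading; the function names, prompt, data layout, and console-based verification step are assumptions, not the authors' implementation.

```python
# Minimal sketch of a human-in-the-loop QA annotation pipeline:
# LLM-generated candidates followed by human verification. All names
# and formats here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class QAPair:
    question: str
    answer: str
    verified: bool = False

def generate_qa_candidates(recipe: str, activity_log: str, llm) -> list[QAPair]:
    """Ask an LLM (any text-completion callable) to propose procedural QA pairs
    from an instruction (recipe) and the annotation of a user's recording."""
    prompt = (
        "Given the recipe and the user's recorded steps, write questions about "
        "the procedure (e.g., skipped, reordered, or mistaken steps) with answers.\n"
        f"Recipe:\n{recipe}\n\nRecorded steps:\n{activity_log}\n"
        "Format each pair as: Q: ... A: ..."
    )
    raw = llm(prompt)
    pairs = []
    for block in raw.split("Q:")[1:]:
        q, _, a = block.partition("A:")
        if a.strip():
            pairs.append(QAPair(question=q.strip(), answer=a.strip()))
    return pairs

def human_verify(pairs: list[QAPair]) -> list[QAPair]:
    """Keep only candidates a human annotator accepts (stubbed as a console prompt)."""
    kept = []
    for p in pairs:
        ok = input(f"Keep this pair? [y/n]\nQ: {p.question}\nA: {p.answer}\n> ")
        if ok.strip().lower().startswith("y"):
            p.verified = True
            kept.append(p)
    return kept
```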