From Instructions to Assistance: A Dataset Aligning Instruction Manuals with Assembly Videos for Evaluating Multimodal LLMs

📅 2026-03-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work evaluates the capability of current open-source multimodal large language models (MLMs) to provide real-time assistance in technical procedural tasks, with a focus on their limitations in understanding assembly instructions and aligning textual manuals with corresponding video actions. To this end, we introduce the first fine-grained Manual-to-Action Dataset (M2AD), which precisely aligns furniture assembly videos with their respective instruction steps. Leveraging M2AD, we systematically assess MLMs on key competencies including procedural comprehension, step tracking, and cross-modal referencing between text and images. Our findings reveal that, despite some models demonstrating preliminary procedural reasoning abilities, architectural and hardware constraints hinder their effective processing of multiple image inputs and complex interleaved text–image reasoning, thereby exposing significant gaps in their applicability to real-world technical assistance scenarios.

📝 Abstract
The recent advancements introduced by Large Language Models (LLMs) have transformed how Artificial Intelligence (AI) can support complex, real-world tasks, pushing research beyond text boundaries towards multimodal contexts and leading to Multimodal Large Language Models (MLMs). Given the current adoption of LLM-based assistants for solving technical or domain-specific problems, the natural continuation of this trend is to extend the input domains of these assistants by exploiting MLMs. Ideally, these MLMs should serve as real-time assistants in procedural tasks, integrating a view of the environment the assisted user is in, or, even better, sharing the same point of view via Virtual Reality (VR) or Augmented Reality (AR) supports, so as to reason over the same scenario the user is experiencing. With this work, we aim to evaluate the quality of currently openly available MLMs in providing this kind of assistance for technical tasks. To this end, we annotated a dataset of furniture assembly with step-by-step labels and manual references: the Manual-to-Action Dataset (M2AD). We used this dataset to assess (1) to what extent the reasoning abilities of MLMs can reduce the need for detailed labelling, allowing for more efficient, cost-effective annotation practices, (2) whether MLMs are able to track the progression of assembly steps, and (3) whether MLMs can refer correctly to the instruction manual pages. Our results show that while some models understand procedural sequences, their performance is limited by architectural and hardware constraints, highlighting the need for multi-image and interleaved text–image reasoning.
Problem

Research questions and friction points this paper is trying to address.

Multimodal Large Language Models
Instruction Manuals
Assembly Videos
Procedural Tasks
Real-time Assistance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multimodal Large Language Models
Instruction-to-Action Alignment
Procedural Task Assistance
Manual-to-Video Dataset
Step-wise Reasoning
🔎 Similar Papers
2024-01-19 · IEEE Workshop/Winter Conference on Applications of Computer Vision · Citations: 14