$I^2G$: Generating Instructional Illustrations via Text-Conditioned Diffusion

📅 2025-05-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge that plain-text instructions inadequately convey complex physical actions and spatial relationships. We propose a text-driven framework for generating procedural visual instructions, featuring: (1) a constituency parser-based text encoder that explicitly models actions, objects, and spatial constraints; (2) a pairwise discourse coherence mechanism that enforces temporal and logical consistency across multi-step instructions; and (3) a novel evaluation protocol tailored to procedural language-image alignment. Technically, the approach integrates text-conditioned diffusion modeling with multi-stage structured parsing. Evaluated on the HTStep, CaptainCook4D, and WikiAll benchmarks, the method significantly outperforms prior approaches, yielding images with improved semantic accuracy, step-wise coherence, and spatial plausibility. The work establishes an interpretable, quantitatively measurable paradigm for instruction visualization in embodied intelligence.

📝 Abstract
The effective communication of procedural knowledge remains a significant challenge in natural language processing (NLP), as purely textual instructions often fail to convey complex physical actions and spatial relationships. We address this limitation by proposing a language-driven framework that translates procedural text into coherent visual instructions. Our approach models the linguistic structure of instructional content by decomposing it into goal statements and sequential steps, then conditioning visual generation on these linguistic elements. We introduce three key innovations: (1) a constituency parser-based text encoding mechanism that preserves semantic completeness even with lengthy instructions, (2) a pairwise discourse coherence model that maintains consistency across instruction sequences, and (3) a novel evaluation protocol specifically designed for procedural language-to-image alignment. Our experiments across three instructional datasets (HTStep, CaptainCook4D, and WikiAll) demonstrate that our method significantly outperforms existing baselines in generating visuals that accurately reflect the linguistic content and sequential nature of instructions. This work contributes to the growing body of research on grounding procedural language in visual content, with applications spanning education, task guidance, and multimodal language understanding.
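The core idea behind the parser-based encoding in (1) — splitting a long instruction into constituent phrases so that no clause is truncated by the encoder's context limit, encoding each phrase, and pooling the results — can be sketched in plain Python. Everything below is an illustrative stand-in, not the paper's implementation: `toy_embed` replaces a real text encoder, the comma/conjunction split replaces a real constituency parser, and `MAX_TOKENS` is an arbitrary assumed limit.

```python
import re

MAX_TOKENS = 8  # hypothetical encoder context limit, not from the paper


def toy_embed(phrase: str) -> list:
    """Stand-in for a real text encoder: a bag-of-letters vector."""
    vec = [0.0] * 26
    for ch in phrase.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    return vec


def split_into_constituents(instruction: str) -> list:
    """Crude proxy for a constituency parse: split on commas and
    conjunctions. A real system would use an actual parser here."""
    parts = re.split(r",| and | then ", instruction)
    return [p.strip() for p in parts if p.strip()]


def encode_instruction(instruction: str) -> list:
    """Encode each constituent separately, then mean-pool, so a long
    instruction is never truncated at MAX_TOKENS as a whole."""
    phrases = split_into_constituents(instruction)
    embeddings = []
    for p in phrases:
        tokens = p.split()[:MAX_TOKENS]  # each short phrase fits the limit
        embeddings.append(toy_embed(" ".join(tokens)))
    n = len(embeddings)
    return [sum(col) / n for col in zip(*embeddings)]


step = "Whisk the eggs in a bowl, add a pinch of salt, then pour into the pan"
print(len(split_into_constituents(step)))  # → 3
```

Encoded whole, this step would exceed the assumed 8-token window; split into three constituents, each clause survives intact before pooling.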
Problem

Research questions and friction points this paper is trying to address.

Translating procedural text into visual instructions
Preserving semantic completeness in lengthy instructions
Ensuring consistency across sequential instruction steps
Innovation

Methods, ideas, or system contributions that make the work stand out.

Text encoding with constituency parser for semantic completeness
Pairwise discourse model for sequential consistency
Novel evaluation protocol for language-image alignment
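One simple way to realize the pairwise consistency idea above — offered purely as a sketch, since the paper does not publish its mechanism — is to carry part of each step's conditioning vector forward into the next step, so consecutive generations share context. The mixing weight `alpha` is a hypothetical parameter, not a value from the paper.

```python
def coherent_conditioning(step_embeddings, alpha=0.5):
    """Mix each step's embedding with the (already mixed) embedding of
    the previous step, so adjacent steps condition the generator on
    overlapping context. alpha controls how much of the current step
    is kept versus carried over from the previous one."""
    conditioned = [list(step_embeddings[0])]  # step 0 has no predecessor
    for t in range(1, len(step_embeddings)):
        prev, cur = conditioned[t - 1], step_embeddings[t]
        conditioned.append(
            [alpha * c + (1 - alpha) * p for c, p in zip(cur, prev)]
        )
    return conditioned


# Three toy step embeddings; each output would condition one diffusion step.
embs = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
print(coherent_conditioning(embs))
# → [[1.0, 0.0], [0.5, 0.5], [0.75, 0.75]]
```

Because mixing is applied recursively, context from step 1 still influences step 3 with geometrically decaying weight, which is one plausible reading of "pairwise" coherence propagated across a sequence.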