IUT-Plug: A Plug-in tool for Interleaved Image-Text Generation

📅 2025-10-12
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Contemporary vision-language models (VLMs) suffer from contextual drift—including logical discontinuity, entity identity confusion, and stylistic inconsistency—during interleaved image-text generation, limiting their generalization in complex multimodal tasks. To address this, we propose the Image Understanding Tree (IUT), a hierarchical symbolic scene parsing structure, and design IUT-Plug, a dynamic extraction plugin that tightly couples cross-modal co-generation, narrative flow control, and image synthesis to systematically mitigate all three drift types. Evaluated on a novel, human-annotated benchmark of 3,000 image-text QA instances and a dynamic assessment protocol, our approach achieves significant improvements in logical consistency, entity fidelity, and stylistic stability across multiple standard datasets, with average accuracy gains of 12.6%. This work pioneers the integration of symbolic scene trees into VLM generation architectures, establishing an interpretable and controllable paradigm for multimodal contextual consistency modeling.

📝 Abstract
Existing vision-language models (VLMs), including GPT-4 and DALL-E, often struggle to preserve logic, object identity, and style in multimodal image-text generation. This limitation significantly hinders the generalization capability of VLMs in complex image-text input-output scenarios. To address this issue, we propose IUT-Plug, a module grounded in an Image Understanding Tree (IUT), which enhances existing interleaved VLMs through explicit structured reasoning, thereby mitigating context drift in logic, entity identity, and style. The proposed framework operates in two stages. (1) A dynamic IUT-Plug extraction module parses visual scenes into hierarchical symbolic structures. (2) A coordinated narrative-flow and image synthesis mechanism ensures cross-modal consistency. To evaluate our approach, we construct a novel benchmark based on 3,000 real human-generated question-answer pairs over fine-tuned large models, introducing a dynamic evaluation protocol for quantifying context drift in interleaved VLMs. Experimental results demonstrate that IUT-Plug not only improves accuracy on established benchmarks but also effectively alleviates the three critical forms of context drift across diverse multimodal question answering (QA) scenarios.
Problem

Research questions and friction points this paper is trying to address.

Addresses logic and style preservation in multimodal generation
Mitigates context drift in visual language models
Enhances cross-modal consistency through structured reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Image Understanding Tree for structured reasoning
Dynamic extraction of hierarchical symbolic structures
Coordinated narrative and image synthesis mechanism
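The hierarchical symbolic structure at the heart of the method can be pictured as a small tree data type. The sketch below is a hypothetical illustration only (the names `IUTNode` and `find` are not from the paper): it shows how a scene parsed into entities with attributes could be queried across generation turns to re-anchor entity identity and style, the kind of consistency check the IUT is meant to enable.

```python
from dataclasses import dataclass, field

@dataclass
class IUTNode:
    """One node of a hypothetical Image Understanding Tree:
    a symbolic label plus attributes, with children forming the
    scene hierarchy (scene -> entities -> parts/attributes)."""
    label: str
    attributes: dict = field(default_factory=dict)
    children: list = field(default_factory=list)

    def find(self, label: str):
        """Depth-first lookup of a node by label -- the kind of query a
        consistency check might use to verify entity identity or style
        before synthesizing the next image or text segment."""
        if self.label == label:
            return self
        for child in self.children:
            hit = child.find(label)
            if hit is not None:
                return hit
        return None

# Toy scene: symbolic state that persists across interleaved turns,
# so later generations can be checked against the same tree.
scene = IUTNode("scene", {"style": "watercolor"}, [
    IUTNode("cat", {"color": "black"}),
    IUTNode("sofa", {"color": "red"}),
])

assert scene.find("cat").attributes["color"] == "black"
assert scene.find("dog") is None  # entity not yet introduced
```

Under this reading, "entity identity confusion" corresponds to a later generation contradicting an attribute already fixed in the tree, and "stylistic drift" to diverging from scene-level attributes such as `style`.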
Zeteng Lin
Hong Kong University of Science and Technology (Guangzhou)
Xingxing Li
GFZ
GPS/GNSS precise positioning and orbit determination; GNSS data processing; GNSS seismology; GNSS meteorology
Wen You
Hong Kong University of Science and Technology (Guangzhou)
Xiaoyang Li
Southern University of Science and Technology
Integrated sensing-communication-computation; edge intelligence; network optimization
Zehan Lu
Hong Kong University of Science and Technology (Guangzhou)
Yujun Cai
NTU → Meta → Lecturer (Assistant Professor) @ UQ
Multi-Modal Perception; Vision-Language Models
Jing Tang
Hong Kong University of Science and Technology (Guangzhou); Hong Kong University of Science and Technology