🤖 AI Summary
Contemporary vision-language models (VLMs) suffer from contextual drift during interleaved image-text generation, manifesting as logical discontinuity, entity identity confusion, and stylistic inconsistency, which limits their generalization in complex multimodal tasks. To address this, we propose the Image Understanding Tree (IUT), a hierarchical symbolic scene parsing structure, and design IUT-Plug, a dynamic extraction plugin that tightly couples cross-modal co-generation, narrative flow control, and image synthesis to systematically mitigate all three drift types. Evaluated on a novel, human-annotated benchmark of 3,000 image-text QA instances under a dynamic assessment protocol, our approach achieves significant improvements in logical consistency, entity fidelity, and stylistic stability across multiple standard datasets, with average accuracy gains of 12.6%. This work pioneers the integration of symbolic scene trees into VLM generation architectures, establishing an interpretable and controllable paradigm for multimodal contextual consistency modeling.
📝 Abstract
Existing vision-language models (VLMs), including GPT-4 and DALL-E, often struggle to preserve logic, object identity, and style in multimodal image-text generation. This limitation significantly hinders the generalization capability of VLMs in complex image-text input-output scenarios. To address this issue, we propose IUT-Plug, a module grounded in an Image Understanding Tree (IUT), which enhances existing interleaved VLMs through explicit structured reasoning, thereby mitigating context drift in logic, entity identity, and style. The proposed framework operates in two stages: (1) a dynamic IUT-Plug extraction module parses visual scenes into hierarchical symbolic structures, and (2) a coordinated narrative-flow and image synthesis mechanism ensures cross-modal consistency. To evaluate our approach, we construct a novel benchmark of 3,000 real human-generated question-answer pairs collected over fine-tuned large models, together with a dynamic evaluation protocol for quantifying context drift in interleaved VLMs. Experimental results demonstrate that IUT-Plug not only improves accuracy on established benchmarks but also effectively alleviates the three critical forms of context drift across diverse multimodal question answering (QA) scenarios.
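To make the idea of a hierarchical symbolic scene parse concrete, the following is a minimal, illustrative sketch of what an IUT-like structure could look like. The abstract specifies only that visual scenes are parsed into hierarchical symbolic structures; the node schema here (`label`, `attributes`, `children`) and the `flatten` serialization are assumptions for illustration, not the authors' actual design.

```python
from dataclasses import dataclass, field

@dataclass
class IUTNode:
    """One node in a hypothetical Image Understanding Tree (IUT).

    The field names are illustrative assumptions: the paper describes
    the IUT only as a hierarchical symbolic parse of a visual scene.
    """
    label: str                                       # symbolic category, e.g. "scene", "object"
    attributes: dict = field(default_factory=dict)   # symbolic properties, e.g. {"color": "brown"}
    children: list = field(default_factory=list)     # sub-structures of this node

    def add(self, child: "IUTNode") -> "IUTNode":
        """Attach a child node and return it for chaining."""
        self.children.append(child)
        return child

    def flatten(self) -> list:
        """Depth-first list of (depth, label) pairs — one plausible way to
        serialize the tree into text that conditions later generation steps."""
        out = [(0, self.label)]
        for c in self.children:
            out += [(d + 1, lbl) for d, lbl in c.flatten()]
        return out

# Build a toy parse: a park scene containing a dog that has a collar.
scene = IUTNode("scene", {"setting": "park"})
dog = scene.add(IUTNode("object", {"category": "dog", "color": "brown"}))
dog.add(IUTNode("part", {"category": "collar"}))

print(scene.flatten())
# [(0, 'scene'), (1, 'object'), (2, 'part')]
```

Keeping the parse explicit and symbolic like this is what would make consistency checkable: a later generation step can compare its output against the stored entities and attributes rather than relying on latent context alone.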