🤖 AI Summary
Current text-to-image (T2I) models struggle to support multi-round, dynamic image editing by non-expert users, primarily because they rely on expert-level prompt engineering and insufficiently model output-modality recognition and image coherence within multimodal dialogues. To address this, we propose DialogGen, a three-stage pipeline—drawing prompt alignment, careful training data curation, and error correction—that aligns off-the-shelf multimodal large language models (MLLMs) with T2I models to build a Multimodal Interactive Dialogue System (MIDS), combining cross-modal alignment training with feedback-driven iterative correction. We further introduce DialogBen, a comprehensive bilingual multimodal dialogue benchmark that jointly evaluates modality-switching accuracy and image-editing coherence. Experiments on DialogBen and a user study demonstrate that DialogGen consistently outperforms state-of-the-art methods in modality recognition accuracy, image coherence, and user satisfaction.
📝 Abstract
Text-to-image (T2I) generation models have advanced significantly in recent years. However, effective interaction with these models is challenging for average users, who must master specialized prompt engineering and cannot perform multi-turn image generation, hindering a dynamic and iterative creation process. Recent work has attempted to equip Multi-modal Large Language Models (MLLMs) with T2I models to bring users' natural language instructions into reality. This extends the output modalities of MLLMs and enhances the multi-turn generation quality of T2I models, thanks to the strong multi-modal comprehension ability of MLLMs. However, many of these works struggle to identify the correct output modality and to generate coherent images accordingly as the number of output modalities increases and conversations grow longer. Therefore, we propose DialogGen, an effective pipeline that aligns off-the-shelf MLLMs and T2I models to build a Multi-modal Interactive Dialogue System (MIDS) for multi-turn Text-to-Image generation. It is composed of drawing prompt alignment, careful training data curation, and error correction. Moreover, as the field of MIDS flourishes, comprehensive benchmarks are urgently needed to evaluate MIDS fairly in terms of output modality correctness and multi-modal output coherence. To address this issue, we introduce the Multi-modal Dialogue Benchmark (DialogBen), a comprehensive bilingual benchmark designed to assess the ability of MLLMs to generate accurate and coherent multi-modal content that supports image editing. It contains two evaluation metrics to measure the model's ability to switch modalities and the coherence of its output images. Our extensive experiments on DialogBen and a user study demonstrate the effectiveness of DialogGen compared with other state-of-the-art models.
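The control flow the abstract describes can be illustrated with a minimal sketch: per dialogue turn, the MLLM decides whether the output modality is text or image, and image turns are routed to the T2I model via an aligned drawing prompt. All names here (`classify_modality`, `route_turn`, the keyword-based mock classifier) are hypothetical stand-ins, not the paper's actual implementation.

```python
def classify_modality(instruction: str) -> str:
    """Stand-in for the MLLM's output-modality decision: 'text' or 'image'.
    The real system uses a trained model, not keyword matching."""
    image_cues = ("draw", "paint", "edit", "generate an image")
    return "image" if any(cue in instruction.lower() for cue in image_cues) else "text"

def route_turn(instruction: str, history: list) -> dict:
    """Route one dialogue turn to either a text reply or a T2I drawing prompt."""
    if classify_modality(instruction) == "image":
        # Drawing prompt alignment (sketched): condense the dialogue history
        # plus the new instruction into a single prompt for the T2I backend.
        prompt = " ".join(history + [instruction])
        return {"modality": "image", "t2i_prompt": prompt}
    return {"modality": "text", "reply": f"(text answer to: {instruction})"}

history = ["A watercolor cat sitting on a windowsill."]
turn = route_turn("Now edit it so the cat wears a red scarf.", history)
print(turn["modality"])  # this editing turn is routed to the T2I backend
```

The point of the sketch is the routing step itself: modality errors at this decision point are exactly what DialogBen's modality-switching metric measures, while the quality of the condensed prompt drives its image-coherence metric.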