Interleaved Scene Graph for Interleaved Text-and-Image Generation Assessment

📅 2024-11-26
🏛️ arXiv.org
📈 Citations: 5
Influential: 0
📄 PDF
🤖 AI Summary
This paper addresses the challenge of evaluating intra-modal and cross-modal consistency in interleaved text-and-image generation. Methodologically, it proposes ISG, a multi-granularity evaluation framework featuring: (1) a four-level joint assessment paradigm (holistic, structural, block-level, and image-level); (2) scene graph–based modeling of fine-grained cross-modal relationships between text and image blocks; (3) an interpretable question-answering feedback mechanism; and (4) ISG-Bench, a 1,150-sample benchmark capturing complex language-vision dependencies, alongside a baseline agent, ISG-Agent. Experiments show that current unified multimodal models perform poorly on interleaved generation tasks: compositional approaches that pair separate language and image models improve holistic-level performance by 111% over unified models yet remain weak at the block and image levels, while ISG-Agent's "plan-execute-refine" pipeline achieves a 122% improvement over baselines, significantly enhancing block-level and image-level consistency.

📝 Abstract
Many real-world user queries (e.g., "How to make egg fried rice?") could benefit from systems capable of generating responses with both textual steps and accompanying images, similar to a cookbook. Models designed to generate interleaved text and images face challenges in ensuring consistency within and across these modalities. To address these challenges, we present ISG, a comprehensive evaluation framework for interleaved text-and-image generation. ISG leverages a scene graph structure to capture relationships between text and image blocks, evaluating responses on four levels of granularity: holistic, structural, block-level, and image-specific. This multi-tiered evaluation allows for a nuanced assessment of consistency, coherence, and accuracy, and provides interpretable question-answer feedback. In conjunction with ISG, we introduce a benchmark, ISG-Bench, encompassing 1,150 samples across 8 categories and 21 subcategories. This benchmark dataset includes complex language-vision dependencies and golden answers to evaluate models effectively on vision-centric tasks such as style transfer, a challenging area for current models. Using ISG-Bench, we demonstrate that recent unified vision-language models perform poorly on generating interleaved content. While compositional approaches that combine separate language and image models show a 111% improvement over unified models at the holistic level, their performance remains suboptimal at both block and image levels. To facilitate future work, we develop ISG-Agent, a baseline agent employing a "plan-execute-refine" pipeline to invoke tools, achieving a 122% performance improvement.
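The scene-graph evaluation the abstract describes can be sketched as a small data structure: blocks (text or image) connected by dependency edges, from which QA-style checks are derived at the four granularities. This is a minimal illustrative sketch; the class names, edge relations, and question templates here are assumptions, not the paper's actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Block:
    """One block of an interleaved response: either text or an image."""
    block_id: str
    kind: str       # "text" or "image"
    content: str    # text body, or an image path/identifier

@dataclass
class SceneGraph:
    """Hypothetical sketch of an ISG-style scene graph: blocks plus
    directed relations between text and image blocks."""
    blocks: dict = field(default_factory=dict)
    edges: list = field(default_factory=list)  # (src_id, dst_id, relation)

    def add_block(self, block: Block) -> None:
        self.blocks[block.block_id] = block

    def add_edge(self, src: str, dst: str, relation: str) -> None:
        self.edges.append((src, dst, relation))

    def questions(self) -> list:
        """Derive interpretable QA-style checks at four granularities."""
        qs = ["holistic: does the full response answer the query?",
              "structural: does the block sequence match the expected layout?"]
        for src, dst, rel in self.edges:          # block-level consistency
            qs.append(f"block: does {dst} {rel} {src}?")
        for b in self.blocks.values():            # image-specific checks
            if b.kind == "image":
                qs.append(f"image: does {b.block_id} depict its described content?")
        return qs

g = SceneGraph()
g.add_block(Block("t1", "text", "Step 1: crack two eggs into a bowl."))
g.add_block(Block("i1", "image", "step1.png"))
g.add_edge("t1", "i1", "illustrate")
print(g.questions())
```

In this toy example the graph yields four checks, one per granularity level; a real response would yield one block-level question per cross-modal edge and one image-specific question per generated image.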
Problem

Research questions and friction points this paper is trying to address.

Evaluating consistency in interleaved text-image generation
Assessing multi-modal coherence at varying granularity levels
Benchmarking performance on vision-language integration tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

ISG framework evaluates text-image consistency
ISG-Bench benchmark tests vision-language models
ISG-Agent uses plan-execute-refine pipeline
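The "plan-execute-refine" pipeline named above can be sketched as a simple loop: decompose the query into steps, invoke a tool per step, then iteratively refine blocks flagged by consistency feedback. All function bodies and the `tools` mapping here are illustrative stubs, not ISG-Agent's actual tool interface.

```python
def plan(query):
    # Stub: decompose the query into alternating text/image steps.
    return [("text", f"Describe: {query}"), ("image", f"Illustrate: {query}")]

def execute(step, tools):
    kind, instruction = step
    return tools[kind](instruction)  # dispatch to the tool for this modality

def critique(response):
    # Stub for QA-style feedback: indices of blocks flagged as inconsistent.
    return [i for i, block in enumerate(response) if "INCONSISTENT" in block]

def refine(block, issues):
    # Stub: repair a flagged block given the feedback.
    return block.replace("INCONSISTENT", "fixed")

def run_agent(query, tools, max_refinements=2):
    """Hypothetical plan-execute-refine loop in the spirit of ISG-Agent."""
    response = [execute(s, tools) for s in plan(query)]
    for _ in range(max_refinements):
        issues = critique(response)
        if not issues:          # stop once feedback reports no inconsistencies
            break
        response = [refine(b, issues) for b in response]
    return response

tools = {"text": lambda p: f"[text] {p}", "image": lambda p: f"[img] {p}"}
print(run_agent("egg fried rice", tools))
```

The refinement loop is where the framework's QA feedback would plug in: each unanswered consistency question becomes an issue driving another refine pass.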