🤖 AI Summary
Existing multimodal large language models (MLLMs) exhibit weak capabilities in interleaved image-text generation, i.e., open-domain generation in which images and text alternate within a single output. At the same time, current benchmarks for this task are limited in scale and scenario diversity, hindering rigorous evaluation.
Method: We introduce OpenING, a comprehensive benchmark for this task comprising 5,400 human-annotated instances across 56 real-world scenarios. We formally define and quantify interleaved generation capability; develop IntJudge, a judge model for open-ended multimodal generation trained with a novel data pipeline; and design a multi-stage annotation protocol paired with a fine-grained, decoupled evaluation framework.
Results: IntJudge achieves an 82.42% agreement rate with human judgments, outperforming GPT-based evaluators by 11.34%. Experiments reveal that state-of-the-art MLLMs still fall well short on interleaved generation, positioning OpenING as a rigorous evaluation standard for next-generation multimodal generative models.
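
For intuition, the agreement rate cited above can be read as the fraction of pairwise comparisons on which the judge's verdict matches the human annotator's. The following is a minimal sketch under that assumption; the A/B/tie verdict labels and the function are hypothetical illustrations, not the paper's evaluation code.

```python
# Minimal sketch of a judge-human agreement rate over pairwise verdicts.
# Verdict labels ("A", "B", "tie") are assumptions for illustration,
# not taken from the paper's actual evaluation pipeline.

def agreement_rate(judge_verdicts: list[str], human_verdicts: list[str]) -> float:
    """Fraction of comparisons where the judge's verdict matches the human's."""
    if len(judge_verdicts) != len(human_verdicts):
        raise ValueError("Verdict lists must be aligned one-to-one.")
    matches = sum(j == h for j, h in zip(judge_verdicts, human_verdicts))
    return matches / len(judge_verdicts)

judge = ["A", "B", "tie", "A", "B"]
human = ["A", "B", "A", "A", "B"]
print(f"{agreement_rate(judge, human):.2%}")  # 80.00%
```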
📝 Abstract
Multimodal Large Language Models (MLLMs) have made significant strides in visual understanding and generation tasks. However, generating interleaved image-text content remains a challenge, as it requires integrated multimodal understanding and generation abilities. While progress in unified models offers new solutions, existing benchmarks are insufficient for evaluating these methods due to limitations in data size and diversity. To bridge this gap, we introduce OpenING, a comprehensive benchmark comprising 5,400 high-quality human-annotated instances across 56 real-world tasks. OpenING covers diverse daily scenarios such as travel guides, design, and brainstorming, offering a robust platform to challenge interleaved generation methods. In addition, we present IntJudge, a judge model for evaluating open-ended multimodal generation methods. Trained with a novel data pipeline, our IntJudge achieves an agreement rate of 82.42% with human judgments, outperforming GPT-based evaluators by 11.34%. Extensive experiments on OpenING reveal that current interleaved generation methods still have substantial room for improvement. We further present key findings on interleaved image-text generation to guide the development of next-generation models.
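
To make the pairwise evaluation protocol concrete, the sketch below shows how a judge model such as IntJudge might be queried to compare two methods' interleaved outputs for the same instruction. The `JudgeModel` interface, `InterleavedOutput` structure, and tie-scoring rule are assumptions for illustration and do not reflect IntJudge's released API.

```python
# Hypothetical pairwise-judging loop for interleaved image-text outputs.
# All names here (JudgeModel, InterleavedOutput, compare) are illustrative
# assumptions, not the actual IntJudge interface.
from dataclasses import dataclass
from typing import Protocol

@dataclass
class InterleavedOutput:
    """Alternating text segments and image references, in generation order."""
    segments: list[str]  # e.g., ["Day 1: ...", "<img: louvre.png>", "Day 2: ..."]

class JudgeModel(Protocol):
    def compare(self, instruction: str, a: InterleavedOutput,
                b: InterleavedOutput) -> str:
        """Return "A", "B", or "tie" for which output better follows the instruction."""
        ...

def pairwise_win_rate(judge: JudgeModel, instruction: str,
                      a_outputs: list[InterleavedOutput],
                      b_outputs: list[InterleavedOutput]) -> float:
    """Fraction of instances on which method A beats method B; ties count half."""
    score = 0.0
    for a, b in zip(a_outputs, b_outputs):
        verdict = judge.compare(instruction, a, b)
        score += 1.0 if verdict == "A" else 0.5 if verdict == "tie" else 0.0
    return score / len(a_outputs)
```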