🤖 AI Summary
Text-to-image models exhibit weak multimodal reasoning when generating knowledge-intensive images (e.g., charts, mind maps), which undermines both semantic accuracy and structural fidelity.
Method: We introduce “knowledge image generation” as a novel task and establish MMMG—the first expert-validated, cross-disciplinary benchmark, comprising 4,456 image-prompt pairs spanning 10 disciplines and 6 educational levels—where image semantics are uniformly represented as knowledge graphs (KGs). We propose the KG-driven MMMG-Score, an evaluation metric that combines graph edit distance between KGs with a visual-clarity assessment to quantify reasoning quality. Our baseline, FLUX-Reason, couples a reasoning-oriented large language model with diffusion-based generation.
Contribution/Results: We open-source FLUX-Reason, a baseline trained on 16,000 curated knowledge image-prompt pairs that achieves an MMMG-Score of 34.45. A comprehensive evaluation of 16 SOTA models reveals pervasive reasoning deficits (e.g., GPT-4o scores only 50.20), underscoring the need for interpretable, knowledge-grounded image generation.
📝 Abstract
In this paper, we introduce knowledge image generation as a new task, alongside the Massive Multi-Discipline Multi-Tier Knowledge-Image Generation Benchmark (MMMG) to probe the reasoning capability of image generation models. Knowledge images have been central to human civilization and to the mechanisms of human learning--a fact underscored by dual-coding theory and the picture-superiority effect. Generating such images is challenging, demanding multimodal reasoning that fuses world knowledge with pixel-level grounding into clear explanatory visuals. To enable comprehensive evaluation, MMMG offers 4,456 expert-validated (knowledge) image-prompt pairs spanning 10 disciplines, 6 educational levels, and diverse knowledge formats such as charts, diagrams, and mind maps. To eliminate confounding complexity during evaluation, we adopt a unified Knowledge Graph (KG) representation. Each KG explicitly delineates a target image's core entities and their dependencies. We further introduce MMMG-Score to evaluate generated knowledge images. This metric combines factual fidelity, measured by graph-edit distance between KGs, with visual clarity assessment. Comprehensive evaluations of 16 state-of-the-art text-to-image generation models expose serious reasoning deficits--low entity fidelity, weak relations, and clutter--with GPT-4o achieving an MMMG-Score of only 50.20, underscoring the benchmark's difficulty. To spur further progress, we release FLUX-Reason (MMMG-Score of 34.45), an effective and open baseline that combines a reasoning LLM with diffusion models and is trained on 16,000 curated knowledge image-prompt pairs.
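To make the metric concrete, here is a minimal sketch of an MMMG-Score-style computation. It is an illustration under stated assumptions, not the paper's implementation: each KG is represented as a set of (head, relation, tail) triples, the graph edit distance is simplified to the symmetric-difference size over triples, and the multiplicative combination with a visual-clarity term and the 0-100 scaling are assumptions.

```python
def kg_fidelity(ref_triples: set, gen_triples: set) -> float:
    """Factual-fidelity similarity in [0, 1].

    Each KG is a set of (head, relation, tail) triples; the edit distance is
    the symmetric-difference size (triples to delete plus triples to insert).
    This is a simplification of full graph edit distance, which also matches
    nodes and edges under relabeling costs.
    """
    if not ref_triples and not gen_triples:
        return 1.0
    edits = len(ref_triples ^ gen_triples)       # deletions + insertions
    worst = len(ref_triples) + len(gen_triples)  # delete all, insert all
    return 1.0 - edits / worst


def mmmg_score(ref_triples: set, gen_triples: set, clarity: float) -> float:
    """Combine factual fidelity with a visual-clarity score in [0, 1].

    The product form and 0-100 scaling are hypothetical choices for this
    sketch, not the paper's exact formula.
    """
    return 100.0 * kg_fidelity(ref_triples, gen_triples) * clarity


# Example: a generated diagram that captures one of two reference relations.
ref = {("water", "heated_to", "steam"), ("ice", "melts_to", "water")}
gen = {("water", "heated_to", "steam")}
print(kg_fidelity(ref, gen))          # 1 edit out of a worst case of 3
print(mmmg_score(ref, gen, 0.9))      # fidelity discounted by clarity
```

Penalizing both missing and spurious triples mirrors the deficits the evaluation surfaces (low entity fidelity and weak relations), while the clarity term captures the clutter failure mode.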