🤖 AI Summary
Current text-to-image (T2I) models frequently produce factual inaccuracies in knowledge-intensive domains such as science and history, yet existing benchmarks offer no tailored way to evaluate the factuality of knowledge-intensive concept generation.
Method: We introduce T2I-FactualBench, the largest fine-grained benchmark to date (by number of concepts and prompts) for evaluating the factuality of knowledge-intensive concept generation. It features a three-tiered task framework, ranging from basic memorization of individual knowledge concepts to more complex composition of multiple knowledge concepts, together with an automated factuality assessment paradigm based on multi-round visual question answering (VQA).
Contribution/Results: Systematic evaluation of 12 state-of-the-art T2I models reveals average factual error rates of 41.7%–68.3%, exposing critical weaknesses in commonsense and domain-specific knowledge generation. T2I-FactualBench establishes a new standard and methodological foundation for advancing factuality-aware T2I research.
📝 Abstract
Evaluating the quality of synthesized images remains a significant challenge in the development of text-to-image (T2I) generation. Most existing studies in this area primarily focus on evaluating text-image alignment, image quality, and object composition capabilities, with comparatively few studies addressing the evaluation of the factuality of T2I models, particularly when the concepts involved are knowledge-intensive. To address this gap, we present T2I-FactualBench, the largest benchmark to date in terms of the number of concepts and prompts, specifically designed to evaluate the factuality of knowledge-intensive concept generation. T2I-FactualBench consists of a three-tiered knowledge-intensive text-to-image generation framework, ranging from the basic memorization of individual knowledge concepts to the more complex composition of multiple knowledge concepts. We further introduce a multi-round visual question answering (VQA) based evaluation framework to assess factuality across the three tiers of knowledge-intensive text-to-image generation tasks. Experiments on T2I-FactualBench indicate that current state-of-the-art (SOTA) T2I models still leave significant room for improvement.
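To make the multi-round VQA-based evaluation concrete, the sketch below shows one plausible scoring loop: a generated image is probed with a sequence of fact-checking questions, and the factuality score is the fraction answered as expected. This is an illustration only; the paper's exact question-generation and scoring rules are not specified here, and `answer_question` is a hypothetical stand-in for a real vision-language model.

```python
from dataclasses import dataclass

@dataclass
class Round:
    """One VQA round: a question about the image and the expected answer."""
    question: str
    expected: str

def answer_question(image_id: str, question: str) -> str:
    # Placeholder VQA model. In practice this would call a vision-language
    # model on the generated image; here it returns canned answers so the
    # sketch is self-contained and runnable.
    canned = {
        ("img_01", "What animal is shown?"): "platypus",
        ("img_01", "Does it have a duck-like bill?"): "yes",
    }
    return canned.get((image_id, question), "unknown")

def factuality_score(image_id: str, rounds: list[Round]) -> float:
    """Ask each question in turn; score = fraction answered as expected."""
    correct = sum(
        answer_question(image_id, r.question).lower() == r.expected.lower()
        for r in rounds
    )
    return correct / len(rounds)

# Hypothetical checklist for a knowledge-intensive concept ("platypus").
rounds = [
    Round("What animal is shown?", "platypus"),
    Round("Does it have a duck-like bill?", "yes"),
    Round("Does it lay eggs?", "yes"),  # stub model answers "unknown" -> wrong
]
score = factuality_score("img_01", rounds)
print(f"factuality score: {score:.2f}")
```

In a real pipeline, each round's question could also condition on earlier answers (the "multi-round" aspect), and per-tier scores would be aggregated across many concepts and prompts.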