T2I-FactualBench: Benchmarking the Factuality of Text-to-Image Models with Knowledge-Intensive Concepts

📅 2024-12-05
🏛️ arXiv.org
📈 Citations: 4
Influential: 0
🤖 AI Summary
Current text-to-image (T2I) models lack factuality benchmarks tailored to knowledge-intensive concepts, leading to frequent factual inaccuracies in domains such as science and history. Method: We introduce T2I-FactualBench, the largest fine-grained benchmark to date for evaluating the factuality of knowledge-intensive concept generation. It comprises a three-tiered task framework, ranging from the memorization of single knowledge concepts to the composition of multiple concepts, together with a multi-round visual question answering (VQA) based framework for automated factuality assessment. Contribution/Results: Systematic evaluation of state-of-the-art T2I models reveals substantial factual errors in knowledge-intensive generation, establishing T2I-FactualBench as a methodological foundation for factuality-aware T2I research.

📝 Abstract
Evaluating the quality of synthesized images remains a significant challenge in the development of text-to-image (T2I) generation. Most existing studies in this area primarily focus on evaluating text-image alignment, image quality, and object composition capabilities, with comparatively fewer studies addressing the evaluation of the factuality of T2I models, particularly when the concepts involved are knowledge-intensive. To mitigate this gap, we present T2I-FactualBench in this work - the largest benchmark to date in terms of the number of concepts and prompts specifically designed to evaluate the factuality of knowledge-intensive concept generation. T2I-FactualBench consists of a three-tiered knowledge-intensive text-to-image generation framework, ranging from the basic memorization of individual knowledge concepts to the more complex composition of multiple knowledge concepts. We further introduce a multi-round visual question answering (VQA) based evaluation framework to assess the factuality of three-tiered knowledge-intensive text-to-image generation tasks. Experiments on T2I-FactualBench indicate that current state-of-the-art (SOTA) T2I models still leave significant room for improvement.
Problem

Research questions and friction points this paper is trying to address.

Benchmarking factuality of text-to-image models
Evaluating knowledge-intensive concept generation
Assessing multi-tiered knowledge composition accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Three-tiered knowledge-intensive T2I framework
Multi-round VQA-based evaluation framework
Largest benchmark for knowledge-intensive T2I
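The multi-round VQA-based evaluation above can be illustrated with a minimal sketch. This is a hypothetical toy, not the paper's actual implementation: the question templates, the `vqa_model` callable, and the averaging scheme are all assumptions made for illustration.

```python
# Hedged sketch of a multi-round VQA-based factuality check.
# `decompose_into_questions` and `vqa_model` are hypothetical
# stand-ins, not the benchmark's real components.

def decompose_into_questions(concept: str) -> list:
    # The real benchmark would derive questions from factual
    # attributes of the knowledge concept; hard-coded here.
    return [
        f"Does the image depict {concept}?",
        f"Are the defining visual attributes of {concept} correct?",
    ]

def multi_round_vqa_score(image, concepts, vqa_model, rounds=2):
    """Ask factual questions over several rounds and return the
    fraction answered 'yes' (a toy factuality score)."""
    answers = []
    for _ in range(rounds):
        for concept in concepts:
            for question in decompose_into_questions(concept):
                answers.append(vqa_model(image, question) == "yes")
    return sum(answers) / len(answers)

# Toy usage with a stub VQA model that always answers "yes".
score = multi_round_vqa_score(
    image=None,
    concepts=["the Eiffel Tower"],
    vqa_model=lambda img, q: "yes",
)
```

With the always-"yes" stub, every question passes and the score is 1.0; in practice `vqa_model` would wrap a real vision-language model, and disagreement across rounds would lower the score.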
👥 Authors
Ziwei Huang (Zhejiang University): Multimodal LLMs, AIGC
Wanggui He (Researcher, Alibaba Group): AI
Quanyu Long (Nanyang Technological University): TL, NLP
Yandi Wang (Zhejiang University, China)
Haoyuan Li (Alibaba Group, China)
Zhelun Yu (Alibaba Group, China)
Fangxun Shu (Bytedance): Multimodal
Long Chan (Alibaba Group, China)
Hao Jiang (Alibaba Group, China)
Leilei Gan (Zhejiang University): NLP, LLMs, Multimodal LLMs, AI+X
Fei Wu (Zhejiang University, China)