Counting Hallucinations in Diffusion Models

📅 2025-10-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the pervasive “counting hallucination” problem in diffusion probabilistic models (DPMs): the generation of structured objects (e.g., six-fingered hands) whose instance counts violate commonsense constraints. We introduce the first systematic framework for studying this issue. Specifically, we construct CountHalluSet, a benchmark comprising synthetic and real-world subsets (ToyShape, SimObject, RealHand), and propose a standardized evaluation protocol. We formally define counting hallucination and show that conventional image-quality metrics such as FID fail to capture it consistently. Through controlled ablation studies, we quantitatively analyze how solver type, ODE solver order, sampling steps, and initial noise affect hallucination rates, thereby characterizing the relationship between sampling strategies and factual consistency. Our contributions include: (1) the first rigorous conceptualization and empirical characterization of counting hallucinations in DPMs; (2) a reproducible, multi-scenario benchmark; and (3) new methodological foundations for fact-aware generative modeling and evaluation.

📝 Abstract
Diffusion probabilistic models (DPMs) have demonstrated remarkable progress in generative tasks, such as image and video synthesis. However, they still often produce hallucinated samples (hallucinations) that conflict with real-world knowledge, such as generating an implausible duplicate cup floating beside another cup. Despite their prevalence, the lack of feasible methodologies for systematically quantifying such hallucinations hinders progress in addressing this challenge and obscures potential pathways for designing next-generation generative models under factual constraints. In this work, we bridge this gap by focusing on a specific form of hallucination, which we term counting hallucination, referring to the generation of an incorrect number of instances or structured objects, such as a hand image with six fingers, despite such patterns being absent from the training data. To this end, we construct a dataset suite, CountHalluSet, with well-defined counting criteria, comprising ToyShape, SimObject, and RealHand. Using these datasets, we develop a standardized evaluation protocol for quantifying counting hallucinations, and systematically examine how different sampling conditions in DPMs, including solver type, ODE solver order, sampling steps, and initial noise, affect counting hallucination levels. Furthermore, we analyze their correlation with common evaluation metrics such as FID, revealing that this widely used image-quality metric fails to capture counting hallucinations consistently. This work aims to take the first step toward systematically quantifying hallucinations in diffusion models and to offer new insights into the investigation of hallucination phenomena in image generation.
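The paper's exact protocol is not reproduced on this page; as a rough illustration of the idea, the sketch below computes a counting-hallucination rate over generated samples, assuming some external per-image counter (e.g., a finger or object detector) is available. The names hallucination_rate and count_instances are placeholders for this sketch, not identifiers from the paper.

```python
# Minimal sketch of a counting-hallucination rate (illustrative; the paper's
# actual protocol may differ). Assumes an external per-image counter, such as
# a finger/object detector, passed in as `count_instances`.
from typing import Callable, Sequence

def hallucination_rate(
    images: Sequence,                          # generated samples
    count_instances: Callable[[object], int],  # hypothetical counter, e.g. a detector
    expected_count: int = 5,                   # e.g. five fingers per hand
) -> float:
    """Fraction of generated images whose instance count deviates from the expected count."""
    if not images:
        return 0.0
    violations = sum(1 for img in images if count_instances(img) != expected_count)
    return violations / len(images)
```

For example, on a hand subset such as RealHand one would pass a keypoint-based finger counter and expected_count=5; a higher rate indicates more frequent counting hallucinations.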
Problem

Research questions and friction points this paper is trying to address.

Systematically quantifying counting hallucinations in diffusion models
Developing evaluation protocols for detecting incorrect object-instance counts in generated images
Analyzing how sampling conditions affect counting-hallucination levels
Innovation

Methods, ideas, or system contributions that make the work stand out.

CountHalluSet, a dataset suite for studying counting hallucinations
A standardized evaluation protocol for quantifying counting hallucinations
An analysis of how sampling conditions affect hallucination levels (see the sketch below)
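As a rough illustration of the kind of ablation described above, the sketch below sweeps solver type, ODE solver order, and sampling steps, recording the hallucination rate alongside FID so the two can be compared. sample_fn, halluc_fn, and fid_fn are assumed callables supplied by the caller (not the paper's code), and the specific solver names and step counts are illustrative choices.

```python
# Illustrative ablation over DPM sampling conditions (not the paper's code).
# Assumptions: sample_fn draws images for the given settings, halluc_fn scores
# counting hallucinations, and fid_fn computes FID against a fixed reference set.
from itertools import product
from typing import Callable, Dict, List, Sequence

def ablate_sampling_conditions(
    sample_fn: Callable[..., Sequence],
    halluc_fn: Callable[[Sequence], float],
    fid_fn: Callable[[Sequence], float],
) -> List[Dict]:
    """Sweep solver type, ODE solver order, and step count; record both metrics."""
    solvers = ["ddim", "dpm_solver"]     # solver type (illustrative choices)
    orders = [1, 2, 3]                   # ODE solver order
    step_counts = [10, 25, 50, 100]      # number of sampling steps
    results = []
    for solver, order, steps in product(solvers, orders, step_counts):
        samples = sample_fn(solver=solver, order=order, steps=steps, seed=0)
        results.append({
            "solver": solver, "order": order, "steps": steps,
            "halluc_rate": halluc_fn(samples),
            "fid": fid_fn(samples),
        })
    return results
```

Comparing the halluc_rate and fid columns across configurations is one way to check whether FID tracks counting hallucinations; the paper reports that it does not do so consistently.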

👥 Authors
Shuai Fu (University of Adelaide)
Jian Zhou (University of Adelaide)
Qi Chen (University of Adelaide)
Jing Huang (Meituan)
Huy Anh Nguyen (University of Adelaide)
Xiaohan Liu (The University of Tokyo; Computer Vision, Geometry Processing, Computer Graphics)
Zhixiong Zeng (Meituan)
Lin Ma (Meituan)
Quanshi Zhang (Shanghai Jiao Tong University; Interpretable Machine Learning)
Qi Wu (University of Adelaide)