HalDec-Bench: Benchmarking Hallucination Detectors in Image Captioning

📅 2026-03-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current vision-language models (VLMs) are prone to hallucination in image captioning, yet systematic benchmarks for evaluating hallucination detection capabilities — particularly generalization across captioning models and hallucination types — are lacking. To address this gap, this work proposes HalDec-Bench, the first fine-grained, multi-difficulty benchmark for hallucination detection, integrating VLM-generated captions with human-annotated hallucination labels, hallucination-type categorizations, and phrase-level alignment annotations. The benchmark not only effectively distinguishes the performance of different detectors but also reveals a positional bias: detectors tend to judge sentences near the beginning of a generated response as correct, regardless of their actual correctness. It further shows that state-of-the-art VLMs can serve as effective filters to substantially improve the quality of training data.

📝 Abstract
Hallucination detection in captions (HalDec) assesses a vision-language model's ability to correctly align image content with text by identifying errors in captions that misrepresent the image. Beyond evaluation, effective hallucination detection is also essential for curating high-quality image-caption pairs used to train VLMs. However, the generalizability of VLMs as hallucination detectors across different captioning models and hallucination types remains unclear due to the lack of a comprehensive benchmark. In this work, we introduce HalDec-Bench, a benchmark designed to evaluate hallucination detectors in a principled and interpretable manner. HalDec-Bench contains captions generated by diverse VLMs together with human annotations indicating the presence of hallucinations, detailed hallucination-type categories, and segment-level labels. The benchmark provides tasks with a wide range of difficulty levels and reveals performance differences across models that are not visible in existing multimodal reasoning or alignment benchmarks. Our analysis further uncovers two key findings. First, detectors tend to recognize sentences appearing at the beginning of a response as correct, regardless of their actual correctness. Second, our experiments suggest that dataset noise can be substantially reduced by using strong VLMs as filters while employing recent VLMs as caption generators. Our project page is available at https://dahlian00.github.io/HalDec-Bench-Page/.
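The abstract's second finding — that dataset noise can be reduced by using a strong VLM as a filter over captions produced by a generator VLM — can be sketched roughly as follows. This is an illustrative pipeline only, not the paper's implementation: `detect_hallucination` is a hypothetical stand-in for a real VLM-based detector call, and the string check inside it exists purely so the example runs.

```python
# Sketch of the "strong VLM as filter" curation idea: keep only the
# image-caption pairs that a detector VLM judges hallucination-free.

def detect_hallucination(image_id: str, caption: str) -> bool:
    """Hypothetical detector. A real system would send the image and
    caption to a strong VLM and parse its judgment; here we flag a
    known-wrong phrase purely for illustration."""
    return "purple elephant" in caption

def filter_pairs(pairs):
    """Return the (image_id, caption) pairs the detector accepts."""
    return [(img, cap) for img, cap in pairs
            if not detect_hallucination(img, cap)]

pairs = [
    ("img1", "a dog running on grass"),
    ("img2", "a purple elephant flying over the park"),  # hallucinated
]
print(filter_pairs(pairs))  # → [('img1', 'a dog running on grass')]
```

The design point is that generation and filtering use different models: a recent VLM generates candidate captions, while a stronger VLM acts as the gatekeeper, so detector quality directly bounds the noise level of the curated training set.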
Problem

Research questions and friction points this paper is trying to address.

hallucination detection
image captioning
vision-language models
benchmark
multimodal alignment
Innovation

Methods, ideas, or system contributions that make the work stand out.

hallucination detection
vision-language models
benchmark
image captioning
dataset curation