Toward Robust Hyper-Detailed Image Captioning: A Multiagent Approach and Dual Evaluation Metrics for Factuality and Coverage

📅 2024-12-20
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
🤖 AI Summary
Current multimodal large language models (MLLMs) hallucinate when generating hyper-detailed image descriptions, and conventional evaluation metrics (e.g., BLEU, CIDEr) inadequately assess factual consistency and coverage. To address this, we propose a multiagent correction framework in which an MLLM and an LLM collaborate: the MLLM generates an initial description, and the LLM refines it iteratively, guided by a vision-language-alignment-based factual verification module and a coverage quantification model. Methodologically, we introduce a decoupled two-dimensional evaluation paradigm that measures factuality and coverage separately, and we release the first dedicated benchmark for fine-grained image description. Experiments show substantial gains in factual accuracy, even for captions generated by GPT-4V; the new metrics correlate significantly better with human judgments than traditional ones; and the approach attains state-of-the-art performance on the proposed benchmark.
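The generate-verify-refine loop described in the summary can be sketched in a few lines. This is an illustrative toy, not the paper's code: `correct_caption` and `is_supported` are hypothetical names, the caption is pre-decomposed into atomic claims, and a simple callable stands in for the vision-grounded verification module.

```python
def correct_caption(draft_claims, is_supported, max_rounds=3):
    """Iteratively refine a caption, represented as a list of atomic claims.

    draft_claims: claims extracted from the MLLM's initial description.
    is_supported: callable standing in for image-grounded verification.
    """
    claims = list(draft_claims)
    for _ in range(max_rounds):
        flagged = [c for c in claims if not is_supported(c)]
        if not flagged:
            break  # nothing left to correct
        # Stand-in for the LLM editor: drop claims the verifier rejects
        # (the real system could also rewrite them instead).
        claims = [c for c in claims if c not in flagged]
    return claims

# Toy example: the verifier only accepts claims about objects it can ground.
supported = {"a red bicycle", "a wooden bench"}
print(correct_caption(
    ["a red bicycle", "a second cyclist", "a wooden bench"],
    lambda c: c in supported,
))
# → ['a red bicycle', 'a wooden bench']
```

In the paper's setting the editor is an LLM and the verifier is vision-grounded; the loop structure (draft, flag, refine, repeat) is the part this sketch illustrates.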

📝 Abstract
Multimodal large language models (MLLMs) excel at generating highly detailed captions but often produce hallucinations. Our analysis reveals that existing hallucination detection methods struggle with detailed captions. We attribute this to the increasing reliance of MLLMs on their generated text, rather than the input image, as the sequence length grows. To address this issue, we propose a multiagent approach that leverages LLM-MLLM collaboration to correct given captions. Additionally, we introduce an evaluation framework and a benchmark dataset to facilitate the systematic analysis of detailed captions. Our experiments demonstrate that our proposed evaluation method better aligns with human judgments of factuality than existing metrics and that existing approaches to improve the MLLM factuality may fall short in hyper-detailed image captioning tasks. In contrast, our proposed method significantly enhances the factual accuracy of captions, even improving those generated by GPT-4V. Finally, we highlight a limitation of VQA-centric benchmarking by demonstrating that an MLLM's performance on VQA benchmarks may not correlate with its ability to generate detailed image captions.
Problem

Research questions and friction points this paper is trying to address.

Addressing hallucinations in hyper-detailed image captions generated by MLLMs
Proposing multiagent LLM-MLLM collaboration for caption correction
Introducing evaluation metrics for factuality and coverage in captions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multiagent LLM-MLLM collaboration for caption correction
Dual evaluation metrics for factuality and coverage
Benchmark dataset for detailed caption analysis
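The dual metrics above decouple precision-like factuality from recall-like coverage. A minimal sketch, assuming captions and references have already been decomposed into sets of atomic claims (a step the paper delegates to models; the function names here are illustrative, not the paper's API):

```python
def factuality(caption_claims, supported_claims):
    """Fraction of the caption's claims that are grounded in the image."""
    if not caption_claims:
        return 1.0  # an empty caption asserts nothing false
    return sum(c in supported_claims for c in caption_claims) / len(caption_claims)

def coverage(caption_claims, reference_claims):
    """Fraction of reference details that the caption mentions."""
    if not reference_claims:
        return 1.0
    return sum(r in caption_claims for r in reference_claims) / len(reference_claims)

caption = {"red bicycle", "wooden bench", "second cyclist"}  # one hallucination
reference = {"red bicycle", "wooden bench", "cloudy sky"}    # one missed detail
print(factuality(caption, reference))  # → 2/3: one claim is unsupported
print(coverage(caption, reference))    # → 2/3: one detail is not covered
```

Separating the two scores makes the trade-off visible: a terse caption can score high on factuality while missing most details, and a verbose one can cover everything while hallucinating, which a single aggregate metric would hide.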