🤖 AI Summary
Current evaluation of LVLM-generated image captions lacks fine-grained, interpretable benchmarks. To address this, we propose CompreCap—the first decomposable benchmark for comprehensive image captioning—introducing directed scene graphs to explicitly model the compositional structure of images, integrated with semantic segmentation masks and object-attribute annotations. This enables automated, three-dimensional evaluation: object coverage, attribute accuracy, and relational plausibility. Leveraging a human-in-the-loop, fine-grained annotation protocol, we construct a high-quality dataset. Extensive experiments on CompreCap demonstrate strong agreement between our automatic metrics and human judgments (Pearson *r* > 0.92). CompreCap is both interpretable—by exposing evaluation signals at the object, attribute, and relation levels—and extensible—supporting modular integration of new components or evaluation dimensions. It establishes a novel paradigm for assessing the fine-grained visual understanding capabilities of LVLMs.
📝 Abstract
Generating detailed captions that comprehend text-rich visual content in images has received growing attention in the study of Large Vision-Language Models (LVLMs). However, few studies have developed benchmarks specifically tailored to detailed captions that measure their accuracy and comprehensiveness. In this paper, we introduce a detailed caption benchmark, termed CompreCap, that evaluates visual content from a directed scene-graph view. Concretely, we first manually segment the image into semantically meaningful regions (i.e., semantic segmentation masks) according to a common-object vocabulary, and annotate the attributes of the objects within those regions. We then annotate directional relation labels between these objects to compose a directed scene graph that encodes the rich compositional information of the image. Based on this directed scene graph, we develop a pipeline to assess detailed captions generated by LVLMs on multiple levels, including object-level coverage, the accuracy of attribute descriptions, and the scores of key relationships. Experimental results on the CompreCap dataset confirm that our evaluation method aligns closely with human evaluation scores across LVLMs.
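The multi-level evaluation described above can be illustrated with a minimal sketch. The data layout (objects with attribute lists, directed relation triples) and the string-matching rules below are illustrative assumptions for exposition, not the paper's actual matching pipeline:

```python
# Illustrative sketch of CompreCap-style multi-level caption scoring.
# ASSUMPTION: the scene-graph layout and substring matching here are
# simplifications, not the benchmark's real implementation.
from dataclasses import dataclass


@dataclass
class SceneGraph:
    objects: dict[str, list[str]]          # object name -> annotated attributes
    relations: list[tuple[str, str, str]]  # directed (subject, predicate, object)


def score_caption(caption: str, graph: SceneGraph):
    text = caption.lower()

    # Object-level coverage: fraction of annotated objects the caption mentions.
    covered = [o for o in graph.objects if o in text]
    object_coverage = len(covered) / len(graph.objects)

    # Attribute accuracy: fraction of annotated attributes mentioned,
    # counted only for objects the caption actually covers.
    attr_hits = attr_total = 0
    for obj in covered:
        for attr in graph.objects[obj]:
            attr_total += 1
            attr_hits += attr in text
    attribute_accuracy = attr_hits / attr_total if attr_total else 0.0

    # Relation score: a directed triple counts as matched if its subject,
    # predicate, and object all appear in the caption (a crude proxy).
    rel_hits = sum(all(t in text for t in triple) for triple in graph.relations)
    relation_score = rel_hits / len(graph.relations) if graph.relations else 0.0

    return object_coverage, attribute_accuracy, relation_score


graph = SceneGraph(
    objects={"dog": ["brown"], "frisbee": ["red"], "grass": []},
    relations=[("dog", "catches", "frisbee")],
)
scores = score_caption("A brown dog catches a red frisbee.", graph)
print(scores)  # coverage 2/3 (grass unmentioned), attributes 1.0, relations 1.0
```

In practice the paper's pipeline matches caption phrases against segmentation-mask regions and attribute/relation annotations rather than raw substrings, but the aggregation into per-level scores follows the same shape.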