Reframing Image Difference Captioning with BLIP2IDC and Synthetic Augmentation

📅 2024-12-20
🏛️ IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)
📈 Citations: 1
Influential: 0
🤖 AI Summary
The Image Difference Captioning (IDC) task suffers from limited performance on real-world images, primarily due to scarce annotated data and the difficulty of modeling fine-grained semantic discrepancies. To address these challenges, we propose BLIP2IDC—a lightweight, single-stream adapter framework built upon BLIP-2 that enables parameter-efficient fine-tuning and joint dual-image encoding. Furthermore, we introduce a controllable image-editing-driven synthetic augmentation paradigm to construct Syned, a challenging new benchmark explicitly designed for complex real-world differences. Experimental results demonstrate that BLIP2IDC significantly outperforms state-of-the-art two-stream methods and achieves superior performance on real IDC benchmarks. Syned establishes the first evaluation standard targeting intricate, realistic image disparities, and the synthetically augmented data substantially enhances the generalization capability of existing models.

📝 Abstract
The rise in quality of generative models over the past years has enabled the generation of edited variations of images at a large scale. To counter the harmful effects of such technology, the Image Difference Captioning (IDC) task aims to describe the differences between two images. While this task is handled successfully for simple 3D-rendered images, it struggles on real-world images. The reason is twofold: training-data scarcity, and the difficulty of capturing fine-grained differences between complex images. To address those issues, we propose in this paper a simple yet effective framework to both adapt existing image captioning models to the IDC task and augment IDC datasets. We introduce BLIP2IDC, an adaptation of BLIP2 to the IDC task at low computational cost, and show that it outperforms two-stream approaches by a significant margin on real-world IDC datasets. We also propose to use synthetic augmentation to improve the performance of IDC models in an agnostic fashion. We show that our synthetic augmentation strategy provides high-quality data, leading to a challenging new dataset well-suited for IDC, named Syned. The code, weights, and dataset are available at https://github.com/gautierevn/BLIP2IDC.
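The single-stream (joint) encoding idea from the abstract can be sketched as follows. This is an illustrative assumption, not the paper's exact implementation: it simply stacks the image pair into one input so a single vision encoder sees both images at once, which is the core contrast with two-stream approaches.

```python
import numpy as np

def joint_idc_input(img_before: np.ndarray, img_after: np.ndarray) -> np.ndarray:
    """Stack an image pair into a single input so one vision encoder
    processes both images jointly (single-stream encoding).

    Assumption: vertical concatenation along the height axis; the exact
    fusion scheme used by BLIP2IDC may differ.
    """
    if img_before.shape != img_after.shape:
        raise ValueError("both images must share the same shape")
    return np.concatenate([img_before, img_after], axis=0)

# Example: two 224x224 RGB images become one 448x224 joint input
# that could be fed to a (frozen or adapter-tuned) vision encoder.
pair = joint_idc_input(np.zeros((224, 224, 3)), np.ones((224, 224, 3)))
```

In a two-stream setup each image would be encoded separately and the features fused later; the joint input above lets the encoder attend across both images from the first layer, which is what makes fine-grained differences easier to localize.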
Problem

Research questions and friction points this paper is trying to address.

Describing fine-grained differences between real-world images
Addressing training data scarcity in image difference captioning
Adapting image captioning models for difference detection tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adapted BLIP2 model for image difference captioning
Used synthetic augmentation to address data scarcity
Created the challenging Syned dataset for IDC evaluation
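The synthetic-augmentation idea above can be sketched as generating (original, edited, caption) triples. In the paper this relies on a controllable image-editing model; here a hypothetical programmatic edit (painting a red square) stands in for the generative editor so the sketch stays self-contained and runnable.

```python
import numpy as np

def make_synthetic_idc_sample(img: np.ndarray, rng: np.random.Generator):
    """Create one synthetic IDC training triple (original, edited, caption).

    Assumption: a real pipeline would apply a text-conditioned
    image-editing model and reuse the edit instruction as the difference
    caption; the hard-coded edit below is only an illustrative stand-in.
    """
    edited = img.copy()
    h, w = img.shape[:2]
    y = int(rng.integers(0, h - 32))
    x = int(rng.integers(0, w - 32))
    edited[y:y + 32, x:x + 32] = [255, 0, 0]  # the injected difference
    caption = "a red square was added to the image"
    return img, edited, caption

rng = np.random.default_rng(0)
orig, edited, caption = make_synthetic_idc_sample(
    np.zeros((224, 224, 3), dtype=np.uint8), rng)
```

Because the edit instruction doubles as the ground-truth caption, such a pipeline yields aligned difference annotations at scale without manual labeling, which is how the augmentation addresses the data-scarcity problem.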
Gautier Evennou
IMATAG, France
Antoine Chaffin
LightOn
Vivien Chappelier
IMATAG, France
Ewa Kijak
IRISA, CNRS, France