OneDiff: A Generalist Model for Image Difference Captioning

📅 2024-07-08
🏛️ Asian Conference on Computer Vision
📈 Citations: 2
Influential: 0
🤖 AI Summary
This work addresses the limitations of existing image difference captioning (IDC) methods—namely, their reliance on domain-specific expert models and poor generalization—by proposing the first general-purpose IDC framework. Methodologically, it introduces a siamese image encoder coupled with a novel Visual Delta Module to explicitly model fine-grained inter-image differences; additionally, it employs Coupled Sample Training and multi-task learning to enhance semantic alignment and robustness. Contributions include: (1) the construction of DiffCap, the first hybrid real-synthetic benchmark dataset for IDC; (2) state-of-the-art performance across multiple benchmarks—including Spot-the-Diff and Image-Editing-Request—with average CIDEr improvements of up to 97%; and (3) significantly improved cross-domain generalization and scene adaptability, establishing a scalable, general paradigm for IDC.

📝 Abstract
In computer vision, Image Difference Captioning (IDC) is crucial for accurately describing variations between closely related images. Traditional IDC methods often rely on specialist models, which restrict their applicability across varied contexts. This paper introduces the OneDiff model, a novel generalist approach that utilizes a robust vision-language model architecture, integrating a siamese image encoder with a Visual Delta Module. This configuration allows for the precise detection and articulation of fine-grained differences between image pairs. OneDiff is trained through a dual-phase strategy, encompassing Coupled Sample Training and multi-task learning across a diverse array of data types, supported by our newly developed DiffCap Dataset. This dataset merges real-world and synthetic data, enhancing the training process and bolstering the model's robustness. Extensive testing on diverse IDC benchmarks, such as Spot-the-Diff, Image-Editing-Request, and Birds-to-Words, shows that OneDiff consistently outperforms existing state-of-the-art models in accuracy and adaptability, achieving improvements of up to 97% in CIDEr on average. By setting a new benchmark in IDC, OneDiff paves the way for more versatile and effective applications in detecting and describing visual differences. The code, models, and data will be made publicly available.
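The core idea in the abstract—a shared ("siamese") encoder embedding both images identically, followed by a module that exposes their differences to a captioner—can be illustrated with a minimal sketch. This is a hypothetical toy illustration, not the authors' implementation: the encoder here is a random linear map, and `visual_delta` is a simple stand-in for the paper's Visual Delta Module.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shared ("siamese") encoder weights: the SAME matrix is
# applied to both images, so their features live in one common space.
W = rng.standard_normal((16, 64))  # feature_dim x input_dim

def encode(image_vec):
    """Shared encoder applied identically to each image of the pair."""
    return np.tanh(W @ image_vec)

def visual_delta(feat_a, feat_b):
    """Toy stand-in for a Visual Delta Module: concatenate the two
    embeddings with their element-wise difference, so a downstream
    caption decoder can attend to what changed."""
    return np.concatenate([feat_a, feat_b, feat_b - feat_a])

img_before = rng.standard_normal(64)
img_after = img_before.copy()
img_after[:8] += 1.0  # simulate a small localized edit

delta_repr = visual_delta(encode(img_before), encode(img_after))
print(delta_repr.shape)  # (48,): [feat_before | feat_after | difference]
```

Because the encoder is shared, an identical image pair yields an all-zero difference component, which is what lets the difference channel isolate genuine changes rather than encoder mismatch.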
Problem

Research questions and friction points this paper is trying to address.

Existing IDC methods depend on domain-specific specialist models that generalize poorly across varied contexts
Fine-grained differences between closely related image pairs are difficult to detect and describe precisely
No general-purpose IDC model matches state-of-the-art accuracy and adaptability across benchmarks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generalist vision-language architecture combining a siamese image encoder with a Visual Delta Module
Dual-phase training (Coupled Sample Training plus multi-task learning) on the hybrid real-synthetic DiffCap dataset
State-of-the-art accuracy and adaptability across multiple IDC benchmarks
Erdong Hu
Institute of Automation, Chinese Academy of Sciences
Longteng Guo
Institute of Automation, Chinese Academy of Sciences
Tongtian Yue
Institute of Automation, Chinese Academy of Sciences
Multimodal Pretraining; Visual-Language
Zijia Zhao
Institute of Automation, Chinese Academy of Sciences (CASIA)
Multimodal learning
Shuning Xue
Institute of Automation, Chinese Academy of Sciences; School of Artificial Intelligence, University of Chinese Academy of Sciences
Jing Liu
Institute of Automation, Chinese Academy of Sciences; School of Artificial Intelligence, University of Chinese Academy of Sciences