Image Difference Grounding with Natural Language

📅 2025-04-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing visual grounding (VG) methods are limited to single-image understanding, while image difference understanding (IDU) approaches lack text guidance or operate at insufficient granularity, hindering fine-grained cross-image difference perception required in real-world applications such as surveillance. To address this gap, we formally introduce the Image Difference Grounding (IDG) task: localizing semantically relevant visual change regions between an image pair guided by natural language instructions. We present DiffGround—the first high-quality, human-annotated IDG benchmark—and propose DiffTracker, a dedicated model that enables cross-image–language collaborative grounding via feature differentiation enhancement, shared-feature suppression, and multimodal alignment. Extensive experiments demonstrate that DiffTracker significantly outperforms general-purpose VG and IDU baselines. Both the DiffGround dataset and the DiffTracker code will be publicly released to advance research in multi-image vision–language understanding.

📝 Abstract
Visual grounding (VG) typically focuses on locating regions of interest within an image using natural language, and most existing VG methods are limited to single-image interpretation. This limits their applicability in real-world scenarios such as automatic surveillance, where detecting subtle but meaningful visual differences across multiple images is crucial. Moreover, previous work on image difference understanding (IDU) has either focused on detecting all change regions without cross-modal text guidance, or on providing only coarse-grained descriptions of differences. To push towards finer-grained vision-language perception, we therefore propose Image Difference Grounding (IDG), a task designed to precisely localize visual differences based on user instructions. We introduce DiffGround, a large-scale, high-quality dataset for IDG containing image pairs with diverse visual variations, paired with instructions querying fine-grained differences. In addition, we present a baseline model for IDG, DiffTracker, which effectively integrates feature differential enhancement and common suppression to precisely locate differences. Experiments on DiffGround highlight the importance of our IDG dataset in enabling finer-grained IDU. To foster future research, both the DiffGround data and the DiffTracker model will be publicly released.
Problem

Research questions and friction points this paper is trying to address.

Localize visual differences using natural language instructions
Detect fine-grained differences across multiple images
Improve vision-language perception for real-world applications
Innovation

Methods, ideas, or system contributions that make the work stand out.

Proposes Image Difference Grounding (IDG) task
Introduces DiffGround dataset for fine-grained IDU
Develops DiffTracker model with differential enhancement
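The "differential enhancement and common suppression" idea can be illustrated with a minimal sketch. Everything below is an illustrative assumption, not the paper's actual architecture: the function name, the element-wise-minimum estimate of shared content, the 0.5 suppression weight, and the dot-product text alignment are all stand-ins for DiffTracker's learned modules.

```python
import numpy as np

def difference_ground(feat_a, feat_b, text_feat):
    """Illustrative sketch (not the paper's method): enhance differing
    features, suppress shared ones, then score each spatial location
    against a text-query embedding."""
    # Element-wise difference highlights regions that changed between images.
    diff = np.abs(feat_a - feat_b)            # (H, W, C)
    # Element-wise minimum crudely approximates content shared by both images.
    common = np.minimum(feat_a, feat_b)       # (H, W, C)
    # Keep the differential signal, damp the common signal (weight is arbitrary).
    enhanced = diff - 0.5 * common
    # Align with the language query via per-location dot-product similarity.
    return enhanced @ text_feat               # (H, W)

# Toy usage: two 4x4 feature maps with 8 channels, one injected difference.
rng = np.random.default_rng(0)
a = rng.random((4, 4, 8))
b = a.copy()
b[1, 2] += 1.0                                # localized "change" at (1, 2)
t = rng.random(8)
s = difference_ground(a, b, t)
print(np.unravel_index(np.argmax(s), s.shape))  # highest score at the changed cell
```

In a trained model these hand-written operations would be replaced by learned cross-image attention and vision-language alignment; the sketch only conveys why subtracting shared content makes text-guided difference localization easier.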
👥 Authors

Wenxuan Wang
Institute of Automation, Chinese Academy of Sciences; School of Artificial Intelligence, University of Chinese Academy of Sciences; Beijing Academy of Artificial Intelligence

Zijia Zhao
Institute of Automation, Chinese Academy of Sciences (CASIA)
Multimodal learning

Yisi Zhang
University of Science and Technology Beijing

Yepeng Tang
Beijing Jiaotong University
Video LLMs; Video Understanding

Erdong Hu
Institute of Automation, Chinese Academy of Sciences; School of Artificial Intelligence, University of Chinese Academy of Sciences

Xinlong Wang
Beijing Academy of Artificial Intelligence

Jing Liu
Institute of Automation, Chinese Academy of Sciences; School of Artificial Intelligence, University of Chinese Academy of Sciences