🤖 AI Summary
Existing visual grounding (VG) methods are limited to single-image understanding, while image difference understanding (IDU) approaches lack text guidance or operate at insufficient granularity, hindering fine-grained cross-image difference perception required in real-world applications such as surveillance. To address this gap, we formally introduce the Image Difference Grounding (IDG) task: localizing semantically relevant visual change regions between an image pair guided by natural language instructions. We present DiffGround—the first high-quality, human-annotated IDG benchmark—and propose DiffTracker, a dedicated model that enables cross-image–language collaborative grounding via feature differentiation enhancement, shared-feature suppression, and multimodal alignment. Extensive experiments demonstrate that DiffTracker significantly outperforms general-purpose VG and IDU baselines. Both the DiffGround dataset and the DiffTracker code will be publicly released to advance research in multi-image vision–language understanding.
📝 Abstract
Visual grounding (VG) typically focuses on locating regions of interest within an image using natural language, and most existing VG methods are limited to single-image interpretation. This limits their applicability in real-world scenarios such as automatic surveillance, where detecting subtle but meaningful visual differences across multiple images is crucial. Moreover, previous work on image difference understanding (IDU) has either focused on detecting all change regions without cross-modal text guidance or on providing only coarse-grained descriptions of differences. To push toward finer-grained vision-language perception, we therefore propose Image Difference Grounding (IDG), a task designed to precisely localize visual differences based on user instructions. We introduce DiffGround, a large-scale, high-quality dataset for IDG containing image pairs with diverse visual variations and instructions that query fine-grained differences. We further present DiffTracker, a baseline model for IDG that integrates feature differential enhancement and common-feature suppression to precisely locate differences. Experiments on DiffGround highlight the importance of the dataset in enabling finer-grained IDU. To foster future research, both the DiffGround data and the DiffTracker model will be publicly released.
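The abstract describes DiffTracker only at a high level, so the following is purely an illustrative sketch of what "feature differential enhancement and common suppression" could look like on paired feature maps, not the paper's actual architecture. The function name, the cosine-similarity weighting, and the toy feature shapes are all assumptions for illustration.

```python
import numpy as np

def difference_grounding_map(f1, f2, eps=1e-8):
    """Toy sketch (not the paper's method): amplify per-location feature
    differences and suppress content shared by both images.

    f1, f2: (H, W, C) feature maps extracted from an image pair.
    Returns a (H, W) change-saliency map and the suppression weights.
    """
    # Cosine similarity at each spatial location (high = shared content).
    num = (f1 * f2).sum(axis=-1)
    denom = np.linalg.norm(f1, axis=-1) * np.linalg.norm(f2, axis=-1) + eps
    sim = num / denom

    # Common suppression: weights near 0 where the two images agree.
    diff_weight = 1.0 - sim

    # Differential enhancement: weight the absolute feature difference
    # by how dissimilar the two locations are.
    enhanced = np.abs(f1 - f2) * diff_weight[..., None]

    # Collapse channels into a per-location change-saliency score.
    saliency = enhanced.sum(axis=-1)
    return saliency, diff_weight

# Tiny demo: two 2x2 feature maps that differ only at location (0, 0).
f1 = np.ones((2, 2, 4))
f2 = np.ones((2, 2, 4))
f2[0, 0] = -1.0
saliency, w = difference_grounding_map(f1, f2)
```

In this toy run, the unchanged locations receive a suppression weight near 0 and zero saliency, while the changed location at (0, 0) is amplified; a real model would learn such weighting from data and condition it on the text instruction.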