Weakly-Supervised Image Forgery Localization via Vision-Language Collaborative Reasoning Framework

📅 2025-08-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Weakly supervised image forgery localization (WSIFL) suffers from limited localization accuracy due to reliance solely on image-level labels and the absence of external semantic guidance. Method: This paper proposes a vision–language collaborative reasoning framework that leverages a pre-trained multimodal model to extract fine-grained semantic knowledge. A contrastive patch consistency module is designed to achieve semantic alignment and feature clustering of tampered regions, while an adaptive reasoning network with dual prediction heads integrates joint vision–language supervision. Contribution/Results: Unlike existing weakly supervised approaches, the method requires no pixel-level annotations. It achieves state-of-the-art localization accuracy across multiple benchmark datasets, demonstrating significant improvements over prior work.

📝 Abstract
Image forgery localization aims to precisely identify tampered regions within images, but it commonly depends on costly pixel-level annotations. To alleviate this annotation burden, weakly supervised image forgery localization (WSIFL) has emerged, yet existing methods still achieve limited localization performance as they mainly exploit intra-image consistency clues and lack external semantic guidance to compensate for weak supervision. In this paper, we propose ViLaCo, a vision-language collaborative reasoning framework that introduces auxiliary semantic supervision distilled from pre-trained vision-language models (VLMs), enabling accurate pixel-level localization using only image-level labels. Specifically, ViLaCo first incorporates semantic knowledge through a vision-language feature modeling network, which jointly extracts textual and visual priors using pre-trained VLMs. Next, an adaptive vision-language reasoning network aligns textual semantics and visual features through mutual interactions, producing semantically aligned representations. Subsequently, these representations are passed into dual prediction heads, where the coarse head performs image-level classification and the fine head generates pixel-level localization masks, thereby bridging the gap between weak supervision and fine-grained localization. Moreover, a contrastive patch consistency module is introduced to cluster tampered features while separating authentic ones, facilitating more reliable forgery discrimination. Extensive experiments on multiple public datasets demonstrate that ViLaCo substantially outperforms existing WSIFL methods, achieving state-of-the-art performance in both detection and localization accuracy.
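The abstract's dual-prediction-head idea can be illustrated with a minimal sketch: per-patch tamper scores feed a fine head that thresholds them into a localization mask, while a coarse head pools the same evidence into an image-level score that image-level labels can supervise directly. All numbers and pooling choices below are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical per-patch tamper probabilities for one image (made-up values).
patch_scores = [0.1, 0.05, 0.92, 0.88, 0.2]

# Fine head output: threshold each patch score -> patch-level localization mask.
mask = [int(p > 0.5) for p in patch_scores]

# Coarse head output: pool patch evidence (max-pooling assumed here) into an
# image-level forgery score, which only needs an image-level label to supervise.
image_score = max(patch_scores)
image_label = int(image_score > 0.5)
```

Because the coarse head is differentiable through the same patch scores the fine head uses, image-level supervision can shape the patch-level mask, which is the bridge between weak labels and fine-grained localization the abstract describes.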
Problem

Research questions and friction points this paper is trying to address.

Localize image forgeries with weak supervision
Enhance localization using vision-language collaboration
Bridge gap between weak supervision and fine-grained localization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses vision-language feature modeling network
Adaptive vision-language reasoning network aligns semantics
Contrastive patch consistency module enhances discrimination
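The contrastive patch consistency module is described as clustering tampered features while separating authentic ones. A minimal pure-Python sketch of one plausible form is a supervised-contrastive-style objective over patch features; the function name, temperature value, and exact loss form here are assumptions, not the paper's definition.

```python
import math

def cosine(u, v):
    # Cosine similarity between two feature vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def patch_consistency_loss(feats, labels, temperature=0.1):
    """Supervised-contrastive-style loss (hypothetical form): for each patch,
    pull same-label patches (tampered vs. authentic) together and push
    different-label patches apart in feature space."""
    n = len(feats)
    total, count = 0.0, 0
    for i in range(n):
        # Temperature-scaled similarities to every other patch.
        sims = [cosine(feats[i], feats[j]) / temperature for j in range(n) if j != i]
        others = [labels[j] for j in range(n) if j != i]
        denom = sum(math.exp(s) for s in sims)
        pos = [s for s, lab in zip(sims, others) if lab == labels[i]]
        if not pos:
            continue
        # -log softmax over positives, averaged.
        total += -sum(s - math.log(denom) for s in pos) / len(pos)
        count += 1
    return total / max(count, 1)
```

With well-clustered features and matching labels the loss is low; shuffling the labels so positives are dissimilar drives it up, which is the discrimination behavior the module is meant to encourage.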
Ziqi Sheng
School of Computer Science and Engineering, MoE Key Laboratory of Information Technology, Guangdong Province Key Laboratory of Information Security Technology, Sun Yat-sen University
Junyan Wu
Ph.D. student from School of Computer Science and Engineering, Sun Yat-sen University
multimedia forensics and security
Wei Lu
School of Computer Science and Engineering, MoE Key Laboratory of Information Technology, Guangdong Province Key Laboratory of Information Security Technology, Sun Yat-sen University
Jiantao Zhou
Professor, Department of Computer and Information Science, University of Macau
Information Forensics and Security · Multimedia Signal Processing · Machine Learning