Bridging Knowledge Gap Between Image Inpainting and Large-Area Visible Watermark Removal

πŸ“… 2025-04-07
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
Existing large-area visible watermark removal methods suffer from poor background restoration and excessive reliance on high-precision watermark masks. To address these issues, this paper proposes a feature-adaptive framework built upon a pre-trained image inpainting model. Methodologically, the authors design a dual-branch residual background feature encoder coupled with gated fusion modules to decouple watermark and background feature representations; additionally, they introduce a weakly supervised training paradigm guided by coarse-grained watermark masks, substantially reducing dependence on accurate masks during inference. Extensive experiments on both synthetic and real-world datasets demonstrate that the approach consistently outperforms state-of-the-art methods, achieving superior and more robust performance in terms of watermark removal completeness and background reconstruction fidelity. The proposed framework establishes a new paradigm for large-area watermark removal without requiring precise mask guidance.


πŸ“ Abstract
Visible watermark removal, which involves watermark cleaning and background content restoration, is pivotal for evaluating the resilience of watermarks. Existing deep neural network (DNN)-based models still struggle with large-area watermarks and are overly dependent on the quality of watermark mask prediction. To overcome these challenges, we introduce a novel feature-adapting framework that leverages the representation modeling capacity of a pre-trained image inpainting model. Our approach bridges the knowledge gap between image inpainting and watermark removal by fusing information from the residual background content beneath watermarks into the inpainting backbone model. We establish a dual-branch system to capture and embed features from the residual background content, which are merged into intermediate features of the inpainting backbone model via gated feature fusion modules. Moreover, to relieve the dependence on high-quality watermark masks, we introduce a new training paradigm that utilizes coarse watermark masks to guide the inference process. This yields a visible watermark removal model that is insensitive to the quality of the watermark mask during testing. Extensive experiments on both a large-scale synthesized dataset and a real-world dataset demonstrate that our approach significantly outperforms existing state-of-the-art methods. The source code is available in the supplementary materials.
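The coarse-mask training paradigm described in the abstract can be illustrated with a small sketch: during training, a precise watermark mask is deliberately coarsened (here via box dilation) so the model learns not to depend on mask accuracy at test time. The `coarsen_mask` routine and the dilation radius below are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def coarsen_mask(mask: np.ndarray, radius: int = 3) -> np.ndarray:
    """Simulate a coarse watermark mask by dilating a precise binary mask.

    Simple box dilation: a pixel becomes 1 if any pixel within `radius`
    (Chebyshev distance) is 1. The radius is a hypothetical hyperparameter,
    not a value from the paper.
    """
    h, w = mask.shape
    padded = np.pad(mask, radius, mode="constant")
    out = np.zeros_like(mask)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            out |= padded[radius + dy : radius + dy + h,
                          radius + dx : radius + dx + w]
    return out

# Precise mask: a 2x2 watermark region inside an 8x8 image.
precise = np.zeros((8, 8), dtype=np.uint8)
precise[3:5, 3:5] = 1
coarse = coarsen_mask(precise, radius=1)  # dilates to a 4x4 region
```

Training against such inflated masks is one plausible way to make the model robust to imprecise mask predictions during inference.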
Problem

Research questions and friction points this paper is trying to address.

Bridging knowledge gap between image inpainting and large-area watermark removal
Reducing dependence on high-quality watermark mask prediction
Restoring residual background content beneath large-area watermarks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages pre-trained image inpainting model
Dual-branch system with gated feature fusion
Training with coarse masks for robustness
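The gated feature fusion in the second bullet can be sketched as follows. A sigmoid gate computed from the concatenated features, implemented here as a per-pixel linear projection (equivalent to a 1x1 convolution), decides per element how much residual-background information to inject into the backbone feature. This is a standard gated-fusion design assumed for illustration; the paper's exact module may differ.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(f_inpaint, f_background, w_gate, b_gate):
    """Fuse inpainting-backbone features with residual-background features.

    Shapes: f_inpaint, f_background are (C, H, W); w_gate is (C, 2C);
    b_gate is (C,). The gate parameters are hypothetical learned weights.
    """
    concat = np.concatenate([f_inpaint, f_background], axis=0)  # (2C, H, W)
    # Per-pixel linear projection of the concatenated features (a 1x1 conv).
    gate = sigmoid(np.einsum("oc,chw->ohw", w_gate, concat)
                   + b_gate[:, None, None])
    # Elementwise convex combination of the two feature maps.
    return gate * f_background + (1.0 - gate) * f_inpaint

rng = np.random.default_rng(0)
C, H, W = 4, 5, 5
f_bb = rng.normal(size=(C, H, W))        # inpainting backbone features
f_bg = rng.normal(size=(C, H, W))        # residual background features
fused = gated_fusion(f_bb, f_bg,
                     rng.normal(size=(C, 2 * C)) * 0.1,
                     np.zeros(C))
```

Because the gate lies in (0, 1), the fused feature is an elementwise interpolation between the two branches, letting the network keep inpainting behavior where the background is unrecoverable and copy residual background content where it survives.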
πŸ‘₯ Authors
Yicheng Leng
School of Artificial Intelligence, Xidian University, Xi’an, China; School of Data Science, The Chinese University of Hong Kong, Shenzhen, China
Chaowei Fang
Xidian University
Computer Vision
Junye Chen
School of Computer Science and Engineering, Research Institute of Sun Yat-sen University in Shenzhen, Sun Yat-sen University, Guangzhou, China
Yixiang Fang
Associate Professor, The Chinese University of Hong Kong, Shenzhen
Data management, data mining, and artificial intelligence
Sheng Li
Afirstsoft, Shenzhen, China
Guanbin Li
School of Computer Science and Engineering, Research Institute of Sun Yat-sen University in Shenzhen, Sun Yat-sen University, Guangzhou, China; GuangDong Province Key Laboratory of Information Security Technology