Contrast-Prior Enhanced Duality for Mask-Free Shadow Removal

📅 2025-07-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing shadow removal methods rely heavily on manually annotated or estimated shadow masks, limiting their practical applicability. To address the ambiguity between local contrast cues and low-albedo objects or textures in complex scenes, we propose a mask-free shadow removal framework featuring an Adaptive Gated Dual-Branch Attention (AGBA) mechanism that dynamically recalibrates contrast signals. Furthermore, we introduce a diffusion-based Frequency-Contrast Fusion Network (FCFN), which jointly models global structure in the frequency domain and local spatial detail, enabling soft boundary preservation and faithful texture recovery. To our knowledge, this is the first work to holistically integrate contrast priors, dual-branch attention, frequency-domain feature fusion, and diffusion-inspired generation within a mask-free paradigm. Our method achieves state-of-the-art performance without shadow masks, matching the quality of mask-guided approaches while significantly improving robustness and visual fidelity in challenging shadow scenarios.

📝 Abstract
Existing shadow removal methods often rely on shadow masks, which are challenging to acquire in real-world scenarios. Exploring intrinsic image cues, such as local contrast information, presents a potential alternative for guiding shadow removal in the absence of explicit masks. However, the cue's inherent ambiguity becomes a critical limitation in complex scenes, where it can fail to distinguish true shadows from low-reflectance objects and intricate background textures. To address this limitation, we propose the Adaptive Gated Dual-Branch Attention (AGBA) mechanism. AGBA dynamically filters and re-weights the contrast prior so as to disentangle shadow features from confounding visual elements. Furthermore, to tackle the persistent challenge of restoring soft shadow boundaries and fine-grained details, we introduce a diffusion-based Frequency-Contrast Fusion Network (FCFN) that leverages high-frequency and contrast cues to guide the generative process. Extensive experiments demonstrate that our method achieves state-of-the-art results among mask-free approaches while maintaining competitive performance relative to mask-based methods.
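To make the gating idea concrete, here is a minimal NumPy sketch of adaptive gated dual-branch fusion in the spirit of AGBA. This is not the authors' implementation: the feature shapes, the single linear gate projection, and all parameter names (`w_gate`, `b_gate`) are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def agba_gate_sketch(contrast_feat, content_feat, w_gate, b_gate):
    """Hypothetical sketch of adaptive gated dual-branch fusion.

    contrast_feat, content_feat: (H, W, C) feature maps from the two branches.
    w_gate: (2C, C) projection and b_gate: (C,) bias are illustrative
    parameters, not taken from the paper.
    """
    # Concatenate the two branches channel-wise and predict a per-pixel,
    # per-channel gate in (0, 1).
    stacked = np.concatenate([contrast_feat, content_feat], axis=-1)  # (H, W, 2C)
    gate = sigmoid(stacked @ w_gate + b_gate)                         # (H, W, C)
    # The gate re-weights the contrast prior, so regions that merely look
    # dark (low albedo) can lean on content features instead.
    return gate * contrast_feat + (1.0 - gate) * content_feat

# Toy usage with random features.
rng = np.random.default_rng(0)
H, W, C = 4, 4, 8
fused = agba_gate_sketch(rng.normal(size=(H, W, C)),
                         rng.normal(size=(H, W, C)),
                         rng.normal(size=(2 * C, C)) * 0.1,
                         np.zeros(C))
print(fused.shape)
```

Because the gate lies in (0, 1), the fused output is an element-wise convex combination of the two branches, which is what lets the network suppress the contrast cue where it is misleading.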
Problem

Research questions and friction points this paper is trying to address.

Removing shadows without relying on shadow masks
Distinguishing shadows from low-reflectance objects and textures
Restoring soft shadow boundaries and fine-grained details
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive Gated Dual-Branch Attention mechanism
Diffusion-based Frequency-Contrast Fusion Network
Dynamic filtering and re-weighting of the contrast prior
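The high-frequency cues mentioned above can be illustrated with a simple frequency-domain high-pass filter; FCFN's actual fusion is a learned network, so the FFT masking and the `cutoff_frac` hyperparameter below are purely schematic assumptions.

```python
import numpy as np

def high_frequency_cue(image, cutoff_frac=0.1):
    """Illustrative extraction of a high-frequency cue via an FFT high-pass.

    image: (H, W) grayscale array. cutoff_frac is an assumed hyperparameter,
    not from the paper.
    """
    H, W = image.shape
    # Shift the spectrum so low frequencies sit at the centre.
    spec = np.fft.fftshift(np.fft.fft2(image))
    # Zero out a low-frequency square around the centre (including DC),
    # keeping only high frequencies: edges, textures, shadow boundaries.
    cy, cx = H // 2, W // 2
    ry, rx = int(H * cutoff_frac), int(W * cutoff_frac)
    spec[cy - ry:cy + ry + 1, cx - rx:cx + rx + 1] = 0
    return np.real(np.fft.ifft2(np.fft.ifftshift(spec)))

# Toy usage: a hard vertical edge survives the high-pass.
img = np.zeros((32, 32))
img[:, 16:] = 1.0
cue = high_frequency_cue(img)
print(cue.shape)
```

Cues like this retain boundary and texture information while discarding smooth illumination, which is why they can help a generative model restore soft shadow edges.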
Jiyu Wu
Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China.
Yifan Liu
Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China.
Jiancheng Huang
Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China.
Mingfu Yan
Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China.
Shifeng Chen
Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China.