🤖 AI Summary
In single-image reflection removal, existing dual-stream deep methods suffer from two key limitations: progressive loss of high-level semantic information across layers and rigid cross-stream interaction patterns. To address these, we propose the Reversible Decoupling Network (RDNet). First, we design a reversible encoder that losslessly preserves critical information during forward propagation while enabling layer-adaptive decoupling of transmission and reflection features. Second, we introduce a transmission-rate-aware prompt generator to dynamically calibrate dual-stream features. Third, we replace fixed interaction paradigms with an information-bottleneck-driven deep supervision architecture for adaptive feature refinement. RDNet achieves state-of-the-art performance across five mainstream benchmarks, with significant improvements in PSNR and SSIM. The source code will be made publicly available.
📝 Abstract
Recent deep-learning-based approaches to single-image reflection removal have shown promising advances, primarily for two reasons: 1) the utilization of recognition-pretrained features as inputs, and 2) the design of dual-stream interaction networks. However, according to the Information Bottleneck principle, high-level semantic clues tend to be compressed or discarded during layer-by-layer propagation. Additionally, interactions in dual-stream networks follow a fixed pattern across different layers, limiting overall performance. To address these limitations, we propose a novel architecture called the Reversible Decoupling Network (RDNet), which employs a reversible encoder to preserve valuable information while flexibly decoupling transmission- and reflection-relevant features during the forward pass. Furthermore, we customize a transmission-rate-aware prompt generator to dynamically calibrate features, further boosting performance. Extensive experiments demonstrate the superiority of RDNet over existing state-of-the-art methods on five widely adopted benchmark datasets. Our code will be made publicly available.
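The abstract does not detail how the reversible encoder achieves lossless forward propagation. A minimal sketch of the general principle behind reversible layers (additive coupling, as popularized by RevNet-style architectures) is shown below; the transform functions `f` and `g` and the 1-D toy tensors are hypothetical stand-ins, not RDNet's actual modules:

```python
import numpy as np

def f(x):
    # Hypothetical transform acting on the reflection stream.
    return 0.5 * np.tanh(x)

def g(x):
    # Hypothetical transform acting on the transmission stream.
    return 0.5 * np.sin(x)

def forward(t, r):
    """Additive coupling between a transmission stream `t` and a
    reflection stream `r`. Because each update is a pure addition,
    the layer is exactly invertible: no information is discarded,
    which is the property the reversible encoder relies on."""
    t_out = t + f(r)
    r_out = r + g(t_out)
    return t_out, r_out

def inverse(t_out, r_out):
    """Exact inversion of `forward`, recovering the original inputs
    by subtracting the same coupling terms in reverse order."""
    r = r_out - g(t_out)
    t = t_out - f(r)
    return t, r

rng = np.random.default_rng(0)
t, r = rng.normal(size=8), rng.normal(size=8)
t_rec, r_rec = inverse(*forward(t, r))
print(np.allclose(t, t_rec) and np.allclose(r, r_rec))  # True
```

Since inversion is exact, intermediate activations need not be stored for backpropagation and no semantic information is squeezed out layer by layer, in contrast to the Information Bottleneck behavior of ordinary feed-forward encoders described above.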