Rethinking Transformer-Based Blind-Spot Network for Self-Supervised Image Denoising

📅 2024-04-11
🏛️ AAAI Conference on Artificial Intelligence
📈 Citations: 6
Influential: 0
🤖 AI Summary
To address the challenge of enforcing the blind-spot constraint in self-supervised image denoising (SSID) with Transformers, this paper introduces a vision Transformer architecture that strictly adheres to the blind-spot principle. The method comprises three key innovations: (1) grouped channel-wise self-attention, which prevents the information leakage that downsampling would otherwise cause; (2) masked window self-attention, which restricts the attention matrix to emulate the receptive field of a dilated convolution, preserving both spatial locality and blind-spot consistency; and (3) knowledge distillation of the resulting network into smaller denoisers, improving computational efficiency while maintaining performance. Evaluated on real-world image denoising benchmarks, the proposed approach outperforms existing unsupervised and self-supervised methods, expanding the effective receptive field while preserving fine-grained detail recovery and global structural modeling. This work establishes a novel paradigm for integrating blind-spot learning with Transformer-based architectures.

📝 Abstract
Blind-spot networks (BSN) have been prevalent neural architectures in self-supervised image denoising (SSID). However, most existing BSNs are conducted with convolution layers. Although transformers have shown the potential to overcome the limitations of convolutions in many image restoration tasks, the attention mechanisms may violate the blind-spot requirement, thereby restricting their applicability in BSN. To this end, we propose to analyze and redesign the channel and spatial attentions to meet the blind-spot requirement. Specifically, channel self-attention may leak the blind-spot information in multi-scale architectures, since the downsampling shuffles the spatial feature into channel dimensions. To alleviate this problem, we divide the channel into several groups and perform channel attention separately. For spatial self-attention, we apply an elaborate mask to the attention matrix to restrict and mimic the receptive field of dilated convolution. Based on the redesigned channel and window attentions, we build a Transformer-based Blind-Spot Network (TBSN), which shows strong local fitting and global perspective abilities. Furthermore, we introduce a knowledge distillation strategy that distills TBSN into smaller denoisers to improve computational efficiency while maintaining performance. Extensive experiments on real-world image denoising datasets show that TBSN largely extends the receptive field and exhibits favorable performance against state-of-the-art SSID methods.
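The grouped channel attention described in the abstract can be sketched as follows. This is an illustrative NumPy mock-up of the idea (channel self-attention computed independently within each channel group), not the authors' implementation; the function name, shapes, and scaling are assumptions.

```python
import numpy as np

def softmax(a, axis=-1):
    e = np.exp(a - a.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def grouped_channel_attention(x, num_groups):
    """Channel self-attention computed independently per channel group.

    After downsampling (e.g. pixel-unshuffle), spatial neighbors are packed
    into the channel dimension; grouping keeps attention from mixing channels
    across groups, so that shuffled spatial information cannot leak through
    the blind spot.
    x: (C, H, W) feature map; num_groups must divide C.
    """
    c, h, w = x.shape
    assert c % num_groups == 0
    g = c // num_groups
    out = []
    for i in range(num_groups):
        xg = x[i * g:(i + 1) * g].reshape(g, h * w)      # (g, HW)
        attn = softmax(xg @ xg.T / np.sqrt(h * w))       # (g, g) channel map
        out.append((attn @ xg).reshape(g, h, w))
    return np.concatenate(out, axis=0)
```

The grouping is what matters here: perturbing the channels of one group leaves every other group's output unchanged, which is exactly the isolation that prevents cross-channel leakage.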
Problem

Research questions and friction points this paper is trying to address.

Redesigning attention mechanisms for blind-spot compliance
Preventing information leakage in transformer-based denoising networks
Maintaining blind-spot requirements while using self-attention
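One way to keep self-attention blind-spot compliant, per the points above, is to mask the attention matrix so each query only sees key positions lying on a dilation grid relative to it, matching a dilated convolution's footprint. The sketch below is a hypothetical illustration of that masking principle, not the exact mask used in the paper.

```python
import numpy as np

def dilated_attention_mask(window, dilation):
    """Boolean (window*window, window*window) mask for window attention.

    mask[q, k] is True iff key k sits on the dilation grid of query q,
    so the attended set mimics the receptive field of a dilated convolution.
    """
    n = window * window
    mask = np.zeros((n, n), dtype=bool)
    for q in range(n):
        qy, qx = divmod(q, window)
        for k in range(n):
            ky, kx = divmod(k, window)
            if (ky - qy) % dilation == 0 and (kx - qx) % dilation == 0:
                mask[q, k] = True
    return mask

m = dilated_attention_mask(window=4, dilation=2)
print(m.sum(axis=1))  # each query attends to 4 of the 16 window positions
```

In practice such a mask would be applied by setting the disallowed attention logits to negative infinity before the softmax, zeroing their weights.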
Innovation

Methods, ideas, or system contributions that make the work stand out.

Redesigned channel attention with grouped processing
Applied masked spatial attention mimicking dilated convolution
Introduced knowledge distillation for efficient smaller denoisers
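The distillation strategy can be illustrated with a toy example: a small student is fit to reproduce a frozen teacher's denoised outputs on unlabeled noisy data. Everything below, including the linear stand-in "denoisers", the shapes, and the training loop, is an illustrative placeholder rather than the paper's actual procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

noisy = rng.normal(size=(64, 8))     # unlabeled noisy patches (flattened)
teacher_W = rng.normal(size=(8, 8))  # stands in for the frozen teacher (TBSN)
student_W = np.zeros((8, 8))         # compact student to be distilled

teacher_out = noisy @ teacher_W      # teacher predictions act as pseudo-labels

lr = 0.05
for _ in range(1000):                # gradient descent on the MSE distillation loss
    residual = noisy @ student_W - teacher_out
    student_W -= lr * 2 * noisy.T @ residual / len(noisy)

loss = np.mean((noisy @ student_W - teacher_out) ** 2)
print(loss)  # distillation loss, driven near zero
```

The payoff of this setup is that the student never needs clean targets: the teacher's outputs supply the supervision, so a cheaper network can inherit the teacher's denoising behavior at inference time.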