🤖 AI Summary
This work addresses the high computational cost and limited receptive field of Transformers in image restoration, which stem from the quadratic complexity of self-attention. To overcome these limitations, the authors propose the Adaptive Token Dictionary (ATD) architecture, which introduces a learnable global token dictionary to model long-range dependencies via Token Dictionary Cross-Attention (TDCA), achieving linear complexity with respect to image size. The design is further enhanced by a category-aware feed-forward network and a multi-scale ATD-U structure that effectively integrates external image priors. Both ATD and its lightweight variant, ATD-light, achieve state-of-the-art performance on multiple super-resolution benchmarks, while the ATD-U variant also demonstrates superior results in image denoising and JPEG artifact removal.
📝 Abstract
Recently, Transformers have gained significant popularity in image restoration tasks such as image super-resolution and denoising, owing to their superior performance. However, balancing performance and computational burden remains a long-standing problem for transformer-based architectures. Due to the quadratic complexity of self-attention, existing methods often restrict attention to local windows, resulting in a limited receptive field and suboptimal performance. To address this issue, we propose Adaptive Token Dictionary (ATD), a novel transformer-based architecture for image restoration that enables global dependency modeling with linear complexity relative to image size. The ATD model incorporates a learnable token dictionary, which summarizes external image priors (i.e., typical image structures) during the training process. To utilize this information, we introduce a token dictionary cross-attention (TDCA) mechanism that enhances the input features via interaction with the learned dictionary. Furthermore, we exploit the category information embedded in the TDCA attention maps to group input features into multiple categories, each representing a cluster of similar features across the image and serving as an attention group. We also integrate the learned category information into the feed-forward network to further improve feature fusion. ATD and its lightweight version, ATD-light, achieve state-of-the-art performance on multiple image super-resolution benchmarks. Moreover, we develop ATD-U, a multi-scale variant of ATD, to address other image restoration tasks, including image denoising and JPEG compression artifact removal. Extensive experiments demonstrate the superiority of our proposed models, both quantitatively and qualitatively.
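To make the linear-complexity claim concrete, the core idea of token dictionary cross-attention can be sketched as follows: queries come from the N input tokens, while keys and values come from a small fixed-size dictionary of M learnable tokens, so the attention map is N×M rather than N×N. Grouping tokens by the argmax of this map then yields the categories used for attention grouping. This is a minimal NumPy sketch, not the authors' implementation; the function name, shapes, and dictionary size are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def token_dictionary_cross_attention(x, dictionary):
    """Hypothetical sketch of TDCA.

    x:          (N, C) input tokens (e.g., one token per pixel)
    dictionary: (M, C) learnable dictionary tokens, M fixed and small

    The attention map is (N, M): cost grows linearly in N,
    unlike the (N, N) map of full self-attention.
    """
    n, c = x.shape
    scale = c ** -0.5
    attn = softmax(x @ dictionary.T * scale)       # (N, M) cross-attention map
    out = attn @ dictionary                        # (N, C) dictionary-enhanced features
    categories = attn.argmax(axis=-1)              # (N,) category index per token,
                                                   # used to form attention groups
    return out, attn, categories

rng = np.random.default_rng(0)
x = rng.standard_normal((64 * 64, 32))   # 4096 "pixel" tokens, 32 channels
d = rng.standard_normal((128, 32))       # 128 dictionary tokens
out, attn, categories = token_dictionary_cross_attention(x, d)
```

For a 64×64 feature map the attention map here is 4096×128 instead of 4096×4096, which is where the linear scaling comes from; doubling the number of input tokens doubles the attention cost rather than quadrupling it.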