🤖 AI Summary
In deep Transformers, the representational capacity of the self-attention matrices degrades significantly with increasing depth, undermining robustness and generalization. To address this, we propose Twicing Attention, a novel self-attention mechanism that integrates the twicing procedure from nonparametric regression into attention design. By explicitly modeling and reusing the high-frequency semantic information that low-pass attention smoothing discards into its residuals, it corrects the resulting smoothing bias in a principled manner. Theoretically, we prove that Twicing Attention slows the decay of representational capacity. Methodologically, it builds on the connection between self-attention and non-local means filtering, recalibrates the smoothing residuals, and reconstructs the attention output in a theory-guided way. Extensive experiments on image classification and language modeling demonstrate consistent improvements: our method outperforms baseline Transformers on clean, noisy, and adversarial inputs, achieving simultaneous gains in accuracy and robustness across data modalities.
📝 Abstract
Transformer-based deep learning models have achieved state-of-the-art performance across numerous language and vision tasks. While the self-attention mechanism, a core component of transformers, has proven capable of handling complex data patterns, it has been observed that the representational capacity of the attention matrix degrades significantly across transformer layers, thereby hurting overall performance. In this work, we leverage the connection between self-attention computations and low-pass non-local means (NLM) smoothing filters and propose Twicing Attention, a novel attention mechanism that uses the kernel twicing procedure from nonparametric regression to alleviate the low-pass behavior of the associated NLM smoothing, with compelling theoretical guarantees and enhanced adversarial robustness. This approach enables the extraction and reuse of the meaningful information retained in the residuals left by the imperfect smoothing operation at each layer. Our proposed method offers two key advantages over standard self-attention: 1) a provably slower decay of representational capacity and 2) improved robustness and accuracy across various data modalities and tasks. We empirically demonstrate the performance gains of our model over baseline transformers on multiple tasks and benchmarks, including image classification and language modeling, on both clean and corrupted data.
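The residual-reuse idea described above can be sketched concisely: if `A` is the row-stochastic attention matrix (the NLM-style smoother), kernel twicing adds the re-smoothed residual back to the smoothed signal, replacing the standard output `A @ V` with `(2A - A²) @ V`. Below is a minimal single-head NumPy sketch of this formulation; the function names are illustrative and this is an assumption-laden simplification, not the authors' implementation (which operates inside full multi-head transformer layers):

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def twicing_attention(Q, K, V):
    """Single-head Twicing Attention sketch.

    Standard attention returns A @ V, a low-pass (NLM-like) smoothing of V.
    Twicing re-smooths the residual V - A @ V and adds it back, giving
    (2A - A^2) @ V, which damps the smoother's low-pass bias.
    """
    d = Q.shape[-1]
    A = softmax(Q @ K.T / np.sqrt(d), axis=-1)  # row-stochastic attention matrix
    smoothed = A @ V                  # standard self-attention output
    residual = V - smoothed           # information discarded by the smoother
    return smoothed + A @ residual    # equals (2A - A @ A) @ V
```

Note that the extra cost over standard attention is one additional multiplication by `A`, since `(2A - A²)V` is computed as `A V + A (V - A V)` without ever forming `A²` explicitly.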