Transformer Meets Twicing: Harnessing Unattended Residual Information

📅 2025-03-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
In deep Transformers, the representational capacity of self-attention matrices degrades significantly with increasing network depth, undermining model robustness and generalization. To address this, we propose Twicing Attention—a novel self-attention mechanism that pioneers the integration of nonparametric twicing into attention design. By explicitly modeling and reusing high-frequency semantic information discarded by low-pass smoothing in residual branches, it corrects attention bias in a principled manner. Theoretically, we prove that Twicing Attention mitigates representational decay. Methodologically, it builds upon non-local means filtering, incorporates residual recalibration, and performs theory-guided attention reconstruction. Extensive experiments on image classification and language modeling demonstrate consistent improvements: our method outperforms baseline Transformers across clean, noisy, and adversarial inputs—achieving simultaneous gains in accuracy and robustness—while exhibiting markedly enhanced cross-modal generalization.

📝 Abstract
Transformer-based deep learning models have achieved state-of-the-art performance across numerous language and vision tasks. While the self-attention mechanism, a core component of transformers, has proven capable of handling complex data patterns, it has been observed that the representational capacity of the attention matrix degrades significantly across transformer layers, hurting overall performance. In this work, we leverage the connection between self-attention computations and low-pass non-local means (NLM) smoothing filters and propose Twicing Attention, a novel attention mechanism that uses the kernel twicing procedure from nonparametric regression to alleviate the low-pass behavior of the associated NLM smoothing, with compelling theoretical guarantees and enhanced adversarial robustness. This approach enables the extraction and reuse of meaningful information retained in the residuals after the imperfect smoothing operation at each layer. Our proposed method offers two key advantages over standard self-attention: 1) a provably slower decay of representational capacity and 2) improved robustness and accuracy across various data modalities and tasks. We empirically demonstrate the performance gains of our model over baseline transformers on multiple tasks and benchmarks, including image classification and language modeling, on both clean and corrupted data.
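The kernel twicing idea from the abstract can be sketched concretely: writing the row-stochastic softmax attention map as A, twicing replaces the smoothed output AV with (2A − A²)V = AV + A(V − AV), i.e., the original smoothed output plus a re-smoothed copy of the residual that the low-pass filter discarded. Below is a minimal NumPy sketch under that reading; the function name and the single-head, unbatched shapes are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def twicing_attention(q, k, v):
    """Illustrative single-head twicing attention (shapes: [n, d])."""
    d = q.shape[-1]
    # Standard attention map A: a row-stochastic, low-pass smoother on v
    A = softmax(q @ k.T / np.sqrt(d))
    # Twicing transform: (2A - A^2) v = A v + A (v - A v),
    # reusing the residual (v - A v) left by the smoothing step
    return (2 * A - A @ A) @ v
```

Because A is row-stochastic, A² is an even stronger smoother than A, so 2A − A² retains more of the high-frequency (residual) content than A alone while keeping the same computational interface as standard attention.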
Problem

Research questions and friction points this paper is trying to address.

Addresses the degradation of the attention matrix across transformer layers.
Proposes Twicing Attention to enhance representational capacity.
Improves robustness and accuracy across data modalities.
Innovation

Methods, ideas, or system contributions that make the work stand out.

The Twicing Attention mechanism enhances Transformer layers
Kernel twicing mitigates the low-pass behavior of attention
Improved robustness and accuracy across data modalities
Laziz Abdullaev
PhD Student, National University of Singapore
mathematics · machine learning
Tan Nguyen
Department of Mathematics, National University of Singapore