AtPatch: Debugging Transformers via Hot-Fixing Over-Attention

πŸ“… 2026-01-29
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
This work proposes AtPatch, a novel approach to mitigating security and fairness issues in Transformer models by addressing their anomalous attention patterns toward triggers or sensitive attributes. Unlike existing neuron-editing methods that often require parameter modification or retraining and risk degrading original model functionality, AtPatch draws inspiration from software engineering hot-patching techniques. It introduces a pre-trained anomaly detector to identify problematic attention columns during inference and dynamically replaces and redistributes the corresponding attention weights without altering model parameters. Experimental results demonstrate that AtPatch effectively alleviates backdoor and fairness-related vulnerabilities while significantly outperforming current methods in preserving the model’s original performance.

πŸ“ Abstract
Transformer-based deep neural networks (DNNs) affected by backdoor attacks and unfairness typically exhibit anomalous attention patterns, leading them to over-attend to backdoor triggers or protected attributes. Existing neuron-editing mitigation strategies often struggle with such situations; most lack flexibility and tend to distort feature representations. Motivated by this over-attention phenomenon and by software engineering paradigms such as delta debugging and hot patching, we propose AtPatch, a hot-fix method that dynamically redistributes attention maps during model inference. Specifically, for a given input, AtPatch first extracts the attention map from the model's inference process. It then uses a pre-trained detector to identify anomalous columns and replaces them with uniform benign attention, and rescales the remaining columns to mitigate the impact of over-attention. Finally, AtPatch returns the redistributed attention map to the model for continued inference. Notably, if the detector reports no anomalous columns, AtPatch returns the original attention map unchanged. Unlike existing techniques, AtPatch redistributes the attention map selectively, which better preserves the model's original functionality. Furthermore, AtPatch's on-the-fly nature allows it to work without modifying model parameters or retraining, making it better suited to deployed models. We conducted extensive experiments to validate AtPatch. The results show that, compared with existing methods, AtPatch more effectively mitigates backdoor attacks and unfairness while better preserving the model's original functionality.
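The redistribution step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, the choice of 1/L as the "uniform benign attention" value, and the row-wise rescaling scheme are all assumptions made for clarity.

```python
import numpy as np

def redistribute_attention(attn, anomalous_cols):
    """Hypothetical sketch of AtPatch-style attention redistribution.

    attn: (L, L) row-stochastic attention map from one head.
    anomalous_cols: column indices flagged by the anomaly detector.
    """
    attn = attn.copy()
    L = attn.shape[1]
    # If the detector reports nothing, return the original map unchanged.
    if len(anomalous_cols) == 0:
        return attn
    # Replace flagged columns with a uniform benign weight (assumed to be 1/L).
    benign = 1.0 / L
    mask = np.zeros(L, dtype=bool)
    mask[anomalous_cols] = True
    attn[:, mask] = benign
    # Rescale the remaining columns row-wise so each row sums to 1 again,
    # mitigating the over-attention without zeroing benign positions.
    remaining = 1.0 - benign * mask.sum()
    row_sums = attn[:, ~mask].sum(axis=1, keepdims=True)
    attn[:, ~mask] *= remaining / np.clip(row_sums, 1e-12, None)
    return attn
```

After patching, every row remains a valid probability distribution, so the redistributed map can be handed back to the model for continued inference.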
Problem

Research questions and friction points this paper is trying to address.

over-attention
backdoor attacks
unfairness
attention patterns
Transformer
Innovation

Methods, ideas, or system contributions that make the work stand out.

attention redistribution
hot-fixing
over-attention mitigation
backdoor defense
fairness-aware inference
Shihao Weng
State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China
Yang Feng
Nanjing University
Software Engineering
Jincheng Li
State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China
Yining Yin
State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China
Xiaofei Xie
Singapore Management University
Software Engineering, Loop Analysis, Testing, Deep Learning
Jia Liu
State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China