🤖 AI Summary
Existing neuron-level interpretability methods are often task-specific, require retraining, or are merely descriptive, making systematic evaluation of Transformer internal robustness challenging. This work proposes SYNAPSE, a framework that, for the first time, enables cross-architecture and cross-domain neuron analysis and perturbation that is training-free in the sense that the base model is never retrained or modified. By extracting [CLS] representations from each layer, SYNAPSE trains lightweight linear probes to rank neurons globally and per class, then applies structured perturbations at inference time via forward hooks. Experiments reveal that task-relevant information is encoded by broadly overlapping, functionally redundant neuron subsets, alongside category-asymmetric specialization patterns. Notably, minimal perturbations can drastically alter model predictions, exposing inherent vulnerabilities and offering new insights into the internal mechanisms of Transformers.
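The probe-and-rank step can be sketched in a few lines. The snippet below is a minimal illustration, not SYNAPSE's actual implementation: it uses synthetic features in place of real per-layer [CLS] activations, fits a linear probe in closed form (ridge regression on ±1 targets; the paper's exact probe and objective may differ), and ranks "neurons" (feature dimensions) by the magnitude of their probe weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for per-layer [CLS] features: 200 samples, 32 "neurons".
# In the real pipeline these would come from a Transformer's hidden states.
X = rng.normal(size=(200, 32))
y = (X[:, 3] + 0.5 * X[:, 17] > 0).astype(float)  # labels driven by neurons 3 and 17

# Lightweight linear probe fit in closed form (ridge regression on +/-1 targets).
t = 2.0 * y - 1.0
w = np.linalg.solve(X.T @ X + 1e-2 * np.eye(32), X.T @ t)

# Global neuron ranking: dimensions with the largest |weight| matter most to the probe.
ranking = np.argsort(-np.abs(w))
print(ranking[:5])
```

With these synthetic labels, the two planted neurons (3 and 17) dominate the top of the ranking; a per-class ranking would repeat the same fit on one-vs-rest targets.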
📝 Abstract
In recent years, Artificial Intelligence has become a powerful partner for complex tasks such as data analysis, prediction, and problem-solving, yet its lack of transparency raises concerns about its reliability. In sensitive domains such as healthcare or cybersecurity, ensuring transparency, trustworthiness, and robustness is essential, since the consequences of wrong decisions or successful attacks can be severe. Prior neuron-level interpretability approaches are primarily descriptive, task-dependent, or require retraining, which limits their use as systematic, reusable tools for evaluating internal robustness across architectures and domains. To overcome these limitations, this work proposes SYNAPSE, a systematic, training-free framework for understanding and stress-testing the internal behavior of Transformer models across domains. It extracts per-layer [CLS] representations, trains a lightweight linear probe to obtain global and per-class neuron rankings, and applies forward-hook interventions during inference. This design enables controlled experiments on internal representations without altering the original model, thereby allowing weaknesses, stability patterns, and label-specific sensitivities to be measured and compared directly across tasks and architectures. Across all experiments, SYNAPSE reveals a consistent, domain-independent organization of internal representations, in which task-relevant information is encoded in broad, overlapping neuron subsets. This redundancy provides a strong degree of functional stability, while class-wise asymmetries expose heterogeneous specialization patterns and enable label-aware analysis. In contrast, small structured manipulations in weight or logit space are sufficient to redirect predictions, highlighting complementary vulnerability profiles and illustrating how SYNAPSE can guide the development of more robust Transformer models.
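The forward-hook intervention described above can be illustrated with PyTorch's `register_forward_hook`, which lets a perturbation be applied to a layer's output at inference time without touching the model's weights. The model below is a toy MLP stand-in, and the `top_neurons` indices are hypothetical (in SYNAPSE they would come from the probe ranking); this is a sketch of the mechanism, not the paper's code.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-in for a model; SYNAPSE targets real Transformer layers.
model = nn.Sequential(nn.Linear(16, 16), nn.ReLU(), nn.Linear(16, 4))

# Hypothetical "important neuron" indices, e.g. taken from a probe's weight ranking.
top_neurons = [2, 5, 11]

def ablate_hook(module, inputs, output):
    # Structured perturbation: zero the selected units in this layer's output.
    out = output.clone()
    out[:, top_neurons] = 0.0
    return out  # the returned tensor replaces the layer's output downstream

# Attach the intervention to the first layer; the original weights are untouched.
handle = model[0].register_forward_hook(ablate_hook)
x = torch.randn(3, 16)
with torch.no_grad():
    perturbed = model(x)

handle.remove()  # detach the hook to restore the unmodified forward pass
with torch.no_grad():
    clean = model(x)
```

Comparing `clean` and `perturbed` logits (or the resulting predictions) on held-out data is the kind of controlled, non-destructive experiment the abstract refers to: the hook is added and removed at will, so weaknesses can be measured against the same frozen model.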