AI Summary
Evaluating the robustness of large language models (LLMs) against input perturbations remains challenging: existing assessments lack scalable, general-purpose methods and over-rely on complex, sample-specific adversarial examples. To address this, we propose a unified local robustness analysis framework that requires no modification of model parameters and does not depend on input-specific perturbations. Our core innovation is a graph-structured manifold mapping between input and output spaces, together with the Distance Mapping Distortion (DMD) metric, a theoretically grounded, near-linear-complexity measure for quantifying sample-level stability. The framework offers both analytical interpretability and computational efficiency. Empirical evaluation across Transformer architectures of varying scales demonstrates substantial improvements in adversarial attack efficacy and robustness-aware training performance, confirming the method's validity, cross-scale generality, and scalability.
Abstract
Recent strides in pretrained transformer-based language models have propelled state-of-the-art performance in numerous NLP tasks. Yet, as these models grow in size and deployment, their robustness under input perturbations becomes an increasingly urgent question. Existing robustness methods often diverge between small-parameter models and large-scale LLMs, and they typically rely on labor-intensive, sample-specific adversarial designs. In this paper, we propose a unified, local (sample-level) robustness framework (SALMAN) that evaluates model stability without modifying internal parameters or resorting to complex perturbation heuristics. Central to our approach is a novel Distance Mapping Distortion (DMD) measure, which ranks each sample's susceptibility by comparing input-to-output distance mappings with near-linear complexity. By demonstrating significant gains in attack efficiency and robust training, we position our framework as a practical, model-agnostic tool for advancing the reliability of transformer-based NLP systems.
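The abstract does not reproduce the exact DMD formula, but the stated idea, comparing how pairwise distances stretch when mapped from input space to output space and ranking samples by that distortion, can be sketched as follows. This is a minimal illustration on toy data: the function names (`dmd_scores`, `pairwise_dists`) and the max-stretch scoring rule are assumptions for exposition, and the dense O(n²) distance computation here stands in for the paper's graph-based, near-linear procedure.

```python
import numpy as np

def pairwise_dists(X):
    # Euclidean distance matrix between all pairs of row vectors in X.
    diff = X[:, None, :] - X[None, :, :]
    return np.sqrt((diff ** 2).sum(-1))

def dmd_scores(inp_emb, out_emb, eps=1e-8):
    """Illustrative per-sample Distance Mapping Distortion.

    For each sample i, measure how its distances to the other samples
    stretch when mapped from input space to output space, and take the
    worst-case (max) stretch as the distortion score. Larger scores
    suggest locally less stable, i.e. less robust, samples.
    """
    d_in = pairwise_dists(inp_emb)
    d_out = pairwise_dists(out_emb)
    ratio = d_out / (d_in + eps)      # element-wise stretch factors
    np.fill_diagonal(ratio, 0.0)      # ignore self-distances
    return ratio.max(axis=1)          # worst-case stretch per sample

rng = np.random.default_rng(0)
X = rng.normal(size=(16, 8))               # stand-in input embeddings
Y = np.tanh(X @ rng.normal(size=(8, 8)))   # stand-in model outputs
scores = dmd_scores(X, Y)
ranking = np.argsort(-scores)              # most fragile samples first
```

In a robustness workflow, such a ranking could prioritize which samples to attack first or to up-weight during robustness-aware training, which is where the abstract's reported efficiency gains would come from.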