🤖 AI Summary
To address the challenge of certifying the robustness of text classifiers against perturbations bounded in Levenshtein edit distance (insertions, deletions, substitutions), this paper introduces the first certified robustness framework for such perturbations. Methodologically, it proposes: (1) the first Lipschitz constant estimation technique for convolutional text classifiers under Levenshtein distance, enabling efficient, single-forward-pass computation of certified radii; (2) a Lipschitz regularization scheme integrated with Levenshtein-aware convolutional modeling to enhance certified robustness during training; and (3) LipsLev, an efficient certification algorithm leveraging these innovations. Evaluated on AG-News, the approach achieves certified accuracies of 38.80% and 13.93% against edits of distance 1 and 2, respectively, while accelerating certification by four orders of magnitude over prior methods, overcoming the longstanding bottleneck in scalable, edit-distance-based certification.
📝 Abstract
Text classifiers are vulnerable to small perturbations that, if chosen adversarially, can dramatically change the output of the model. Verification methods can provide robustness certificates against such adversarial perturbations by computing a sound lower bound on the robust accuracy. Nevertheless, existing verification methods incur prohibitive costs and cannot practically handle Levenshtein distance constraints. We propose the first method for computing the Lipschitz constant of convolutional classifiers with respect to the Levenshtein distance. We use these Lipschitz constant estimates to train 1-Lipschitz classifiers. This enables computing the certified radius of a classifier in a single forward pass. Our method, LipsLev, obtains $38.80$% and $13.93$% verified accuracy at distances $1$ and $2$ respectively on the AG-News dataset, while being $4$ orders of magnitude faster than existing approaches. We believe our work can open the door to more efficient verification in the text domain.
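To make the single-forward-pass certification concrete, below is a minimal sketch of the standard margin-based certificate for a Lipschitz classifier, assuming each logit is `lip_const`-Lipschitz with respect to the Levenshtein distance. The function name and the constant are illustrative assumptions, not identifiers from the paper; the paper's actual contribution is how to estimate that constant for convolutional text classifiers.

```python
import numpy as np

def certified_radius(logits: np.ndarray, lip_const: float) -> float:
    """Margin-based certified radius for a Lipschitz classifier.

    If every logit is lip_const-Lipschitz w.r.t. the Levenshtein
    distance, then the predicted class cannot change for any input
    within edit distance (top - runner_up) / (2 * lip_const).
    This is a generic sketch, not the paper's LipsLev algorithm.
    """
    top, runner_up = np.sort(logits)[-2:][::-1]  # two largest logits
    return (top - runner_up) / (2.0 * lip_const)

# A margin of 4.0 with Lipschitz constant 1 certifies radius 2,
# i.e. robustness to any combination of 2 character edits.
print(certified_radius(np.array([5.0, 1.0, 0.5]), 1.0))  # → 2.0
```

The certificate requires only the logits from one forward pass plus the precomputed Lipschitz constant, which is why training 1-Lipschitz classifiers makes verification so cheap compared to search- or bound-propagation-based methods.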