The Implicit Bias of Structured State Space Models Can Be Poisoned With Clean Labels

📅 2024-10-14
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
📄 PDF
🤖 AI Summary
This work uncovers a vulnerability of Structured State Space Models (SSMs) to clean-label poisoning: even when poisoned samples carry correct labels assigned by a teacher model, carefully crafted inputs can distort the SSMs' implicit inductive bias to the point of catastrophic generalization failure. Method: Through theoretical analysis and empirical validation, the authors formally characterize this phenomenon for SSMs trained in isolation and as components of non-linear neural networks, proving that under gradient descent training the inclusion of such examples causes generalization to collapse. Contribution/Results: The study challenges the prevailing assumption of SSM robustness, establishing a clean-label poisoning paradigm for SSMs and providing formal evidence of their susceptibility to label-consistent adversarial examples, with implications for assessing model trustworthiness and designing defenses in safety-critical applications.

📝 Abstract
Neural networks are powered by an implicit bias: a tendency of gradient descent to fit training data in a way that generalizes to unseen data. A recent class of neural network models gaining increasing popularity is structured state space models (SSMs), regarded as an efficient alternative to transformers. Prior work argued that the implicit bias of SSMs leads to generalization in a setting where data is generated by a low dimensional teacher. In this paper, we revisit the latter setting, and formally establish a phenomenon entirely undetected by prior work on the implicit bias of SSMs. Namely, we prove that while implicit bias leads to generalization under many choices of training data, there exist special examples whose inclusion in training completely distorts the implicit bias, to a point where generalization fails. This failure occurs despite the special training examples being labeled by the teacher, i.e. having clean labels! We empirically demonstrate the phenomenon, with SSMs trained independently and as part of non-linear neural networks. In the area of adversarial machine learning, disrupting generalization with cleanly labeled training examples is known as clean-label poisoning. Given the proliferation of SSMs, particularly in large language models, we believe significant efforts should be invested in further delineating their susceptibility to clean-label poisoning, and in developing methods for overcoming this susceptibility.
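To make the setting concrete, here is a minimal, hypothetical sketch (in JAX) of the teacher-student setup described in the abstract: a low-dimensional teacher SSM assigns clean labels, and an over-parameterized student SSM is trained by gradient descent on those labels. The state dimensions, learning rate, and input distribution are illustrative assumptions, and the sketch does not reproduce the paper's construction of poisoning examples; it only shows the training pipeline whose generalization the paper analyzes.

```python
# Hypothetical sketch of the teacher-student SSM setting (not the paper's code).
# A low-dimensional teacher SSM produces clean labels; an over-parameterized
# student SSM is fit with gradient descent on squared loss.
import jax
import jax.numpy as jnp

def ssm_output(params, u):
    """Run a linear SSM x_{t+1} = A x_t + B u_t and return y = C x_T."""
    A, B, C = params
    x = jnp.zeros(A.shape[0])
    for t in range(u.shape[0]):
        x = A @ x + B * u[t]
    return C @ x

# Teacher with state dimension 1 (the "low-dimensional teacher" of the abstract).
teacher = (jnp.array([[0.5]]), jnp.array([1.0]), jnp.array([1.0]))

# Training sequences with clean labels, i.e. labels computed by the teacher.
U = jax.random.normal(jax.random.PRNGKey(0), (32, 8))
y = jax.vmap(lambda u: ssm_output(teacher, u))(U)

# Over-parameterized student with state dimension 4.
k1, k2, k3 = jax.random.split(jax.random.PRNGKey(1), 3)
student = (0.1 * jax.random.normal(k1, (4, 4)),
           0.1 * jax.random.normal(k2, (4,)),
           0.1 * jax.random.normal(k3, (4,)))

def loss(params, inputs, labels):
    preds = jax.vmap(lambda u: ssm_output(params, u))(inputs)
    return jnp.mean((preds - labels) ** 2)

# Plain gradient descent, the training rule analyzed in the paper.
grad_fn = jax.jit(jax.grad(loss))
for _ in range(2000):
    grads = grad_fn(student, U, y)
    student = tuple(p - 0.05 * g for p, g in zip(student, grads))

# Generalization is measured on fresh teacher-labeled sequences. The paper's claim
# is that adding certain cleanly labeled sequences to U makes this test loss large.
U_test = jax.random.normal(jax.random.PRNGKey(2), (128, 8))
y_test = jax.vmap(lambda u: ssm_output(teacher, u))(U_test)
print("train loss:", loss(student, U, y), "test loss:", loss(student, U_test, y_test))
```

The crafted examples studied in the paper are ordinary entries of U with teacher-given labels; their effect comes from how they reshape the implicit bias of the gradient descent iterations above, not from any label noise.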
Problem

Research questions and friction points this paper is trying to address.

The implicit bias of SSMs can be distorted by specially constructed training examples that carry correct (clean) labels.
Including such clean-label poison examples causes trained SSMs to fail to generalize.
The susceptibility of SSMs to clean-label poisoning, especially given their use in large language models, needs to be delineated and addressed.
Innovation

Methods, ideas, or system contributions that make the work stand out.

A clean-label poisoning paradigm for SSMs, established formally for training with gradient descent
Theoretical and empirical characterization of how special training examples distort the implicit bias of SSMs
Explicit examples with clean labels whose inclusion in training causes generalization failure
👥 Authors
Yonatan Slutzky
Tel Aviv University
Yotam Alexander
Tel Aviv University
Noam Razin
Postdoctoral Fellow, Princeton Language and Intelligence, Princeton University
Artificial Intelligence · Machine Learning · Deep Learning · Learning Theory · Tensor Analysis
Nadav Cohen
Tel Aviv University