🤖 AI Summary
This work addresses the security threat posed by Hidden State Poisoning Attacks (HiSPA) against state space models such as Mamba and proposes the first general-purpose, low-overhead defense mechanism. The approach formulates HiSPA detection as a token-level binary classification problem, leveraging discriminative features from the Block Output Embeddings (BOE) of Mamba blocks in conjunction with a lightweight XGBoost classifier to construct a downstream task-agnostic pre-defense module. Experimental results demonstrate that the method achieves 95.9% token-level F1 and 99.3% document-level F1 on a dataset of 2,483 resumes comprising 9.5 million tokens. It maintains robust generalization against unseen attack triggers, attaining an average document-level F1 of 91.6%, while operating at an inference speed of 1,032 tokens per second with less than 4 GB of GPU memory, confirming its practical deployability.
📝 Abstract
State space models (SSMs) like Mamba have gained significant traction as efficient alternatives to Transformers, achieving linear complexity while maintaining competitive performance. However, Hidden State Poisoning Attacks (HiSPAs), a recently discovered vulnerability that corrupts SSM memory through adversarial strings, pose a critical threat to these architectures and their hybrid variants. Framing HiSPA mitigation as a token-level binary classification problem, we introduce the CLASP model to defend against this threat. CLASP exploits distinct patterns in Mamba's block output embeddings (BOEs) and uses an XGBoost classifier to identify malicious tokens with minimal computational overhead. We consider a realistic scenario in which both SSMs and HiSPAs are likely to appear: an LLM screening résumés to identify the best candidates for a role. Evaluated on a corpus of 2,483 résumés totaling 9.5M tokens with controlled injections, CLASP achieves a 95.9% token-level and a 99.3% document-level F1 score on malicious token detection. Crucially, the model generalizes to unseen attack patterns: under leave-one-out cross-validation, performance remains high (96.9% document-level F1), while under clustered cross-validation with structurally novel triggers, it maintains useful detection capability (91.6% average document-level F1). Operating independently of any downstream model, CLASP processes 1,032 tokens per second with under 4 GB of VRAM, potentially making it suitable for real-world deployment as a lightweight front-line defense for SSM-based and hybrid architectures. All code and detailed results are available at https://anonymous.4open.science/r/hispikes-91C0.
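The pipeline the abstract describes — per-token embedding features fed to a gradient-boosted classifier, with token-level decisions aggregated into a document-level verdict — can be sketched roughly as follows. This is an illustrative sketch, not the authors' implementation: the synthetic feature vectors stand in for real Mamba BOEs, scikit-learn's `GradientBoostingClassifier` substitutes for XGBoost, and the `min_hits` aggregation rule is a hypothetical choice.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
DIM = 16  # toy embedding width; real Mamba block outputs are much wider

def synthetic_tokens(n, poisoned):
    # Clean-token embeddings cluster near the origin; poisoned tokens are
    # shifted, mimicking the distinct BOE patterns the classifier exploits.
    shift = 3.0 if poisoned else 0.0
    return rng.normal(loc=shift, scale=1.0, size=(n, DIM))

# Token-level training set: label 1 = injected (malicious) token.
X_train = np.vstack([synthetic_tokens(2000, False), synthetic_tokens(200, True)])
y_train = np.array([0] * 2000 + [1] * 200)

clf = GradientBoostingClassifier(n_estimators=50, random_state=0)
clf.fit(X_train, y_train)

def document_is_poisoned(token_embeddings, min_hits=3):
    # Aggregate token-level predictions into a document-level decision:
    # flag the document once enough tokens are classified as malicious.
    return int(clf.predict(token_embeddings).sum()) >= min_hits

clean_doc = synthetic_tokens(500, False)
attacked_doc = np.vstack([synthetic_tokens(490, False),
                          synthetic_tokens(10, True)])
print(document_is_poisoned(clean_doc), document_is_poisoned(attacked_doc))
```

Because the defense runs as a front-end filter, a screening system could apply `document_is_poisoned` to each résumé before it ever reaches the downstream SSM, which is what makes the approach downstream task-agnostic.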