Unelicitable Backdoors in Language Models via Cryptographic Transformer Circuits

📅 2024-06-03
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work identifies a novel class of “unelicitable” backdoors—malicious behaviors that cannot be triggered or detected via conventional red-teaming, formal verification, or white-box analysis. Unlike traditional elicitable backdoors, these operate by embedding implicit cryptographic circuits within Transformer sublayers, leveraging cryptographic hashing and key derivation to activate exclusively upon receipt of a specific secret key input. The design formally defines and realizes unelicitable backdoors for the first time. Experiments demonstrate strong robustness against prevalent mitigation techniques—including neuron pruning and gradient masking—and significantly higher white-box detection difficulty compared to existing backdoors. These findings fundamentally challenge the efficacy of current AI security deployment practices and establish a new benchmark for backdoor modeling and defense evaluation.

Technology Category

Application Category

📝 Abstract
The rapid proliferation of open-source language models significantly increases the risks of downstream backdoor attacks. These backdoors can introduce dangerous behaviours during model deployment and can evade detection by conventional cybersecurity monitoring systems. In this paper, we introduce a novel class of backdoors in transformer models, that, in contrast to prior art, are unelicitable in nature. Unelicitability prevents the defender from triggering the backdoor, making it impossible to properly evaluate ahead of deployment even if given full white-box access and using automated techniques, such as red-teaming or certain formal verification methods. We show that our novel construction is not only unelicitable thanks to using cryptographic techniques, but also has favourable robustness properties. We confirm these properties in empirical investigations, and provide evidence that our backdoors can withstand state-of-the-art mitigation strategies. Additionally, we expand on previous work by showing that our universal backdoors, while not completely undetectable in white-box settings, can be harder to detect than some existing designs. By demonstrating the feasibility of seamlessly integrating backdoors into transformer models, this paper fundamentally questions the efficacy of pre-deployment detection strategies. This offers new insights into the offence-defence balance in AI safety and security.
Problem

Research questions and friction points this paper is trying to address.

Cryptography
Backdoor
AI Security
Innovation

Methods, ideas, or system contributions that make the work stand out.

Cryptography Transformer Circuit
Backdoor Attack
AI Security Challenge
A
Andis Draguns
IMCS UL
A
Andrew Gritsevskiy
Cavendish Labs
S
Sumeet Ramesh Motwani
University of California, Berkeley
C
Charlie Rogers-Smith
Palisade Research
J
Jeffrey Ladish
Palisade Research
Christian Schroeder de Witt
Christian Schroeder de Witt
University of Oxford
Multi-agent LearningSecuritySafety