Probe before You Talk: Towards Black-box Defense against Backdoor Unalignment for Large Language Models

📅 2025-06-19
🏛️ International Conference on Learning Representations
📈 Citations: 9
Influential: 0
🤖 AI Summary
In black-box LLM-as-a-Service (LLMaaS) settings, stealthy backdoor unalignment attacks—where models violate safety alignment on inputs containing hidden triggers—are notoriously difficult to detect. Method: The authors propose BEAT, the first sample-agnostic black-box defense that detects triggered inputs by exploiting distortions in the model's refusal signal. Its core insight is the "probe concatenate effect": concatenating a triggered sample with a malicious probe causes a stable, significant drop in the probe's refusal rate, while benign samples have little effect. Instead of analyzing output semantics, BEAT monitors the stability of the refusal signal via multi-sample output distribution estimation, probe-concatenation perturbation, and KL-divergence-based distortion quantification—requiring neither gradients nor internal model access. Results: Evaluated on closed- and open-source models including GPT-3.5-turbo, BEAT achieves AUC > 0.96 in detecting diverse backdoor attacks and also generalizes to mainstream jailbreak techniques.

📝 Abstract
Backdoor unalignment attacks against Large Language Models (LLMs) enable the stealthy compromise of safety alignment using a hidden trigger while evading normal safety auditing. These attacks pose significant threats to the applications of LLMs in the real-world Large Language Model as a Service (LLMaaS) setting, where the deployed model is a fully black-box system that can only interact through text. Furthermore, the sample-dependent nature of the attack target exacerbates the threat. Instead of outputting a fixed label, the backdoored LLM follows the semantics of any malicious command with the hidden trigger, significantly expanding the target space. In this paper, we introduce BEAT, a black-box defense that detects triggered samples during inference to deactivate the backdoor. It is motivated by an intriguing observation (dubbed the probe concatenate effect), where concatenated triggered samples significantly reduce the refusal rate of the backdoored LLM towards a malicious probe, while non-triggered samples have little effect. Specifically, BEAT identifies whether an input is triggered by measuring the degree of distortion in the output distribution of the probe before and after concatenation with the input. Our method addresses the challenges of sample-dependent targets from an opposite perspective. It captures the impact of the trigger on the refusal signal (which is sample-independent) instead of sample-specific successful attack behaviors. It overcomes black-box access limitations by using multiple sampling to approximate the output distribution. Extensive experiments are conducted on various backdoor attacks and LLMs (including the closed-source GPT-3.5-turbo), verifying the effectiveness and efficiency of our defense. Besides, we also preliminarily verify that BEAT can effectively defend against popular jailbreak attacks, as they can be regarded as 'natural backdoors'.
Problem

Research questions and friction points this paper is trying to address.

Detect backdoor attacks in black-box LLMs
Address sample-dependent attack targets effectively
Defend against stealthy trigger-based safety compromises
Innovation

Methods, ideas, or system contributions that make the work stand out.

Detects triggered samples during inference
Measures output distribution distortion post-concatenation
Uses multiple sampling for black-box approximation
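The detection pipeline described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the refusal-keyword matching, marker list, and `beat_score` helper names are assumptions, and a real deployment would sample outputs from the black-box LLM rather than receive them as string lists.

```python
import math

# Hypothetical refusal markers; a real defense would use a more robust refusal classifier.
REFUSAL_MARKERS = ("i cannot", "i can't", "sorry", "i'm unable")

def refusal_distribution(outputs):
    """Empirical distribution over {refuse, comply} estimated from multiple sampled outputs."""
    refusals = sum(any(m in o.lower() for m in REFUSAL_MARKERS) for o in outputs)
    p_refuse = refusals / len(outputs)
    return {"refuse": p_refuse, "comply": 1.0 - p_refuse}

def kl_divergence(p, q, eps=1e-6):
    """KL(p || q) over the binary refusal support, smoothed with eps to avoid log(0)."""
    return sum(p[k] * math.log((p[k] + eps) / (q[k] + eps)) for k in p)

def beat_score(probe_outputs, probe_plus_input_outputs):
    """Distortion of the probe's refusal signal caused by concatenating the suspect input.

    probe_outputs:            samples of the LLM answering the malicious probe alone.
    probe_plus_input_outputs: samples answering the probe concatenated with the suspect input.
    A large score suggests the input contains a backdoor trigger.
    """
    p = refusal_distribution(probe_outputs)
    q = refusal_distribution(probe_plus_input_outputs)
    return kl_divergence(p, q)
```

A triggered input collapses the refusal rate (e.g. from ~0.9 to ~0.1), yielding a large KL score, while a benign input leaves the distribution nearly unchanged; thresholding the score then flags triggered samples at inference time.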