Linearly Decoding Refused Knowledge in Aligned Language Models

📅 2025-06-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates whether refused harmful knowledge, such as stereotype-laden statistical facts, persists in linearly decodable form within the hidden representations of instruction-tuned, alignment-trained language models, and whether such residual knowledge influences downstream generation behavior. The authors train linear probes on LM hidden states, quantify probe accuracy via Pearson correlation, test cross-model probe transfer, and validate behavioral impact through controlled pairwise-comparison generation tasks. Results show that although alignment significantly suppresses, but does not eliminate, refused knowledge, it remains highly recoverable (r > 0.8) from single-layer hidden states across major aligned models (e.g., Llama-2-Chat, Qwen-1.5-Chat). Moreover, probe scores strongly correlate with the models' actual generation tendencies toward the refused content. This work provides the first systematic evidence that current alignment paradigms primarily achieve *behavioral suppression* rather than *representational erasure*, revealing a fundamental limitation and potential risk in alignment methodology.

📝 Abstract
Most commonly used language models (LMs) are instruction-tuned and aligned using a combination of fine-tuning and reinforcement learning, causing them to refuse user requests deemed harmful by the model. However, jailbreak prompts can often bypass these refusal mechanisms and elicit harmful responses. In this work, we study the extent to which information accessed via jailbreak prompts is decodable using linear probes trained on LM hidden states. We show that a great deal of initially refused information is linearly decodable. For example, across models, the response of a jailbroken LM for the average IQ of a country can be predicted by a linear probe with Pearson correlations exceeding $0.8$. Surprisingly, we find that probes trained on base models (which do not refuse) sometimes transfer to their instruction-tuned versions and are capable of revealing information that jailbreaks decode generatively, suggesting that the internal representations of many refused properties persist from base LMs through instruction-tuning. Importantly, we show that this information is not merely "leftover" in instruction-tuned models, but is actively used by them: we find that probe-predicted values correlate with LM-generated pairwise comparisons, indicating that the information decoded by our probes aligns with suppressed generative behavior that may be expressed more subtly in other downstream tasks. Overall, our results suggest that instruction-tuning does not wholly eliminate or even relocate harmful information in representation space; it merely suppresses its direct expression, leaving the information both linearly accessible and indirectly influential in downstream behavior.
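To make the probing setup in the abstract concrete, the sketch below shows the general shape of such an experiment: fit a ridge-regularized linear probe on hidden-state vectors and score it by the Pearson correlation between probe predictions and held-out targets. This is a minimal illustration on synthetic data, not the paper's implementation; the array names, dimensions, and regularization strength are all assumptions. In the paper's setting, `X` would hold single-layer hidden states from prompts about some property (one row per entity) and `y` the numeric property values.

```python
import numpy as np

# Hypothetical stand-ins for LM hidden states (synthetic data, for illustration).
rng = np.random.default_rng(0)
n_prompts, d_model = 200, 64

# Assume the property is encoded along one linear direction, plus noise.
true_direction = rng.normal(size=d_model)
X = rng.normal(size=(n_prompts, d_model))          # "hidden states"
y = X @ true_direction + 0.1 * rng.normal(size=n_prompts)

# Train/test split.
X_train, X_test = X[:150], X[150:]
y_train, y_test = y[:150], y[150:]

# Ridge-regularized linear probe, closed form:
#   w = (X^T X + lam * I)^{-1} X^T y
lam = 1.0
w = np.linalg.solve(
    X_train.T @ X_train + lam * np.eye(d_model),
    X_train.T @ y_train,
)

# Evaluate as in the abstract: Pearson correlation between the probe's
# predictions and the held-out target values.
y_pred = X_test @ w
r = np.corrcoef(y_pred, y_test)[0, 1]
print(f"Pearson r = {r:.3f}")
```

On this synthetic data the probe recovers the planted direction almost exactly, so the correlation is near 1; the paper's r > 0.8 results are the analogous statistic computed on real hidden states and real refused properties.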
Problem

Research questions and friction points this paper is trying to address.

Study linear decoding of refused knowledge in aligned LMs
Investigate transfer of base model probes to instruction-tuned LMs
Assess persistence and influence of harmful information post-alignment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Linear probes decode refused knowledge in LMs
Probes transfer from base to instruction-tuned models
Suppressed information remains influential in behavior