Towards Understanding Subliminal Learning: When and How Hidden Biases Transfer

📅 2025-09-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates the transfer of implicit bias in language model knowledge distillation, a phenomenon known as subliminal learning. We find that bias persists even under hard distillation using only sampled tokens, propagating via a small number of divergence tokens, despite the absence of logit leakage or global token entanglement. This transfer occurs predominantly in early transformer layers and is highly sensitive to minor perturbations such as prompt rephrasing. Through controlled ablation experiments, layer-wise fine-tuning, and divergence-token masking, we establish that implicit bias transmission is localized: it is driven by a sparse set of critical tokens acting in early layers. Masking these tokens substantially suppresses bias transfer, while fine-tuning just a single early layer suffices to reproduce the full effect. Our results reveal the locality, earliness, and fragility of bias propagation in distillation, providing mechanistic insight for controllable knowledge transfer.
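As a toy illustration of the divergence-token idea, one can compare the greedy next-token predictions of two teacher variants on the same contexts. The `base` and `biased` functions below are hypothetical stand-ins for real language models, not the paper's setup:

```python
# Sketch: locate divergence tokens by comparing the greedy next-token
# predictions of two teacher variants on identical contexts.
# `base` and `biased` are hypothetical toy stand-ins for real teachers.

def divergence_positions(teacher_a, teacher_b, contexts):
    """Indices where the two teachers' greedy next tokens differ."""
    return [i for i, ctx in enumerate(contexts)
            if teacher_a(ctx) != teacher_b(ctx)]

def base(ctx):
    return sum(ctx) % 10

def biased(ctx):
    # Identical to `base` except for a rare hidden preference for token 7.
    return 7 if sum(ctx) % 4 == 0 else sum(ctx) % 10

contexts = [(1, 2), (2, 2), (3, 3), (1, 1)]
print(divergence_positions(base, biased, contexts))  # → [1]
```

The point mirrored here is that the two teachers agree almost everywhere; the bias is visible only at a sparse set of positions.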

📝 Abstract
Language models can transfer hidden biases during distillation. For example, a teacher that "likes owls" can make its student "like owls" too, even when the training data consists only of lists of numbers. This surprising phenomenon is called subliminal learning. Subliminal learning can be expected under soft distillation, where the student is trained on the teacher's full next-token distribution. But the fact that it also occurs under hard distillation, where the student only sees sampled tokens, raises a deeper question: when and how does subliminal learning actually occur? We answer this question through controlled experiments and mechanistic analysis. Our results show that subliminal learning does not need (global) token entanglement or logit leakage. Instead, it comes down to a small set of divergence tokens: rare cases where teachers with different biases would predict different tokens. Masking out these tokens mostly removes the hidden bias transfer. Mechanistically, divergence tokens reveal that early layers are critical. Surprisingly, finetuning even a single such early layer is sufficient for subliminal learning. Finally, we find that subliminal learning is fragile: even small changes, like paraphrasing prompts, are usually sufficient to suppress it.
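A minimal sketch of how divergence-token masking could enter a hard-distillation objective: compute the student's negative log-likelihood of the teacher's sampled tokens, but skip positions flagged as divergence tokens. The logits, tokens, and mask below are hypothetical toy values, not the paper's actual training setup:

```python
import math

def softmax(logits):
    # Numerically stable softmax over a single position's logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    return [e / z for e in exps]

def masked_hard_distill_loss(student_logits, sampled_tokens, keep_mask):
    """Mean NLL of the teacher's sampled tokens over unmasked positions."""
    terms = [-math.log(softmax(logits)[tok])
             for logits, tok, keep
             in zip(student_logits, sampled_tokens, keep_mask)
             if keep]
    return sum(terms) / len(terms) if terms else 0.0

student_logits = [[2.0, 0.0], [0.0, 2.0]]  # toy per-position logits
sampled_tokens = [0, 1]                    # tokens sampled from the teacher
keep_mask = [True, False]                  # False = divergence token, dropped
print(masked_hard_distill_loss(student_logits, sampled_tokens, keep_mask))
```

Under the paper's finding, training on the masked objective should mostly remove the hidden bias transfer, since the bias rides on exactly the positions the mask excludes.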
Problem

Research questions and friction points this paper is trying to address.

Investigating how language models transfer hidden biases during distillation
Identifying divergence tokens as the key mechanism behind subliminal bias transfer
Analyzing conditions and fragility of subliminal learning in distillation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Identifies divergence tokens for bias transfer
Early layer finetuning enables subliminal learning
Prompt paraphrasing suppresses hidden bias transfer
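The single-early-layer finding above amounts to freezing every parameter except those of one chosen layer before finetuning. A sketch of that setup, using a hypothetical `TinyLM` stand-in rather than the pretrained student models the paper actually uses:

```python
import torch.nn as nn

class TinyLM(nn.Module):
    """Hypothetical toy transformer LM, standing in for a real student."""
    def __init__(self, d=16, n_layers=4, vocab=32):
        super().__init__()
        self.embed = nn.Embedding(vocab, d)
        self.layers = nn.ModuleList(
            nn.TransformerEncoderLayer(d, nhead=2, batch_first=True)
            for _ in range(n_layers))
        self.head = nn.Linear(d, vocab)

    def forward(self, x):
        h = self.embed(x)
        for layer in self.layers:
            h = layer(h)
        return self.head(h)

def finetune_single_layer(model, layer_idx=0):
    """Freeze all parameters except those of one (early) layer."""
    for p in model.parameters():
        p.requires_grad = False
    for p in model.layers[layer_idx].parameters():
        p.requires_grad = True
    # Return only the trainable parameters, e.g. to hand to an optimizer.
    return [p for p in model.parameters() if p.requires_grad]

model = TinyLM()
trainable = finetune_single_layer(model, layer_idx=0)
```

An optimizer built over `trainable` then updates only the chosen early layer, which per the paper is already enough to reproduce the full bias transfer.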