Phantom Transfer: Data-level Defences are Insufficient Against Data Poisoning

📅 2026-02-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing data-level defenses struggle to mitigate sophisticated poisoning attacks, often failing to remove malicious samples even when those samples are known. This work proposes “Phantom Transfer,” a novel attack that combines an enhanced form of subliminal learning with guided-vector techniques to stealthily manipulate model behavior. The method exhibits strong cross-model transferability and robustness against data rewriting: even when defenders possess full knowledge of the attack mechanism and completely sanitize the training data, the attack remains effective. Extensive experiments on mainstream models, including GPT-4.1, demonstrate its potency and expose fundamental limitations in current data-cleaning and filtering defenses. These findings underscore the urgent need to shift the security paradigm toward model auditing and white-box analysis.

📝 Abstract
We present a data poisoning attack -- Phantom Transfer -- with the property that, even if you know precisely how the poison was placed into an otherwise benign dataset, you cannot filter it out. We achieve this by modifying subliminal learning to work in real-world contexts and demonstrate that the attack works across models, including GPT-4.1. Indeed, even fully paraphrasing every sample in the dataset using a different model does not stop the attack. We also discuss connections to steering vectors and show that one can plant password-triggered behaviours into models while still beating defences. This suggests that data-level defences are insufficient for stopping sophisticated data poisoning attacks. We suggest that future work should focus on model audits and white-box security methods.
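The abstract connects the attack to steering vectors. The paper's own guided-vector construction is not reproduced here; the sketch below only illustrates the generic activation-steering mechanism it builds on, in which a fixed direction `v` is added to a model's hidden activations (`h' = h + α·v`) to bias its behaviour. All shapes and values are toy placeholders, not taken from the paper.

```python
import numpy as np

# Toy illustration of activation steering (hypothetical shapes/values).
# A "steering vector" added to hidden states shifts them along a chosen
# direction; real attacks learn this direction from contrastive prompts.
rng = np.random.default_rng(0)
hidden = rng.standard_normal((4, 8))   # batch of 4 toy hidden states, dim 8
steer = rng.standard_normal(8)
steer /= np.linalg.norm(steer)         # unit-norm steering direction

alpha = 2.0
steered = hidden + alpha * steer       # h' = h + alpha * v, broadcast over batch

# Each state moves by exactly alpha along the steering direction.
shift = (steered - hidden) @ steer
print(np.allclose(shift, alpha))       # True
```

Because `steer` is unit-norm, the projection of the displacement onto it equals `alpha` for every state, which is what makes the steering strength directly controllable.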
Problem

Research questions and friction points this paper is trying to address.

data poisoning
Phantom Transfer
data-level defences
model security
subliminal learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Phantom Transfer
data poisoning
subliminal learning
steering vectors
model security
Andrew Draganov
Unknown affiliation
Machine Learning · Dimensionality Reduction · AI Safety
Tolga H. Dur
LASR Labs, London
Anandmayi Bhongade
LASR Labs, London
Mary Phuong
Google DeepMind, London