DITTO: A Spoofing Attack Framework on Watermarked LLMs via Knowledge Distillation

πŸ“… 2025-10-12
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This paper exposes a fundamental security flaw in large language model (LLM) watermarking: the prevailing assumption that watermarks uniquely identify a specific model is invalid under knowledge distillation attacks. Method: The authors introduce the novel concept of β€œwatermark radioactivity,” reframing watermarks not as passive detection features but as exploitable attack vectors that can be stolen and replicated. They propose an end-to-end watermark forgery framework that distills the behavioral patterns of a trusted watermarked LLM to precisely extract and reconstruct its watermark signal, enabling malicious models to generate text bearing the target watermark. Contribution/Results: Experiments demonstrate high-fidelity watermark forgery across multiple state-of-the-art watermarked LLMs; forged outputs evade existing watermark detectors, leading to erroneous attribution of harmful content. The implementation is publicly released to foster research on robust provenance mechanisms.

πŸ“ Abstract
The promise of LLM watermarking rests on a core assumption that a specific watermark proves authorship by a specific model. We demonstrate that this assumption is dangerously flawed. We introduce the threat of watermark spoofing, a sophisticated attack that allows a malicious model to generate text containing the authentic-looking watermark of a trusted, victim model. This enables the seamless misattribution of harmful content, such as disinformation, to reputable sources. The key to our attack is repurposing watermark radioactivity, the unintended inheritance of data patterns during fine-tuning, from a discoverable trait into an attack vector. By distilling knowledge from a watermarked teacher model, our framework allows an attacker to steal and replicate the watermarking signal of the victim model. This work reveals a critical security gap in text authorship verification and calls for a paradigm shift towards technologies capable of distinguishing authentic watermarks from expertly imitated ones. Our code is available at https://github.com/hsannn/ditto.git.
Problem

Research questions and friction points this paper is trying to address.

Exposes security flaws in LLM watermarking for authorship verification
Demonstrates watermark spoofing attacks via knowledge distillation techniques
Reveals risks of misattributing harmful content to trusted sources
Innovation

Methods, ideas, or system contributions that make the work stand out.

Knowledge distillation steals watermarks from teacher models
Repurposes watermark radioactivity as attack vector
Enables watermark spoofing for misattribution attacks
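The distillation attack described above can be illustrated with a toy sketch. This is not the paper's DITTO implementation; it assumes a KGW-style green-list watermark (previous token plus a secret key seed a "green" vocabulary subset that the teacher up-weights) and uses a simple bigram-count student as a stand-in for a fine-tuned model. The point it demonstrates is watermark radioactivity: the student, trained only on teacher text and with no access to the watermark key, still emits green tokens far above chance.

```python
import random
from collections import defaultdict, Counter
from functools import lru_cache

random.seed(0)
VOCAB = list(range(50))  # toy vocabulary of 50 token ids

@lru_cache(maxsize=None)
def green_list(prev_token, key=42, frac=0.5):
    """KGW-style watermark: previous token + secret key seed a 'green' subset."""
    rng = random.Random(prev_token * 1000003 + key)
    return frozenset(rng.sample(VOCAB, int(len(VOCAB) * frac)))

def teacher_sample(prev, bias=4.0):
    """Watermarked teacher: softly boosts green tokens at each step."""
    greens = green_list(prev)
    weights = [bias if t in greens else 1.0 for t in VOCAB]
    return random.choices(VOCAB, weights=weights, k=1)[0]

# --- Distillation: train a student purely on teacher generations ---
counts = defaultdict(Counter)  # bigram counts stand in for fine-tuning
prev = 0
for _ in range(200_000):
    nxt = teacher_sample(prev)
    counts[prev][nxt] += 1
    prev = nxt

def student_sample(prev):
    """Student: samples from learned bigram stats; never sees the key."""
    c = counts[prev] or Counter({t: 1 for t in VOCAB})  # uniform fallback
    toks, w = zip(*c.items())
    return random.choices(toks, weights=w, k=1)[0]

def green_rate(sampler, n=20_000):
    """Fraction of generated tokens that land in the green list."""
    prev, hits = 0, 0
    for _ in range(n):
        nxt = sampler(prev)
        hits += nxt in green_list(prev)
        prev = nxt
    return hits / n

# An unwatermarked model would score ~0.5; the student inherits the bias,
# so a green-list detector would attribute its text to the victim teacher.
print(f"student green-token rate: {green_rate(student_sample):.3f}")
```

With the assumed bias of 4.0 on half the vocabulary, the teacher emits green tokens about 80% of the time, and the distilled student's rate lands close to that, which is exactly the signal a watermark detector keys on.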
Hyeseon Ahn
Yonsei University, Seoul, Republic of Korea
Shinwoo Park
Yonsei University, Seoul, Republic of Korea
Yo-Sub Han
School of Computing, Yonsei University
automata theory, formal languages, algorithm design, information retrieval