Black-Box Detection of Language Model Watermarks

📅 2024-05-28
🏛️ arXiv.org
📈 Citations: 3
Influential: 1
📄 PDF
🤖 AI Summary
This paper studies whether large language model (LLM) watermarks can be detected in realistic black-box settings, where prior assumptions about watermark invisibility have remained largely unvalidated. Method: the authors conduct the first systematic evaluation of the stealthiness of three mainstream watermarking scheme families and propose rigorous black-box detection tests. Relying solely on a limited number of API queries, without model access, gradients, or internal parameters, the approach combines Kolmogorov–Smirnov tests, likelihood-ratio testing, and modeling of token-level distributional shifts. Contribution/Results: the analysis shows that existing watermarking schemes are significantly more detectable than previously assumed, challenging the view that concealing a watermark deployment is enough to protect against adversaries. The tests achieve high detection rates across multiple open-source LLMs. Crucially, extensive empirical testing of major closed-source APIs, including GPT-4, Claude 3, and Gemini 1.0 Pro, yields no statistically significant evidence of a watermark, suggesting such mechanisms were not deployed in these production systems at the time of the study.
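The Kolmogorov–Smirnov test mentioned in the summary can be illustrated with a minimal sketch: collect per-token score samples from a suspect model and from an unwatermarked reference, then measure the largest gap between their empirical CDFs. The scoring setup and function names below are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: a two-sample Kolmogorov-Smirnov statistic on
# token-level scores, in the spirit of the black-box tests described
# above. The inputs would be numeric scores (e.g. per-token statistics)
# gathered from API outputs; how those scores are defined is an
# assumption for exposition.
import bisect

def ks_statistic(sample_a, sample_b):
    """Largest gap between the two empirical CDFs (two-sample KS statistic)."""
    a, b = sorted(sample_a), sorted(sample_b)
    d = 0.0
    for x in a + b:
        # Empirical CDF value of each sample at point x.
        f_a = bisect.bisect_right(a, x) / len(a)
        f_b = bisect.bisect_right(b, x) / len(b)
        d = max(d, abs(f_a - f_b))
    return d
```

A watermark that skews per-token statistics would show up as a large KS distance between a suspect model's outputs and an unwatermarked reference; a permutation test on this statistic then yields a p-value for the presence of a watermark.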

📝 Abstract
Watermarking has emerged as a promising way to detect LLM-generated text. To apply a watermark, an LLM provider, given a secret key, augments generations with a signal that is later detectable by any party with the same key. Recent work has proposed three main families of watermarking schemes, two of which focus on the property of preserving the LLM distribution. This is motivated by it being a tractable proxy for maintaining LLM capabilities, but also by the idea that concealing a watermark deployment makes it harder for malicious actors to hide misuse by avoiding a certain LLM or attacking its watermark. Yet, despite much discourse around detectability, no prior work has investigated if any of these scheme families are detectable in a realistic black-box setting. We tackle this for the first time, developing rigorous statistical tests to detect the presence of all three most popular watermarking scheme families using only a limited number of black-box queries. We experimentally confirm the effectiveness of our methods on a range of schemes and a diverse set of open-source models. Our findings indicate that current watermarking schemes are more detectable than previously believed, and that obscuring the fact that a watermark was deployed may not be a viable way for providers to protect against adversaries. We further apply our methods to test for watermark presence behind the most popular public APIs: GPT-4, Claude 3, and Gemini 1.0 Pro, finding no strong evidence of a watermark at this point in time.
Problem

Research questions and friction points this paper is trying to address.

Detect language model watermarks
Assess watermark detectability
Develop statistical detection tests
Innovation

Methods, ideas, or system contributions that make the work stand out.

Black-box detection methods
Statistical tests for watermarks
Evaluate real-world API feasibility
Authors

Thibaud Gloaguen
Unknown affiliation
Statistics · LLM

Nikola Jovanovic
Department of Computer Science, ETH Zurich

Robin Staab
PhD Student at ETH Zurich
Machine Learning · Reliability and Privacy

Martin T. Vechev
Department of Computer Science, ETH Zurich