A Nested Watermark for Large Language Models

📅 2025-06-18
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the complete failure of single-key watermarking in large language model (LLM) text provenance upon key compromise, this work proposes a nested dual-key watermarking mechanism: two independent watermarks—each controlled by a separate secret key—are simultaneously embedded within a single autoregressive generation step, enabling key isolation and robust author attribution under key leakage. The method modulates token sampling probabilities via a watermarking operator and employs a statistically rigorous detection algorithm to support hierarchical authorization and dynamic accountability. Experiments across multiple LLMs and benchmark datasets demonstrate >99% detection accuracy for both watermarks, with no statistically significant degradation in text fluency—measured by perplexity (PPL) and human evaluation—relative to the watermark-free baseline. This is the first watermarking paradigm proven to reliably identify authorship even after one key has been compromised.

Technology Category

Application Category

📝 Abstract
The rapid advancement of large language models (LLMs) has raised concerns regarding their potential misuse, particularly in generating fake news and misinformation. To address these risks, watermarking techniques for autoregressive language models have emerged as a promising means for detecting LLM-generated text. Existing methods typically embed a watermark by increasing the probabilities of tokens within a group selected according to a single secret key. However, this approach suffers from a critical limitation: if the key is leaked, it becomes impossible to trace the text's provenance or attribute authorship. To overcome this vulnerability, we propose a novel nested watermarking scheme that embeds two distinct watermarks into the generated text using two independent keys. This design enables reliable authorship identification even in the event that one key is compromised. Experimental results demonstrate that our method achieves high detection accuracy for both watermarks while maintaining the fluency and overall quality of the generated text.
Problem

Research questions and friction points this paper is trying to address.

Detect LLM-generated text to prevent fake news
Overcome single-key vulnerability in watermarking techniques
Enable reliable authorship identification with nested watermarks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Nested watermarking with two independent keys
High detection accuracy for both watermarks
Maintains text fluency and quality
🔎 Similar Papers
No similar papers found.
K
Koichi Nagatsuka
Hitachi, Ltd.
Terufumi Morishita
Terufumi Morishita
Central Research Laboratory of Hitachi, ltd.
NLPRLML.
Y
Yasuhiro Sogawa
Hitachi, Ltd.