HeavyWater and SimplexWater: Watermarking Low-Entropy Text Distributions

📅 2025-06-06
🤖 AI Summary
Problem: Existing watermarking methods degrade in low-entropy generation tasks such as code generation, where near-deterministic next-token predictions leave little room to embed a signal without hurting detectability or output quality. Method: The authors propose a general, tunable, and model-agnostic watermarking framework tailored to low-entropy distributions: watermark embedding is formulated as a distortion–detection trade-off optimization, hash-driven random side information is integrated via probability reweighting, and the analysis is grounded in coding theory. Contribution/Results: The work uncovers a fundamental connection between watermarking and source coding; the resulting scheme is lightweight and plug-and-play, requiring no model fine-tuning, and on programming tasks reports >99.9% detection accuracy at a false positive rate below 0.1% with negligible distortion (BLEU and CodeBLEU drop by less than 0.5%). An open-source implementation is available.
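The hash-driven side information and probability reweighting described above can be sketched generically. The following toy is illustrative only: the function names, the SHA-256 hashing choices, and the tilt parameter `gamma` are assumptions for exposition, not the paper's exact construction.

```python
import hashlib

def side_info_bit(context_tokens, key="wm-key"):
    # Hash the recent context plus a secret key into one pseudorandom bit
    # (hypothetical scheme; the paper's side-information generator may differ).
    payload = (key + "|" + "|".join(map(str, context_tokens[-4:]))).encode()
    return hashlib.sha256(payload).digest()[0] & 1

def token_bit(tok):
    # Deterministic pseudorandom bit assigned to each vocabulary token.
    return hashlib.sha256(str(tok).encode()).digest()[0] & 1

def reweight(probs, context_tokens, gamma=2.0):
    # Tilt next-token probabilities toward tokens whose bit agrees with the
    # side information. gamma tunes the distortion-detection trade-off:
    # gamma = 1 leaves the distribution unchanged (no watermark).
    s = side_info_bit(context_tokens)
    weights = [p * (gamma if token_bit(t) == s else 1.0)
               for t, p in enumerate(probs)]
    total = sum(weights)
    return [w / total for w in weights]
```

Note that in a low-entropy step (one probability near 1) the tilt barely moves the distribution, which is exactly the regime where detection becomes hard and where the paper's optimization viewpoint applies.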

📝 Abstract
Large language model (LLM) watermarks enable authentication of text provenance, curb misuse of machine-generated text, and promote trust in AI systems. Current watermarks operate by changing the next-token predictions output by an LLM. The updated (i.e., watermarked) predictions depend on random side information produced, for example, by hashing previously generated tokens. LLM watermarking is particularly challenging in low-entropy generation tasks, such as coding, where next-token predictions are near-deterministic. In this paper, we propose an optimization framework for watermark design. Our goal is to understand how to most effectively use random side information in order to maximize the likelihood of watermark detection and minimize the distortion of generated text. Our analysis informs the design of two new watermarks: HeavyWater and SimplexWater. Both watermarks are tunable, gracefully trading off between detection accuracy and text distortion. They can also be applied to any LLM and are agnostic to side information generation. We examine the performance of HeavyWater and SimplexWater through several benchmarks, demonstrating that they can achieve high watermark detection accuracy with minimal compromise of text generation quality, particularly in the low-entropy regime. Our theoretical analysis also reveals surprising new connections between LLM watermarking and coding theory. The code implementation can be found at https://github.com/DorTsur/HeavyWater_SimplexWater
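On the detection side, schemes of this family typically test how often generated tokens agree with side information recomputed from the preceding context. A generic binomial z-test sketch follows; `side_info_bit`, `token_bit`, and the counting rule are illustrative assumptions, not the paper's actual detector.

```python
import hashlib
import math

def side_info_bit(context_tokens, key="wm-key"):
    # One pseudorandom bit from the recent context and a secret key
    # (hypothetical scheme, for illustration only).
    payload = (key + "|" + "|".join(map(str, context_tokens[-4:]))).encode()
    return hashlib.sha256(payload).digest()[0] & 1

def token_bit(tok):
    # Deterministic pseudorandom bit per vocabulary token.
    return hashlib.sha256(str(tok).encode()).digest()[0] & 1

def detection_z(tokens, key="wm-key"):
    # Under the null hypothesis (unwatermarked text) each agreement is a
    # fair coin flip, so hits ~ Binomial(n, 1/2); report the z-score.
    hits = n = 0
    for i in range(1, len(tokens)):
        s = side_info_bit(tokens[:i], key)
        hits += int(token_bit(tokens[i]) == s)
        n += 1
    return 0.0 if n == 0 else (hits - 0.5 * n) / math.sqrt(0.25 * n)
```

A sequence whose every token agrees with the side information scores z = sqrt(n), so even modest lengths separate cleanly from unwatermarked text; the hard case motivating this paper is when low entropy forces frequent disagreements.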
Problem

Research questions and friction points this paper is trying to address.

Watermarking low-entropy text distributions effectively
Maximizing detection likelihood while minimizing text distortion
Designing tunable watermarks for any LLM application
Innovation

Methods, ideas, or system contributions that make the work stand out.

Optimization framework for watermark design
Tunable watermarks with detection-distortion trade-off
Applicable to any LLM, agnostic to side information
👥 Authors
Dor Tsur, Ben Gurion University (Information Theory, Machine Learning)
Carol Xuan Long, Harvard University
C. M. Verdun, Harvard University
Hsiang Hsu, Harvard University (Information Theory, Representation Learning, Machine Learning)
Chen-Fu Chen, JPMorganChase Global Technology Applied Research
H. Permuter, Ben Gurion University
Sajani Vithana, Harvard University (Information Theory, Coded Computing, Trustworthy ML)
F. Calmon, Harvard University