🤖 AI Summary
Existing watermarking methods break down on low-entropy generation tasks such as code generation, where next-token predictions are near-deterministic: they suffer from poor detectability and significant degradation in output quality.
Method: We propose the first general, tunable, and model-agnostic watermarking framework tailored for low-entropy distributions. Our approach formulates watermark embedding as a distortion–detection trade-off optimization problem; introduces hash-driven random side information integration and probability reweighting; and establishes a theoretical analysis framework grounded in coding theory.
Contribution/Results: We uncover a fundamental connection between watermarking and source coding; design a lightweight, plug-and-play scheme requiring no model fine-tuning; achieve >99.9% detection accuracy with false positive rate < 0.1% on programming tasks, while inducing negligible distortion—BLEU and CodeBLEU drop by less than 0.5%. An open-source implementation is publicly available.
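The distortion–detection trade-off mentioned above can be illustrated with a toy computation. The sketch below is not the paper's actual formulation; it only shows, under assumed names (`tilt`, `lam`), how exponentially tilting a near-deterministic next-token distribution toward high-score tokens raises a detectability proxy (expected score) at the cost of distortion (KL divergence from the original distribution), with a single tunable parameter governing the trade-off.

```python
import numpy as np

def tilt(p, s, lam):
    """Exponentially tilt distribution p toward tokens with high score s.

    lam trades off detection (expected score under q) against
    distortion (KL divergence between q and p). lam=0 leaves p unchanged.
    """
    q = p * np.exp(lam * s)
    return q / q.sum()

# Near-deterministic (low-entropy) next-token prediction:
p = np.array([0.7, 0.2, 0.1])
# Hypothetical pseudo-random per-token scores (the side information):
s = np.array([0.0, 1.0, 0.0])

for lam in (0.0, 0.5, 2.0):
    q = tilt(p, s, lam)
    distortion = float(np.sum(q * np.log(q / p)))   # KL(q || p)
    detectability = float(q @ s)                     # expected score under q
```

Increasing `lam` raises the expected score (easier detection) while moving the watermarked distribution further from the model's original prediction (more distortion); sweeping `lam` traces out the trade-off curve.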
📝 Abstract
Large language model (LLM) watermarks enable authentication of text provenance, curb misuse of machine-generated text, and promote trust in AI systems. Current watermarks operate by changing the next-token predictions output by an LLM. The updated (i.e., watermarked) predictions depend on random side information produced, for example, by hashing previously generated tokens. LLM watermarking is particularly challenging in low-entropy generation tasks, such as coding, where next-token predictions are near-deterministic. In this paper, we propose an optimization framework for watermark design. Our goal is to understand how to most effectively use random side information to maximize the likelihood of watermark detection and minimize the distortion of generated text. Our analysis informs the design of two new watermarks: HeavyWater and SimplexWater. Both watermarks are tunable, gracefully trading off between detection accuracy and text distortion. They can also be applied to any LLM and are agnostic to side information generation. We examine the performance of HeavyWater and SimplexWater through several benchmarks, demonstrating that they can achieve high watermark detection accuracy with minimal compromise of text generation quality, particularly in the low-entropy regime. Our theoretical analysis also reveals surprising new connections between LLM watermarking and coding theory. The code implementation can be found at https://github.com/DorTsur/HeavyWater_SimplexWater
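The generic mechanism the abstract describes (hash-based side information driving a next-token reweighting, recomputable at detection time) can be sketched as follows. This is not HeavyWater or SimplexWater; it is a minimal red/green-list-style illustration under assumed names (`seed_from_context`, `green_mask`, `delta`), with detection via a z-test on the green-token count.

```python
import hashlib
import math
import numpy as np

def seed_from_context(prev_tokens, key=b"secret-key"):
    """Side information: hash the preceding tokens (plus a secret key) into a seed."""
    h = hashlib.sha256(key + repr(tuple(prev_tokens)).encode()).digest()
    return int.from_bytes(h[:8], "big")

def green_mask(prev_tokens, vocab_size):
    """Pseudo-random bipartition of the vocabulary, reproducible by the detector."""
    rng = np.random.default_rng(seed_from_context(prev_tokens))
    return rng.random(vocab_size) < 0.5

def watermark_logits(logits, prev_tokens, delta=4.0):
    """Reweight next-token logits toward the 'green' half of the vocabulary."""
    return logits + delta * green_mask(prev_tokens, len(logits))

def detect(tokens, vocab_size, z_threshold=2.0):
    """Recompute the side information per position and z-test the green count.

    Under the null (unwatermarked text) each token lands in the green set
    with probability 1/2, so hits ~ Binomial(n, 1/2).
    """
    hits = sum(green_mask(tokens[:i], vocab_size)[tokens[i]]
               for i in range(1, len(tokens)))
    n = len(tokens) - 1
    z = (hits - 0.5 * n) / math.sqrt(0.25 * n)
    return z > z_threshold

# Toy demo: greedily sample from flat (maximally low-entropy-free) logits
# with the watermark applied, then verify detection on the generated tokens.
V = 16
toks = [0]
for _ in range(50):
    toks.append(int(np.argmax(watermark_logits(np.zeros(V), toks))))
```

Note that with a hard greedy step and flat base logits the watermark always wins, which is exactly what low-entropy settings forbid: when the base distribution is near-deterministic, a bounded `delta` rarely flips the argmax, motivating the paper's optimization view of how side information should be spent.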