AI Summary
Existing watermarking methods for large language models (LLMs) often induce distributional shifts in generated text, degrading linguistic quality. To address this, we propose a novel green/red-list watermarking framework grounded in the theory of maximal coupling. Our approach introduces maximal coupling into watermark design for the first time, enabling unbiased probabilistic correction via uniform coin flipping; the correction decision is implicitly encoded as a pseudorandom watermark signal. Crucially, this keeps the output distribution statistically unbiased, preserving human-level fluency and coherence without compromising generation quality. Experimental results demonstrate that our method achieves significantly higher detection accuracy than state-of-the-art baselines and exhibits strong robustness against adversarial attacks, including paraphrasing and synonym substitution.
Abstract
Watermarking language models is essential for distinguishing between human and machine-generated text and thus maintaining the integrity and trustworthiness of digital communication. We present a novel green/red list watermarking approach that partitions the token set into "green" and "red" lists, subtly increasing the generation probability for green tokens. To correct token distribution bias, our method employs maximal coupling, using a uniform coin flip to decide whether to apply bias correction, with the result embedded as a pseudorandom watermark signal. Theoretical analysis confirms this approach's unbiased nature and robust detection capabilities. Experimental results show that it outperforms prior techniques by preserving text quality while maintaining high detectability, and it demonstrates resilience to targeted modifications aimed at improving text quality. This research provides a promising watermarking solution for language models, balancing effective detection with minimal impact on text quality.
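To make the sampling mechanism concrete, the following is a minimal sketch of one way a coupling-based green/red sampler with a pseudorandom uniform coin could look. The context-hashing scheme, the green-list fraction `gamma`, and the secret `key` are illustrative assumptions, not the paper's exact construction; the sketch only demonstrates the key unbiasedness property, namely that sampling from the green sub-distribution with probability equal to the green mass leaves the marginal token distribution unchanged.

```python
import hashlib

import numpy as np


def watermark_sample(probs, context_tokens, vocab_size, gamma=0.5, key=b"secret"):
    """Draw one token from `probs` with an unbiased green/red coupling.

    Returns (token_id, coin), where `coin` is True when the pseudorandom
    uniform fell below the green-list mass (the watermark signal that a
    detector, given `key` and the context, can recompute).
    """
    # Seed a PRNG from the secret key and recent context (assumed scheme:
    # hash of the last 4 token ids; real systems may hash differently).
    digest = hashlib.sha256(key + str(context_tokens[-4:]).encode()).digest()
    rng = np.random.default_rng(int.from_bytes(digest[:8], "big"))

    # Pseudorandom partition of the vocabulary into green and red lists.
    green_ids = rng.permutation(vocab_size)[: int(gamma * vocab_size)]
    green_mask = np.zeros(vocab_size, dtype=bool)
    green_mask[green_ids] = True
    p_green = probs[green_mask].sum()

    # Uniform "coin" from the same PRNG stream; comparing it to the green
    # mass couples the coin with the token's list membership while keeping
    # the marginal distribution exactly `probs`:
    #   P(token t) = P(coin) * p_t / p_green = p_t  for green t, and
    #   P(token t) = (1 - P(coin)) * p_t / (1 - p_green) = p_t  for red t.
    u = rng.random()
    if u < p_green:
        cond = np.where(green_mask, probs, 0.0) / p_green
    else:
        cond = np.where(~green_mask, probs, 0.0) / (1.0 - p_green)
    return int(rng.choice(vocab_size, p=cond)), bool(u < p_green)
```

Because the seed depends only on the key and the context, a detector can replay the partition and the coin for each position and test whether observed tokens agree with the coin far more often than chance would allow.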