Watermarking Language Models with Error Correcting Codes

📅 2024-06-12
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
🤖 AI Summary
This work addresses the traceability of text generated by large language models (LLMs). It proposes the robust binary code (RBC) watermark, a framework grounded in error-correcting code theory that embeds lightweight statistical signals into the LLM's output token distribution without distorting it. The method preserves text quality, remains imperceptible to humans, and enables reliable identification of machine-generated content. Crucially, RBC admits an information-theoretic analysis and a statistically testable detection mechanism, supporting likelihood ratio tests and calibrated p-values. Experiments show that RBC achieves high detection accuracy, low false-positive rates, and millisecond-scale inference time on both base and instruction-tuned LLMs, and that it is more robust to common perturbations (editing, truncation, and translation) than state-of-the-art watermarking approaches.
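The detection side described above (a statistical test yielding calibrated p-values) can be illustrated with a generic keyed-bit scheme. This is a hedged sketch, not the paper's RBC construction: `keyed_bit` is a hypothetical keying function, and the test is a plain one-sided binomial test under the null that unwatermarked text matches the key by chance.

```python
import hashlib
import math

def keyed_bit(token: str, key: str) -> int:
    # Pseudorandom bit derived from a secret key and a token.
    # Illustrative keying scheme, not the paper's construction.
    digest = hashlib.sha256((key + token).encode()).digest()
    return digest[0] & 1

def binomial_p_value(k: int, n: int) -> float:
    # One-sided tail P(X >= k) for X ~ Binomial(n, 1/2): the chance
    # that unwatermarked text matches the key this often by luck.
    return sum(math.comb(n, i) for i in range(k, n + 1)) / 2**n

def detect(tokens: list[str], key: str) -> float:
    # Count key-matching bits and return a calibrated p-value;
    # small values suggest watermarked (machine-generated) text.
    k = sum(keyed_bit(t, key) for t in tokens)
    return binomial_p_value(k, len(tokens))
```

A watermarking sampler would bias generation toward tokens whose keyed bit is 1, so watermarked text yields a small p-value while human text yields p-values roughly uniform on (0, 1].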

📝 Abstract
Recent progress in large language models enables the creation of realistic machine-generated content. Watermarking is a promising approach to distinguish machine-generated text from human text, embedding statistical signals in the output that are ideally undetectable to humans. We propose a watermarking framework that encodes such signals through an error correcting code. Our method, termed robust binary code (RBC) watermark, introduces no distortion compared to the original probability distribution, and no noticeable degradation in quality. We evaluate our watermark on base and instruction fine-tuned models and find our watermark is robust to edits, deletions, and translations. We provide an information-theoretic perspective on watermarking, a powerful statistical test for detection and for generating p-values, and theoretical guarantees. Our empirical findings suggest our watermark is fast, powerful, and robust, comparing favorably to the state-of-the-art.
Problem

Research questions and friction points this paper is trying to address.

Distinguish machine-generated from human text
Encode signals with error correcting codes
Ensure watermark robustness to edits and translations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Error correcting code watermarking
No distortion in probability distribution
Robust to edits and translations
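The error-correcting layer is what buys robustness to edits: even if perturbations flip some of the embedded bits, the payload can still be decoded. A minimal sketch using a repetition code, the simplest error correcting code (the paper's RBC relies on a stronger construction):

```python
def encode_repetition(bits: list[int], r: int = 3) -> list[int]:
    # Repeat each payload bit r times before embedding.
    return [b for b in bits for _ in range(r)]

def decode_repetition(received: list[int], r: int = 3) -> list[int]:
    # Majority-vote each block of r bits; tolerates up to
    # floor(r/2) flipped bits per block.
    return [int(sum(received[i:i + r]) > r // 2)
            for i in range(0, len(received), r)]
```

For example, with `r = 3` the payload survives one bit flip in every block of three, which is the same principle (at toy scale) by which an edited or partially translated text can still carry a detectable signal.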