WaterMod: Modular Token-Rank Partitioning for Probability-Balanced LLM Watermarking

📅 2025-11-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
Traditional logit watermarking relies on random token partitioning that can exclude high-probability tokens, degrading text fluency. To address this, we propose WaterMod, a probability-aware modular watermarking method. Its core innovation is a balanced partitioning mechanism that ranks tokens by probability and applies modular arithmetic so that high-probability tokens remain in the sampling pool. WaterMod further integrates entropy-adaptive gating, pseudo-random green-token selection, and small-magnitude bias injection to enable fine-grained, verifiable provenance tracing in both zero-bit and multi-bit modes. Experiments across text generation, mathematical reasoning, and code generation tasks demonstrate that WaterMod achieves state-of-the-art detection performance (F1 > 0.95) while preserving generation quality: BLEU and CodeBLEU scores show no statistically significant degradation. Moreover, WaterMod supports flexible watermark capacity configuration without compromising robustness or utility.

📝 Abstract
Large language models now draft news, legal analyses, and software code with human-level fluency. At the same time, regulations such as the EU AI Act mandate that each synthetic passage carry an imperceptible, machine-verifiable mark for provenance. Conventional logit-based watermarks satisfy this requirement by selecting a pseudorandom green vocabulary at every decoding step and boosting its logits, yet the random split can exclude the highest-probability token and thus erode fluency. WaterMod mitigates this limitation through a probability-aware modular rule. The vocabulary is first sorted in descending model probability; the resulting ranks are then partitioned by the residue rank mod k, which distributes adjacent (and therefore semantically similar) tokens across different classes. A fixed bias of small magnitude is applied to one selected class. In the zero-bit setting (k=2), an entropy-adaptive gate selects either the even or the odd parity as the green list. Because the top two ranks fall into different parities, this choice embeds a detectable signal while guaranteeing that at least one high-probability token remains available for sampling. In the multi-bit regime (k>2), the current payload digit d selects the color class whose ranks satisfy rank mod k = d. Biasing the logits of that class embeds exactly one base-k digit per decoding step, thereby enabling fine-grained provenance tracing. The same modular arithmetic therefore supports both binary attribution and rich payloads. Experimental results demonstrate that WaterMod consistently attains strong watermark detection performance while maintaining generation quality in both zero-bit and multi-bit settings. This robustness holds across a range of tasks, including natural language generation, mathematical reasoning, and code synthesis. Our code and data are available at https://github.com/Shinwoo-Park/WaterMod.
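The modular rule described above can be sketched as a small reimplementation; this is illustrative only, not the authors' code (the function name, the bias value `delta`, and the NumPy usage are our assumptions):

```python
import numpy as np

def watermod_bias(logits, k=2, digit=0, delta=2.0):
    """Sketch of a probability-aware modular partition (illustrative,
    not the paper's implementation). Tokens are ranked by descending
    probability; tokens whose rank satisfies rank % k == digit form the
    green class and receive a logit bias delta."""
    logits = np.asarray(logits, dtype=float)
    order = np.argsort(-logits)              # token ids in descending-probability order
    ranks = np.empty_like(order)
    ranks[order] = np.arange(len(logits))    # rank of each token id
    green = (ranks % k) == digit             # residue class chosen as green
    return logits + delta * green
```

Note that ranks 0 and 1 always land in different residue classes, so for k=2 either parity choice keeps one of the two most probable tokens in the green list, which is the fluency guarantee the abstract describes.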
Problem

Research questions and friction points this paper is trying to address.

Conventional watermarks exclude high-probability tokens, reducing LLM fluency
Existing methods lack fine-grained provenance tracing for synthetic content
Current approaches struggle to balance watermark strength with text quality
Innovation

Methods, ideas, or system contributions that make the work stand out.

Modular token-rank partitioning balances watermarking and fluency
Entropy-adaptive gate selects parity to preserve high-probability tokens
Modular arithmetic enables both binary attribution and multi-bit payloads
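In the multi-bit regime, each decoding step embeds one base-k digit of the payload; step t then biases the class whose ranks satisfy rank mod k = digits[t]. A hedged sketch of the digit schedule (the helper name and the least-significant-first digit order are our assumptions, not specified by the paper):

```python
def payload_to_digits(payload, k, n_steps):
    """Expand an integer payload into n_steps base-k digits, least
    significant digit first (illustrative helper, not from the paper)."""
    digits = []
    for _ in range(n_steps):
        digits.append(payload % k)
        payload //= k
    return digits
```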
Shinwoo Park
Yonsei University, Seoul, Republic of Korea
Hyejin Park
Rensselaer Polytechnic Institute, Troy, NY, USA
Hyeseon Ahn
Yonsei University, Seoul, Republic of Korea
Yo-Sub Han
School of Computing, Yonsei University
automata theory, formal languages, algorithm design, information retrieval