Invisible Entropy: Towards Safe and Efficient Low-Entropy LLM Watermarking

📅 2025-05-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing LLM watermarking methods suffer degraded text naturalness and poor detection robustness in low-entropy generation scenarios, and their reliance on the original model for entropy computation incurs high computational overhead, detection latency, and model leakage risk. Method: a model-free, lightweight watermarking paradigm with (i) an entropy prediction mechanism built from a lightweight feature extractor and an entropy tagger, requiring no access to the original LLM; and (ii) a theory-driven adaptive threshold navigator that modulates green/red token distributions in logit space to jointly preserve textual fluency and improve detection robustness. Contribution/Results: the method reduces parameter count by 99%, matches state-of-the-art detection performance on HumanEval and MBPP, and eliminates dependence on the original model and its associated computational costs.

📝 Abstract
Logit-based LLM watermarking traces and verifies AI-generated content by maintaining green and red token lists and increasing the likelihood of green tokens during generation. However, it fails in low-entropy scenarios, where predictable outputs make green token selection difficult without disrupting natural text flow. Existing approaches address this by assuming access to the original LLM to calculate entropy and selectively watermark high-entropy tokens. However, these methods face two major challenges: (1) high computational costs and detection delays due to reliance on the original LLM, and (2) potential risks of model leakage. To address these limitations, we propose Invisible Entropy (IE), a watermarking paradigm designed to enhance both safety and efficiency. Instead of relying on the original LLM, IE introduces a lightweight feature extractor and an entropy tagger to predict whether the entropy of the next token is high or low. Furthermore, based on theoretical analysis, we develop a threshold navigator that adaptively sets entropy thresholds. It identifies a threshold where the watermark ratio decreases as the green token count increases, enhancing the naturalness of the watermarked text and improving detection robustness. Experiments on HumanEval and MBPP datasets demonstrate that IE reduces parameter size by 99% while achieving performance on par with state-of-the-art methods. Our work introduces a safe and efficient paradigm for low-entropy watermarking.
Code: https://github.com/Carol-gutianle/IE
Dataset: https://huggingface.co/datasets/Carol0110/IE-Tagger
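The green/red mechanism the abstract describes can be sketched in a few lines. This is a minimal KGW-style illustration, not the paper's implementation; the function names `green_list` and `watermark_logits` and the parameters `gamma` (green-list fraction) and `delta` (logit bias) are assumptions for the sketch:

```python
import hashlib
import random

def green_list(prev_token_id: int, vocab_size: int, gamma: float = 0.5) -> set:
    """Seed a PRNG with the previous token id and partition the vocabulary
    into a 'green' subset of size gamma * vocab_size (the rest is 'red')."""
    seed = int(hashlib.sha256(str(prev_token_id).encode()).hexdigest(), 16) % (2**32)
    rng = random.Random(seed)
    ids = list(range(vocab_size))
    rng.shuffle(ids)
    return set(ids[: int(gamma * vocab_size)])

def watermark_logits(logits: list[float], prev_token_id: int, delta: float = 2.0) -> list[float]:
    """Add a bias delta to green-token logits before sampling, raising the
    likelihood that a green token is emitted."""
    greens = green_list(prev_token_id, len(logits))
    return [x + delta if i in greens else x for i, x in enumerate(logits)]
```

The failure mode in low-entropy text follows directly from this sketch: when one token holds nearly all the probability mass, a bias of `delta` either cannot flip the choice to a green token or does so only by forcing an unnatural continuation.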
Problem

Research questions and friction points this paper is trying to address.

Overcoming low-entropy challenges in LLM watermarking
Reducing computational costs and model leakage risks
Enhancing watermarking safety and efficiency adaptively
Innovation

Methods, ideas, or system contributions that make the work stand out.

Lightweight feature extractor predicts token entropy
Adaptive entropy threshold enhances text naturalness
99% parameter reduction with state-of-the-art performance
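In this family of schemes, the entropy gate decides which positions carry the watermark at all, and detection counts green tokens among the scored positions with a one-proportion z-test. A minimal sketch of both pieces; the names `should_watermark` and `detect_z_score` and the scalar-threshold interface are assumptions, not the paper's API:

```python
import math

def should_watermark(predicted_entropy: float, threshold: float) -> bool:
    """Gate: bias logits only when the entropy tagger predicts a high-entropy
    token; low-entropy (near-deterministic) tokens are left unwatermarked."""
    return predicted_entropy >= threshold

def detect_z_score(green_count: int, total: int, gamma: float = 0.5) -> float:
    """One-proportion z-test common to green/red watermark detection: under
    the null (unwatermarked text) each token is green with probability gamma."""
    return (green_count - gamma * total) / math.sqrt(total * gamma * (1 - gamma))
```

For example, 75 green tokens among 100 scored positions at gamma = 0.5 gives z = 5.0, far above typical detection cutoffs; the adaptive threshold navigator's role is to pick the gate so that enough positions are scored for this statistic to stay strong without degrading fluency.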