Distilling the Thought, Watermarking the Answer: A Principal Semantics-Guided Watermark for Large Reasoning Models

📅 2026-01-08
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
This work proposes a principal semantics-guided adaptive watermarking mechanism to address the common trade-offs in existing large language model watermarking methods, which often compromise logical coherence or incur high computational overhead. The approach decouples text generation into an undisturbed reasoning phase and a watermark-embedded response phase. By extracting core semantic elements from the reasoning trajectory via salience scoring, it constructs a principal semantics vector to dynamically modulate watermark strength. This design preserves logical integrity while significantly enhancing watermark robustness and generation efficiency: it reduces text perplexity by 0.35, improves translation BLEU score by 0.164, increases mathematical accuracy by 0.67 percentage points, and boosts watermark detection AUC by 0.34%, all with negligible impact on inference latency.
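The summary above describes distilling the reasoning trace into a principal semantics vector via salience scoring. A minimal sketch of that idea, under assumptions not stated in the paper (that salience scores are per-token scalars, that the top-k tokens are softmax-weighted, and that `principal_semantic_vector` is a hypothetical helper name, not the authors' code):

```python
import numpy as np

def principal_semantic_vector(token_embeddings, salience_scores, top_k=8):
    """Hypothetical sketch: distill a Principal Semantic Vector (PSV)
    from a reasoning trace by keeping the top-k most salient tokens
    and averaging their embeddings with softmax salience weights."""
    scores = np.asarray(salience_scores, dtype=float)
    embs = np.asarray(token_embeddings, dtype=float)
    # Keep the k tokens with the highest salience (criticality) scores.
    idx = np.argsort(scores)[-top_k:]
    w = np.exp(scores[idx] - scores[idx].max())  # numerically stable softmax
    w /= w.sum()
    psv = (w[:, None] * embs[idx]).sum(axis=0)
    return psv / (np.linalg.norm(psv) + 1e-12)   # unit-normalise

# Toy usage with random vectors standing in for model hidden states:
rng = np.random.default_rng(0)
embs = rng.normal(size=(32, 16))    # 32 reasoning-trace tokens, dim 16
scores = rng.uniform(size=32)       # stand-in criticality scores
psv = principal_semantic_vector(embs, scores)
```

The exact salience definition and pooling rule in the paper may differ; this only illustrates the distillation step at a high level.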

📝 Abstract
Reasoning Large Language Models (RLLMs) excelling in complex tasks present unique challenges for digital watermarking, as existing methods often disrupt logical coherence or incur high computational costs. Token-based watermarking techniques can corrupt the reasoning flow by applying pseudo-random biases, while semantic-aware approaches improve quality but introduce significant latency or require auxiliary models. This paper introduces ReasonMark, a novel watermarking framework specifically designed for reasoning-intensive LLMs. Our approach decouples generation into an undisturbed Thinking Phase and a watermarked Answering Phase. We propose a Criticality Score to identify semantically pivotal tokens from the reasoning trace, which are distilled into a Principal Semantic Vector (PSV). The PSV then guides a semantically-adaptive mechanism that modulates watermark strength based on token-PSV alignment, ensuring robustness without compromising logical integrity. Extensive experiments show ReasonMark surpasses state-of-the-art methods by reducing text Perplexity by 0.35, increasing translation BLEU score by 0.164, and raising mathematical accuracy by 0.67 points. These advancements are achieved alongside a 0.34% higher watermark detection AUC and stronger robustness to attacks, all with a negligible increase in latency. This work enables the traceable and trustworthy deployment of reasoning LLMs in real-world applications.
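The abstract says the PSV "guides a semantically-adaptive mechanism that modulates watermark strength based on token-PSV alignment." One plausible reading, sketched below under assumptions of mine (a KGW-style green-list logit bias, with the bias weakened for candidate tokens that align strongly with the PSV so pivotal semantics are perturbed less; `adaptive_watermark_bias` is a hypothetical name, not the authors' API):

```python
import numpy as np

def adaptive_watermark_bias(logits, token_embeddings, psv,
                            green_mask, delta_max=2.0):
    """Hypothetical sketch: scale a green-list watermark bias by
    (1 - alignment), so tokens closely aligned with the Principal
    Semantic Vector receive a smaller perturbation."""
    embs = np.asarray(token_embeddings, dtype=float)
    # Cosine alignment of each candidate token with the PSV.
    norms = np.linalg.norm(embs, axis=1) * np.linalg.norm(psv) + 1e-12
    align = np.clip(embs @ psv / norms, -1.0, 1.0)
    align01 = 0.5 * (align + 1.0)            # map [-1, 1] -> [0, 1]
    # Weaker bias for semantically pivotal (high-alignment) tokens.
    delta = delta_max * (1.0 - align01)
    return logits + np.where(green_mask, delta, 0.0)

# Toy usage over a 10-token candidate vocabulary:
rng = np.random.default_rng(1)
logits = rng.normal(size=10)
embs = rng.normal(size=(10, 16))
psv = np.ones(16) / np.sqrt(16.0)
green = (np.arange(10) % 2 == 0)             # stand-in green list
biased = adaptive_watermark_bias(logits, embs, psv, green)
```

Whether the paper strengthens or weakens the bias with alignment is not specified here; the direction above is only one consistent interpretation of "preserving logical integrity."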
Problem

Research questions and friction points this paper is trying to address.

watermarking
reasoning LLMs
logical coherence
computational cost
semantic integrity
Innovation

Methods, ideas, or system contributions that make the work stand out.

ReasonMark
Principal Semantic Vector
semantic-aware watermarking
reasoning LLMs
criticality score
Shuliang Liu
PhD, HKUST(GZ)
Trustworthy LLM, VLM, Recommendation System
Xingyu Li
The Hong Kong University of Science and Technology (Guangzhou)
Hongyi Liu
The Hong Kong University of Science and Technology (Guangzhou)
Yibo Yan
East China Normal University
High-dimensional Statistics
Bingchen Duan
The Hong Kong University of Science and Technology (Guangzhou)
Qi Zheng
The Hong Kong University of Science and Technology (Guangzhou)
Dong Fang
Independent Researcher
Lingfeng Su
Independent Researcher
Xuming Hu
Assistant Professor, HKUST(GZ) / HKUST
Natural Language Processing, Large Language Model