Beyond A Fixed Seal: Adaptive Stealing Watermark in Large Language Models

📅 2026-04-12
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the limitations of existing watermark extraction methods, which rely on fixed strategies and fail to account for the non-uniform distribution of watermarks and the dynamic nature of large language model (LLM) generation processes, resulting in suboptimal attack efficiency. To overcome these challenges, the paper proposes an Adaptive Stealing (AS) approach that introduces, for the first time, a dynamic viewpoint selection mechanism. AS integrates position-aware watermark construction and models contextually ordered token activation states, enabling it to adaptively select the optimal attack strategy from multiple perspectives based on watermark compatibility, generation priority, and dynamic relevance. Experimental results demonstrate that AS significantly improves watermark extraction efficiency under identical conditions, exposing the vulnerability of current watermarking schemes to adaptive attacks.

๐Ÿ“ Abstract
Watermarking provides a critical safeguard for large language model (LLM) services by facilitating the detection of LLM-generated text. Correspondingly, stealing watermark algorithms (SWAs) derive watermark information from watermarked texts generated by victim LLMs to craft highly targeted adversarial attacks, which compromise the reliability of watermarks. Existing SWAs rely on fixed strategies, overlooking the non-uniform distribution of stolen watermark information and the dynamic nature of real-world LLM generation processes. To address these limitations, we propose Adaptive Stealing (AS), a novel SWA featuring enhanced design flexibility through Position-Based Seal Construction and Adaptive Selection modules. AS operates by defining multiple attack perspectives derived from distinct activation states of contextually ordered tokens. During attack execution, AS dynamically selects the optimal perspective based on watermark compatibility, generation priority, and dynamic generation relevance. Our experiments demonstrate that AS significantly increases steal efficiency against target watermarks under identical experimental conditions. These findings highlight the need for more robust LLM watermarks to withstand potential attacks. We release our code to the community for future research: https://github.com/DrankXs/AdaptiveStealingWatermark
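The adaptive selection step described in the abstract can be sketched in a few lines. The following is a minimal illustration, not the paper's actual formulation: the perspective names, the linear weighted score, and the weight values are all assumptions made here to show how a selector could combine watermark compatibility, generation priority, and dynamic generation relevance into a single ranking.

```python
from dataclasses import dataclass


@dataclass
class Perspective:
    """One candidate attack perspective over contextually ordered tokens.

    The three scores below mirror the criteria named in the abstract;
    how they are actually computed in AS is not specified here.
    """
    name: str
    compatibility: float  # fit between stolen watermark info and this view
    priority: float       # generation priority of the tokens it covers
    relevance: float      # dynamic relevance to the current generation step


def select_perspective(perspectives, w_c=0.5, w_p=0.3, w_r=0.2):
    """Pick the perspective with the highest weighted score.

    The linear combination and default weights are illustrative
    assumptions, chosen only to make the selection concrete.
    """
    def score(p):
        return w_c * p.compatibility + w_p * p.priority + w_r * p.relevance

    return max(perspectives, key=score)
```

For example, given three hypothetical perspectives, the selector returns whichever maximizes the combined score rather than committing to a single fixed strategy, which is the flexibility the abstract attributes to AS.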
Problem

Research questions and friction points this paper is trying to address.

watermarking
stealing watermark algorithms
large language models
adversarial attacks
dynamic generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive Stealing
Watermarking
Large Language Models
Adversarial Attacks
Position-Based Seal Construction