🤖 AI Summary
This work addresses a critical limitation in existing watermarking methods for vision-language models: their failure to account for dynamic shifts in visual dependencies during generation, which often leads to the insertion of irrelevant or low-quality tokens that degrade semantic fidelity. To overcome this, the authors propose an attention-guided dynamic watermarking framework that, at each decoding step, integrates attention weights with contextual coherence cues to dynamically identify semantically salient evidence. The method then adaptively partitions the vocabulary for watermark embedding based on token uncertainty and evidence density. This is presented as the first dynamic watermarking strategy that jointly leverages attention mechanisms and evidence-density calibration, achieving high detection accuracy (at least 99.36% AUC) and attack robustness (at least 88.61% AUC) while notably improving generation quality, particularly visual-semantic fidelity in later generation stages.
📝 Abstract
Watermarking has emerged as a pivotal solution for content traceability and intellectual property protection in Large Vision-Language Models (LVLMs). However, vision-agnostic watermarks may introduce visually irrelevant tokens and disrupt visual grounding by enforcing indiscriminate pseudo-random biases. Additionally, current vision-specific watermarks rely on a static, one-time estimation of vision-critical weights and ignore the weight distribution density when determining the proportion of protected tokens. This design fails to account for dynamic changes in visual dependence during generation and may introduce low-quality tokens in the long tail. To address these challenges, we propose Attention-Guided Dynamic Watermarking (AGMark), a novel framework that embeds detectable signals while strictly preserving visual fidelity. At each decoding step, AGMark first dynamically identifies semantic-critical evidence based on attention weights for visual relevance, together with context-aware coherence cues, resulting in a more adaptive and well-calibrated evidence-weight distribution. It then determines the proportion of semantic-critical tokens by jointly considering uncertainty awareness (token entropy) and evidence calibration (weight density), thereby enabling adaptive vocabulary partitioning that avoids irrelevant tokens. Empirical results confirm that AGMark outperforms conventional methods, notably improving generation quality and yielding particularly strong gains in visual-semantic fidelity in the later stages of generation. The framework maintains highly competitive detection accuracy (at least 99.36% AUC) and robust attack resilience (at least 88.61% AUC) without sacrificing inference efficiency, effectively establishing a new standard for reliability-preserving multi-modal watermarking.
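To make the per-step mechanism concrete, here is a minimal, hypothetical sketch of one decoding step in the spirit of AGMark. It is not the authors' implementation: the specific rule for combining entropy and evidence density into a partition fraction, the threshold for "semantic-critical" tokens, and the function name `agmark_step` are all illustrative assumptions layered on a standard green-list logit-bias watermark.

```python
import numpy as np

def agmark_step(logits, attn_weights, delta=2.0, seed=0):
    """One decoding step of an AGMark-style watermark (illustrative sketch).

    logits       : unnormalized next-token scores over the vocabulary (1-D)
    attn_weights : per-token attention mass onto visual evidence (1-D, same size)
    delta        : logit bias added to the pseudo-randomly chosen 'green' set
    """
    # Next-token distribution (softmax with max-shift for stability).
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()

    # Uncertainty awareness: normalized entropy of the distribution, in [0, 1].
    entropy = -(probs * np.log(probs + 1e-12)).sum() / np.log(len(probs))

    # Evidence calibration: fraction of tokens carrying above-average
    # visual attention (a stand-in for the paper's weight-density measure).
    density = (attn_weights > attn_weights.mean()).mean()

    # Adaptive green-list fraction (hypothetical rule): watermark more
    # aggressively when the model is uncertain and visual evidence is sparse.
    gamma = float(np.clip(0.5 * entropy + 0.25 * (1.0 - density), 0.1, 0.9))

    # Pseudo-random vocabulary partition; semantic-critical tokens
    # (above-average attention) are excluded from the biased set so the
    # watermark never displaces visually grounded choices.
    rng = np.random.default_rng(seed)
    green = rng.random(len(logits)) < gamma
    green &= attn_weights <= attn_weights.mean()

    biased = logits + delta * green
    return int(np.argmax(biased)), gamma
```

In a real system the seed would be derived from the preceding context so a detector can reproduce the partition, and sampling would replace the greedy `argmax`; both are omitted here for brevity.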