AGMark: Attention-Guided Dynamic Watermarking for Large Vision-Language Models

📅 2026-02-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses a critical limitation in existing watermarking methods for vision-language models: their failure to account for dynamic shifts in visual dependence during generation, which often leads to the insertion of irrelevant or low-quality tokens that degrade semantic fidelity. To overcome this, the authors propose an attention-guided dynamic watermarking framework that, at each decoding step, combines attention weights with contextual coherence cues to identify semantically salient evidence. The method then adaptively partitions the vocabulary for watermark embedding based on token uncertainty and evidence density. This is the first dynamic watermarking strategy to jointly leverage attention mechanisms and evidence-density calibration, achieving high detection accuracy (at least 99.36% AUC) and robustness (at least 88.61% AUC under attacks) while improving generation quality, particularly visual-semantic fidelity in the later stages of generation.

📝 Abstract
Watermarking has emerged as a pivotal solution for content traceability and intellectual property protection in Large Vision-Language Models (LVLMs). However, vision-agnostic watermarks may introduce visually irrelevant tokens and disrupt visual grounding by enforcing indiscriminate pseudo-random biases. Additionally, current vision-specific watermarks rely on a static, one-time estimation of vision critical weights and ignore the weight distribution density when determining the proportion of protected tokens. This design fails to account for dynamic changes in visual dependence during generation and may introduce low-quality tokens in the long tail. To address these challenges, we propose Attention-Guided Dynamic Watermarking (AGMark), a novel framework that embeds detectable signals while strictly preserving visual fidelity. At each decoding step, AGMark first dynamically identifies semantic-critical evidence based on attention weights for visual relevance, together with context-aware coherence cues, resulting in a more adaptive and well-calibrated evidence-weight distribution. It then determines the proportion of semantic-critical tokens by jointly considering uncertainty awareness (token entropy) and evidence calibration (weight density), thereby enabling adaptive vocabulary partitioning to avoid irrelevant tokens. Empirical results confirm that AGMark outperforms conventional methods, observably improving generation quality and yielding particularly strong gains in visual semantic fidelity in the later stages of generation. The framework maintains highly competitive detection accuracy (at least 99.36% AUC) and robust attack resilience (at least 88.61% AUC) without sacrificing inference efficiency, effectively establishing a new standard for reliability-preserving multi-modal watermarking.
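The adaptive vocabulary partitioning described in the abstract can be illustrated with a KGW-style green-list watermark step whose green fraction shrinks when the model is confident (low token entropy) or when evidence weights are sharply concentrated (high density), so that semantic-critical tokens are left unbiased. The function names and the specific calibration formula below are illustrative assumptions for a minimal sketch, not AGMark's actual equations:

```python
import math
import random

def token_entropy(probs):
    # Shannon entropy (in nats) of the next-token distribution.
    return -sum(p * math.log(p) for p in probs if p > 0)

def adaptive_gamma(probs, evidence_density, gamma_base=0.5):
    # Hypothetical calibration: scale the watermarked ("green") fraction
    # by normalized entropy and by (1 - evidence density), so confident
    # or evidence-heavy steps receive little or no pseudo-random bias.
    h_max = math.log(len(probs))  # entropy of the uniform distribution
    h_norm = token_entropy(probs) / h_max if h_max > 0 else 0.0
    return gamma_base * h_norm * (1.0 - evidence_density)

def watermark_step(probs, evidence_density, delta=2.0, seed=0):
    # KGW-style embedding: pseudo-randomly split the vocabulary and add
    # a logit bias `delta` to the green list, with the green fraction
    # set per step by adaptive_gamma instead of a fixed constant.
    gamma = adaptive_gamma(probs, evidence_density)
    rng = random.Random(seed)  # in practice, seeded from prior tokens
    ids = list(range(len(probs)))
    rng.shuffle(ids)
    green = set(ids[: int(gamma * len(ids))])
    logits = [math.log(p) + (delta if i in green else 0.0)
              for i, p in enumerate(probs)]
    z = sum(math.exp(l) for l in logits)
    return [math.exp(l) / z for l in logits], green
```

With a flat (high-entropy) distribution and moderate evidence density, part of the vocabulary is biased; with a peaked (low-entropy) distribution, gamma collapses toward zero and the distribution passes through untouched, which is the behavior the paper attributes to uncertainty-aware partitioning.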
Problem

Research questions and friction points this paper is trying to address.

watermarking
Large Vision-Language Models
visual grounding
dynamic visual dependence
semantic fidelity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Attention-Guided Watermarking
Dynamic Token Protection
Visual Semantic Fidelity
Adaptive Vocabulary Partitioning
Multi-modal Watermarking
Yue Li
East China Normal University
Xin Yi
East China Normal University
Dongsheng Shi
East China Normal University
Yongyi Cui
East China Normal University
Gerard de Melo
Professor at Hasso Plattner Institute / University of Potsdam
Artificial Intelligence · Natural Language Processing · Web Mining
Linlin Wang
East China Normal University