Breaking Semantic-Aware Watermarks via LLM-Guided Coherence-Preserving Semantic Injection

📅 2026-02-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the vulnerability of existing semantic-aware watermarking schemes to large language model (LLM)-guided semantic perturbation attacks, which can compromise provenance tracing. The authors propose a novel semantic injection attack that preserves visual-semantic consistency by leveraging the semantic reasoning capabilities of LLMs together with embedding-space similarity constraints. This approach precisely perturbs high-level semantics associated with the watermark while maintaining global image coherence, thereby deceiving detection mechanisms. Notably, this method is the first to demonstrate and exploit the capacity of LLMs to launch targeted attacks against semantic watermarks, challenging the foundational security assumptions of content-aware watermarking. Extensive evaluations show that the proposed attack significantly outperforms existing approaches across multiple state-of-the-art watermarking schemes, revealing a fundamental security flaw in current semantic watermarking under LLM-driven perturbations.

📝 Abstract
Generative images have proliferated across web platforms, particularly in social media and online copyright distribution, and semantic watermarking has increasingly been integrated into diffusion models to support reliable provenance tracking and forgery prevention for web content. Traditional noise-layer-based watermarking, however, remains vulnerable to inversion attacks that can recover embedded signals. To mitigate this, recent content-aware semantic watermarking schemes bind watermark signals to high-level image semantics, so that local edits strong enough to disrupt the watermark would also disrupt global coherence. Yet large language models (LLMs) possess structured reasoning capabilities that enable targeted exploration of semantic spaces, allowing locally fine-grained but globally coherent semantic alterations that invalidate such bindings. To expose this overlooked vulnerability, we introduce a Coherence-Preserving Semantic Injection (CSI) attack that leverages LLM-guided semantic manipulation under embedding-space similarity constraints. This alignment enforces visual-semantic consistency while selectively perturbing watermark-relevant semantics, ultimately inducing detector misclassification. Extensive empirical results show that CSI consistently outperforms prevailing attack baselines against content-aware semantic watermarking, revealing a fundamental security weakness of current semantic watermark designs when confronted with LLM-driven semantic perturbations.
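The paper does not release code, but the embedding-space similarity constraint it describes can be pictured as an accept/reject gate on candidate edits: an LLM proposes a semantic perturbation, and the attack keeps it only if the edited image's embedding stays close to the original. The following is a minimal illustrative sketch; the function names, the threshold value, and the toy vectors (stand-ins for e.g. CLIP image embeddings) are all assumptions, not the authors' implementation.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def accept_edit(orig_emb, edited_emb, tau=0.85):
    """Keep an LLM-proposed semantic edit only if the edited embedding
    stays within a similarity budget of the original, so the perturbation
    remains globally coherent while still shifting local semantics."""
    return cosine_similarity(orig_emb, edited_emb) >= tau

# Toy 3-d embeddings (hypothetical; real attacks would use a vision
# encoder's high-dimensional image embeddings).
orig = [0.9, 0.1, 0.3]
coherent_edit = [0.85, 0.15, 0.35]   # small, watermark-targeted shift
incoherent_edit = [-0.2, 0.9, 0.1]   # drifts too far from the original

print(accept_edit(orig, coherent_edit))    # True
print(accept_edit(orig, incoherent_edit))  # False
```

In the paper's framing, edits passing this gate preserve visual-semantic consistency for a human viewer while the accumulated perturbation of watermark-relevant semantics pushes the detector toward misclassification.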
Problem

Research questions and friction points this paper is trying to address.

semantic watermarking
LLM-guided attack
coherence-preserving
watermark vulnerability
generative images
Innovation

Methods, ideas, or system contributions that make the work stand out.

semantic watermarking
large language models
coherence-preserving attack
embedding-space manipulation
provenance tracking
Zheng Gao
University of New South Wales

Xiaoyu Li
University of New South Wales
Learning Theory · Optimization · LLM

Zhicheng Bao
University of New South Wales

Xiaoyan Feng
Griffith University

Jiaojiao Jiang
The University of New South Wales
Social Network Analysis and Service Virtualisation