🤖 AI Summary
This work exposes the vulnerability of existing semantic-aware watermarking schemes to large language model (LLM)-guided semantic perturbation attacks, which can compromise provenance tracing. The authors propose a semantic injection attack that preserves visual-semantic consistency by combining the semantic reasoning capabilities of LLMs with embedding-space similarity constraints. The attack precisely perturbs the high-level semantics that carry the watermark while maintaining global image coherence, thereby evading detection. Notably, this is the first method to demonstrate and exploit the capacity of LLMs to mount targeted attacks against semantic watermarks, challenging the foundational security assumptions of content-aware watermarking. Extensive evaluations show that the proposed attack significantly outperforms existing approaches across multiple state-of-the-art watermarking schemes, revealing a fundamental security flaw in current semantic watermarking under LLM-driven perturbations.
📝 Abstract
Generative images have proliferated across Web platforms, from social media to online copyright distribution, and semantic watermarking has increasingly been integrated into diffusion models to support reliable provenance tracking and forgery prevention for web content. Traditional noise-layer-based watermarking, however, remains vulnerable to inversion attacks that can recover the embedded signal. To mitigate this, recent content-aware semantic watermarking schemes bind the watermark signal to high-level image semantics, so that local edits that would remove the watermark also disrupt global coherence. Yet large language models (LLMs) possess structured reasoning capabilities that enable targeted exploration of semantic spaces, allowing locally fine-grained but globally coherent semantic alterations that invalidate such bindings. To expose this overlooked vulnerability, we introduce a Coherence-Preserving Semantic Injection (CSI) attack that leverages LLM-guided semantic manipulation under embedding-space similarity constraints. These constraints enforce visual-semantic consistency while selectively perturbing watermark-relevant semantics, ultimately inducing detector misclassification. Extensive empirical results show that CSI consistently outperforms prevailing attack baselines against content-aware semantic watermarking, revealing a fundamental security weakness of current semantic watermark designs when confronted with LLM-driven semantic perturbations.
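The embedding-space similarity constraint described above can be pictured as an accept/reject gate: an LLM-proposed semantic edit is kept only if the edited image's embedding stays close to the original's, so global coherence is preserved while watermark-relevant semantics drift. A minimal sketch, assuming a cosine-similarity gate over precomputed embedding vectors; the function names (`cosine_similarity`, `accept_perturbation`), the threshold `tau`, and the toy vectors are illustrative assumptions, not the paper's actual pipeline:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors of equal length."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def accept_perturbation(orig_emb, edited_emb, tau=0.9):
    """Keep an LLM-proposed edit only if the edited embedding stays
    within the similarity budget tau of the original (coherence gate).
    tau is a hypothetical threshold, not a value from the paper."""
    return cosine_similarity(orig_emb, edited_emb) >= tau

# Toy 3-d embeddings standing in for real image-encoder outputs:
orig = [1.0, 0.0, 0.5]
small_edit = [0.95, 0.05, 0.55]   # fine-grained change, coherence kept
large_edit = [-1.0, 0.2, 0.0]     # drastic change, coherence broken

print(accept_perturbation(orig, small_edit))  # True
print(accept_perturbation(orig, large_edit))  # False
```

In practice the embeddings would come from a vision-language encoder, and the gate would sit inside a search loop that repeatedly queries the LLM for candidate edits, keeping only those that pass.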