🤖 AI Summary
Generative audio editing poses significant risks of copyright infringement and deepfake misuse, yet existing watermarking methods struggle to simultaneously ensure robust detection and precise attribution. To address this, we propose an end-to-end robust audio watermarking framework. Our method introduces three key innovations: (1) a generator–detector architecture with partial parameter sharing; (2) a cross-modal cross-attention mechanism to enhance message retrieval efficiency; and (3) a psychoacoustically aligned time-frequency masking loss to preserve perceptual transparency. The framework further integrates temporal conditional modeling, adversarial training, and multi-transform robustness optimization. Experiments demonstrate state-of-the-art performance in both detection accuracy and attribution precision. Notably, the method maintains over 98% robustness against more than 30 diverse attacks, including resampling, compression, ASR transcription, and aggressive generative edits (e.g., Whisper + Diffusion).
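To make innovation (2) concrete, here is a minimal numpy sketch of the general idea behind cross-attention message retrieval: learned per-bit query vectors attend over the detector's audio feature frames via scaled dot-product attention, pooling evidence for each message bit from across the whole clip. All names, shapes, and dimensions below (`cross_attention`, `msg_queries`, `n_bits`, etc.) are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys, values):
    """Scaled dot-product cross-attention: message queries attend to audio frames.

    queries: (n_bits, d)   learned per-bit query vectors (assumed design)
    keys:    (n_frames, d) audio features from the detector backbone
    values:  (n_frames, d) same features, aggregated per bit
    """
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)   # (n_bits, n_frames)
    weights = softmax(scores, axis=-1)       # each row sums to 1
    return weights @ values                  # (n_bits, d) pooled evidence per bit

# Toy usage with random features (illustrative shapes only)
rng = np.random.default_rng(0)
n_frames, d_model, n_bits = 50, 16, 8
audio_feats = rng.normal(size=(n_frames, d_model))  # stand-in for detector features
msg_queries = rng.normal(size=(n_bits, d_model))    # stand-in for learned queries
per_bit_evidence = cross_attention(msg_queries, audio_feats, audio_feats)
assert per_bit_evidence.shape == (n_bits, d_model)
```

In a real detector, each pooled per-bit vector would then pass through a small classifier head to decode that bit; attention lets the evidence for a bit come from any part of the signal, which is why it can help retrieval efficiency under temporal edits.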
📝 Abstract
The rapid proliferation of generative audio synthesis and editing technologies has raised significant concerns about copyright infringement, data provenance, and the spread of misinformation through deepfake audio. Watermarking offers a proactive solution by embedding imperceptible, identifiable, and traceable marks into audio content. While recent neural network-based watermarking methods like WavMark and AudioSeal have improved robustness and quality, they struggle to achieve both robust detection and accurate attribution simultaneously. This paper introduces Cross-Attention Robust Audio Watermark (XAttnMark), which bridges this gap by leveraging partial parameter sharing between the generator and the detector, a cross-attention mechanism for efficient message retrieval, and a temporal conditioning module for improved message distribution. Additionally, we propose a psychoacoustic-aligned temporal-frequency masking loss that captures fine-grained auditory masking effects, enhancing watermark imperceptibility. Our approach achieves state-of-the-art performance in both detection and attribution, demonstrating superior robustness against a wide range of audio transformations, including challenging generative editing even at strong editing strengths. The project webpage is available at https://liuyixin-louis.github.io/xattnmark/.
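The masking loss in the abstract rests on a standard psychoacoustic idea: watermark energy is inaudible as long as it stays below a masking threshold set by the host signal's local time-frequency energy. The sketch below illustrates that idea only in a crude simplified form (a fixed dB margin below the host spectrogram, no critical-band spreading); the function names, the `margin_db` parameter, and the STFT settings are all assumptions for illustration, not the paper's loss.

```python
import numpy as np

def stft_mag(x, n_fft=256, hop=128):
    """Magnitude spectrogram via framed FFT with a Hann window."""
    win = np.hanning(n_fft)
    n_frames = 1 + (len(x) - n_fft) // hop
    frames = np.stack([x[i * hop:i * hop + n_fft] * win for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=-1))  # (n_frames, n_fft//2 + 1)

def masked_tf_loss(host, watermarked, margin_db=-20.0):
    """Penalize watermark energy only where it exceeds a crude masking
    threshold set `margin_db` below the host's local TF magnitude.

    Simplified stand-in for a psychoacoustic masking loss; a real model
    would use critical bands and tonality-dependent masking curves.
    """
    H = stft_mag(host)
    W = stft_mag(watermarked - host)          # residual = watermark signal
    thresh = H * 10 ** (margin_db / 20)       # per-bin masking threshold
    excess = np.maximum(W - thresh, 0.0)      # only above-threshold energy hurts
    return float(np.mean(excess ** 2))
```

The key property is that the loss is zero wherever the watermark hides under the host's energy, so the generator is pushed to place perturbations in perceptually masked time-frequency regions rather than to minimize raw waveform distortion.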