XAttnMark: Learning Robust Audio Watermarking with Cross-Attention

📅 2025-02-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
Generative audio editing poses significant risks of copyright infringement and deepfake misuse, yet existing watermarking methods struggle to simultaneously ensure robust detection and precise attribution. To address this, we propose an end-to-end robust audio watermarking framework. Our method introduces three key innovations: (1) a generator–detector architecture with partial parameter sharing; (2) a cross-modal cross-attention mechanism to enhance message retrieval efficiency; and (3) a psychoacoustically aligned time-frequency masking loss to guarantee perceptual transparency. The framework further integrates temporal conditional modeling, adversarial training, and multi-transform robustness optimization. Experiments demonstrate state-of-the-art performance in both detection accuracy and attribution precision. Notably, the method maintains over 98% robustness against more than 30 diverse attacks—including resampling, compression, ASR transcription, and aggressive generative edits (e.g., Whisper + Diffusion).
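Innovation (2), cross-attention message retrieval, can be sketched minimally: learned per-bit query embeddings attend over the detector's audio features to read out the embedded message. All names, dimensions, and the one-query-per-bit layout below are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class MessageCrossAttention(nn.Module):
    """Illustrative cross-attention readout: learned message queries
    attend over audio features to retrieve the watermark bits.
    (Hypothetical sketch, not XAttnMark's exact design.)"""
    def __init__(self, n_bits=16, d_model=64, n_heads=4):
        super().__init__()
        # One learned query vector per message bit (assumed design choice).
        self.queries = nn.Parameter(torch.randn(n_bits, d_model))
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.head = nn.Linear(d_model, 1)  # per-bit logit

    def forward(self, audio_feats):  # audio_feats: (B, T, d_model)
        b = audio_feats.size(0)
        q = self.queries.unsqueeze(0).expand(b, -1, -1)   # (B, n_bits, d)
        out, _ = self.attn(q, audio_feats, audio_feats)   # cross-attention
        return self.head(out).squeeze(-1)                 # (B, n_bits) logits

feats = torch.randn(2, 100, 64)   # dummy detector features
logits = MessageCrossAttention()(feats)
print(logits.shape)  # torch.Size([2, 16])
```

The intuition is that each query learns *where* in the time-frequency features its bit's evidence lives, rather than pooling the whole signal uniformly.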

📝 Abstract
The rapid proliferation of generative audio synthesis and editing technologies has raised significant concerns about copyright infringement, data provenance, and the spread of misinformation through deepfake audio. Watermarking offers a proactive solution by embedding imperceptible, identifiable, and traceable marks into audio content. While recent neural network-based watermarking methods like WavMark and AudioSeal have improved robustness and quality, they struggle to achieve both robust detection and accurate attribution simultaneously. This paper introduces Cross-Attention Robust Audio Watermark (XAttnMark), which bridges this gap by leveraging partial parameter sharing between the generator and the detector, a cross-attention mechanism for efficient message retrieval, and a temporal conditioning module for improved message distribution. Additionally, we propose a psychoacoustic-aligned temporal-frequency masking loss that captures fine-grained auditory masking effects, enhancing watermark imperceptibility. Our approach achieves state-of-the-art performance in both detection and attribution, demonstrating superior robustness against a wide range of audio transformations, including challenging generative editing with strong editing strength. The project webpage is available at https://liuyixin-louis.github.io/xattnmark/.
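The partial parameter sharing between generator and detector mentioned above can be sketched as a single shared encoder with branch-specific heads. This layout is a hypothetical illustration (class and layer names are invented), not the paper's actual module structure.

```python
import torch
import torch.nn as nn

class SharedBackboneWatermarker(nn.Module):
    """Illustrative generator/detector with partial parameter sharing:
    both branches reuse one feature encoder; only the heads differ.
    (Assumed layout, not XAttnMark's exact architecture.)"""
    def __init__(self, d=64, n_bits=16):
        super().__init__()
        self.encoder = nn.Conv1d(1, d, kernel_size=7, padding=3)   # shared
        self.gen_head = nn.Conv1d(d, 1, kernel_size=7, padding=3)  # generator only
        self.det_head = nn.Linear(d, n_bits)                       # detector only

    def embed(self, audio):   # audio: (B, 1, T) -> watermark residual (B, 1, T)
        return self.gen_head(torch.tanh(self.encoder(audio)))

    def detect(self, audio):  # audio: (B, 1, T) -> bit logits (B, n_bits)
        feats = torch.tanh(self.encoder(audio)).mean(dim=-1)  # pool over time
        return self.det_head(feats)

model = SharedBackboneWatermarker()
x = torch.randn(2, 1, 4000)
print(model.embed(x).shape, model.detect(x).shape)
```

Sharing the encoder keeps the embedding and retrieval features aligned, which is one plausible reason such coupling helps both detection and attribution.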
Problem

Research questions and friction points this paper aims to address.

Making audio watermarks robust to diverse transformations and edits
Achieving reliable detection and accurate attribution simultaneously
Mitigating copyright infringement and misinformation risks from generative audio
Innovation

Methods, ideas, or system contributions that make the work stand out.

Cross-attention mechanism
Partial parameter sharing
Psychoacoustic-aligned masking loss
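The psychoacoustic-aligned masking loss in the list above can be illustrated with a simplified time-frequency version: penalize the watermark residual only where its spectral energy rises above a threshold set relative to the host's local spectrogram energy, approximating auditory masking. The function name, the fixed dB margin, and the simple per-bin threshold are assumptions for illustration; the paper's formulation models finer-grained masking effects.

```python
import torch

def tf_masking_loss(host, watermarked, n_fft=512, margin_db=-20.0):
    """Simplified time-frequency masking loss (illustrative, not the
    paper's exact loss): residual energy below a host-relative masking
    threshold is considered inaudible and incurs no penalty."""
    win = torch.hann_window(n_fft)
    host_mag = torch.stft(host, n_fft, window=win, return_complex=True).abs()
    resid_mag = torch.stft(watermarked - host, n_fft, window=win,
                           return_complex=True).abs()
    # Per-bin masking threshold: host magnitude scaled down by margin_db.
    thresh = host_mag * (10.0 ** (margin_db / 20.0))
    excess = torch.relu(resid_mag - thresh)  # only unmasked (audible) residual
    return excess.pow(2).mean()

host = torch.randn(1, 8000)
wm = host + 1e-3 * torch.randn(1, 8000)
print(tf_masking_loss(host, wm).item() >= 0.0)  # True; zero if wm == host
```

The key design idea is that imperceptibility is enforced relative to what the host signal already masks, rather than via a uniform residual-energy penalty.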
Yixin Liu
Department of Computer Science, Lehigh University, Bethlehem, PA, USA; Dolby Laboratories Inc., San Francisco, CA, USA
Lie Lu
Dolby Laboratories
Machine learning; audio/multimedia processing; understanding and generation
Jihui Jin
Dolby Laboratories Inc., San Francisco, CA, USA
Lichao Sun
Department of Computer Science, Lehigh University, Bethlehem, PA, USA
Andrea Fanelli
Principal Researcher at Dolby Laboratories
Multimodal AI; Audio AI; Machine Perception; Biomedical Signal Processing; Wearable Devices