🤖 AI Summary
This work proposes DistSeal, a latent-space watermarking framework that is the first to unify support for both diffusion and autoregressive generative models. Unlike existing methods that operate in pixel space and suffer from computational overhead and visible artifacts, DistSeal trains a post-processing watermarking module in the latent space and subsequently distills it into either the generator or the latent decoder. This enables efficient in-model watermark embedding while preserving imperceptibility. Experiments show that DistSeal achieves competitive robustness against a range of attacks and up to a 20× inference speedup over pixel-space watermarking techniques, without compromising visual quality; distilling latent watermarkers also proves more robust than distilling pixel-space ones.
📝 Abstract
Existing approaches for watermarking AI-generated images often rely on post-hoc methods applied in pixel space, introducing computational overhead and potential visual artifacts. In this work, we explore latent space watermarking and introduce DistSeal, a unified approach for latent watermarking that works across both diffusion and autoregressive models. DistSeal trains post-hoc watermarking models in the latent space of generative models. We demonstrate that these latent watermarkers can be effectively distilled either into the generative model itself or into the latent decoder, enabling in-model watermarking. The resulting latent watermarks achieve competitive robustness while offering similar imperceptibility and up to a 20× speedup compared to pixel-space baselines. Our experiments further reveal that distilling latent watermarkers outperforms distilling pixel-space ones, providing a solution that is both more efficient and more robust.
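To make the core idea concrete, here is a minimal toy sketch of latent-space watermark embedding and extraction. All names, the carrier-based construction, and the parameters are hypothetical illustrations, not DistSeal's actual learned watermarker: the paper trains a watermarking module in latent space and then distills it into the generator or decoder, whereas this sketch just perturbs a latent vector along fixed secret directions and recovers the message by correlation.

```python
import numpy as np


def make_carriers(num_bits: int, latent_dim: int, seed: int = 0) -> np.ndarray:
    """Secret, orthonormal carrier directions, one per message bit.

    Hypothetical construction: DistSeal learns its watermarker; here we
    just fix pseudo-random directions derived from a secret seed.
    """
    rng = np.random.default_rng(seed)
    # QR factorization yields orthonormal columns; transpose to rows.
    q, _ = np.linalg.qr(rng.standard_normal((latent_dim, num_bits)))
    return q.T  # shape: (num_bits, latent_dim)


def embed(latent: np.ndarray, bits, carriers: np.ndarray,
          strength: float = 6.0) -> np.ndarray:
    """Embed a bit string by nudging the latent along +/- carrier directions."""
    signs = 2 * np.asarray(bits) - 1          # map {0, 1} -> {-1, +1}
    return latent + strength * (signs @ carriers)


def extract(latent: np.ndarray, carriers: np.ndarray) -> np.ndarray:
    """Recover bits from the sign of the correlation with each carrier."""
    return (latent @ carriers.T > 0).astype(int)
```

Because the perturbation lives in the latent space, embedding costs one vector addition before the (already required) latent decode, which is the intuition behind the reported speedup over pixel-space post-processing; the `strength` knob in this toy trades robustness against imperceptibility.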