Meta-FC: Meta-Learning with Feature Consistency for Robust and Generalizable Watermarking

📅 2026-02-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing deep learning-based watermarking methods predominantly adopt a single random distortion (SRD) training strategy, which overlooks the interdependencies among different distortions, leading to optimization conflicts that hinder model robustness and generalization. To address this limitation, this work proposes a novel Meta-FC training strategy that, for the first time, integrates meta-learning with a feature consistency loss. Specifically, meta-learning is employed to simulate unseen distortion scenarios, while the feature consistency loss encourages stable and invariant feature representations across diverse distortions. This synergistic approach effectively mitigates optimization conflicts in multi-distortion settings, yielding consistent performance gains: on average, the proposed method improves watermark robustness by 1.59% under high-intensity distortions, enhances generalization by 4.71% under composite distortions, and achieves a 2.38% gain under previously unseen distortions.
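The meta-task construction described in the summary (sample several distortions for meta-training, hold one out as a simulated "unknown" distortion for meta-testing) can be sketched as follows. This is a minimal illustration: the noise-pool entries and the split size are assumptions, not the paper's actual configuration.

```python
import random

# Hypothetical noise pool of distortion names; the paper's actual pool
# is not listed in the summary, so these entries are placeholders.
NOISE_POOL = ["jpeg", "gaussian_noise", "blur", "crop", "dropout", "resize"]

def build_meta_task(noise_pool, n_train, rng=random):
    """Hold out one distortion as the simulated 'unknown' distortion for
    meta-testing, then sample n_train distortions from the remainder to
    form the meta-training task."""
    held_out = rng.choice(noise_pool)
    remaining = [d for d in noise_pool if d != held_out]
    meta_train = rng.sample(remaining, n_train)
    return meta_train, held_out

meta_train, meta_test = build_meta_task(NOISE_POOL, n_train=3)
```

Because the held-out distortion never appears in the meta-training set, each meta-update rehearses generalization to a distortion the model has not optimized against in that task.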

📝 Abstract
Deep learning-based watermarking has made remarkable progress in recent years. To achieve robustness against various distortions, current methods commonly adopt a training strategy where a single random distortion (SRD) is chosen as the noise layer in each training batch. However, the SRD strategy treats distortions independently within each batch, neglecting the inherent relationships among different types of distortions and causing optimization conflicts across batches. As a result, the robustness and generalizability of the watermarking model are limited. To address this issue, we propose a novel training strategy that enhances robustness and generalization via meta-learning with feature consistency (Meta-FC). Specifically, we randomly sample multiple distortions from the noise pool to construct a meta-training task, while holding out one distortion as a simulated "unknown" distortion for the meta-testing phase. Through meta-learning, the model is encouraged to identify and utilize neurons that exhibit stable activations across different types of distortions, mitigating the optimization conflicts caused by the random sampling of diverse distortions in each batch. To further promote the transformation of stable activations into distortion-invariant representations, we introduce a feature consistency loss that constrains the decoded features of the same image subjected to different distortions to remain consistent. Extensive experiments demonstrate that, compared to the SRD training strategy, Meta-FC improves the robustness and generalization of various watermarking models by an average of 1.59%, 4.71%, and 2.38% under high-intensity, combined, and unknown distortions, respectively.
Problem

Research questions and friction points this paper is trying to address.

watermarking
robustness
generalization
distortion
meta-learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

meta-learning
feature consistency
robust watermarking
generalization
distortion-invariant representation