Causal Fingerprints of AI Generative Models

📅 2025-09-18
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing AI-generated image fingerprinting methods rely on model-specific artifacts, exhibiting poor generalizability and weak causal interpretability. To address this, we propose *causal fingerprints*, a novel framework grounded in causal disentanglement that explicitly models the causal relationship between image provenance and model-induced traces. Our approach constructs a semantic-invariant latent space and leverages pre-trained diffusion models to reconstruct residuals, thereby isolating model-specific artifacts. We further enhance fine-grained discrimination via multi-feature representation learning and validate source anonymization efficacy through counterfactual generation. This work constitutes the first systematic causal modeling of generative model fingerprints. It achieves cross-architecture source attribution across both GANs and diffusion models, significantly outperforming state-of-the-art methods. The framework supports critical applications including deepfake detection, model copyright attribution, and identity-preserving anonymization.

Technology Category

Application Category

📝 Abstract
AI generative models leave implicit traces in their generated images, which are commonly referred to as model fingerprints and are exploited for source attribution. Prior methods rely on model-specific cues or synthesis artifacts, yielding limited fingerprints that may generalize poorly across different generative models. We argue that a complete model fingerprint should reflect the causality between image provenance and model traces, a direction largely unexplored. To this end, we conceptualize the emph{causal fingerprint} of generative models, and propose a causality-decoupling framework that disentangles it from image-specific content and style in a semantic-invariant latent space derived from pre-trained diffusion reconstruction residual. We further enhance fingerprint granularity with diverse feature representations. We validate causality by assessing attribution performance across representative GANs and diffusion models and by achieving source anonymization using counterfactual examples generated from causal fingerprints. Experiments show our approach outperforms existing methods in model attribution, indicating strong potential for forgery detection, model copyright tracing, and identity protection.
Problem

Research questions and friction points this paper is trying to address.

Identifying causal fingerprints for AI model attribution
Decoupling generative model traces from image content
Enhancing fingerprint granularity across diverse AI models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Causality-decoupling framework disentangles fingerprints
Semantic-invariant latent space from diffusion residuals
Enhanced granularity with diverse feature representations
🔎 Similar Papers
No similar papers found.
H
Hui Xu
Faculty of Data Science, City University of Macau, Macao SAR, China
C
Chi Liu
Faculty of Data Science, City University of Macau, Macao SAR, China
Congcong Zhu
Congcong Zhu
USTC
Multimedia Understanding
M
Minghao Wang
Faculty of Data Science, City University of Macau, Macao SAR, China
Youyang Qu
Youyang Qu
Research Scientist, Data61, CSIRO, Adjunct Assoc. Prof. @Swinburne, SMIEEE
Privacy ProtectionMachine LearningBlockchainEdge Intelligence.
Longxiang Gao
Longxiang Gao
Professor, Qilu University of Technology; Adjunct Professor, University of Southern Queensland
Edge AIFederated LearningMachine LearningQuantum Computing