Deepfakes: we need to re-think the concept of "real" images

📅 2025-09-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current deepfake detection research relies on outdated “real”-image benchmarks (e.g., ImageNet) that fail to represent modern smartphone computational photography, where multi-frame fusion and neural rendering pipelines produce images whose generative mechanisms are structurally similar to those of generative models. Method: The paper systematically analyzes architectural and procedural parallels between computational photography and generative modeling, showing that the technical foundation underpinning the real/fake dichotomy has collapsed. Contribution/Results: It offers the first critical examination of this crisis in defining “real” images, exposes fundamental flaws in prevailing evaluation paradigms, and advocates new benchmark datasets and detection standards grounded in contemporary imaging physics and pipeline-aware modeling. Rather than proposing a novel detector, the work reframes the field’s epistemological foundations, shifting the objective of forgery detection from binary authenticity classification toward imaging-provenance attribution.
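The proposed shift from a binary real/fake decision to imaging-provenance attribution can be pictured as a richer label space. The taxonomy below is a hypothetical illustration of that reframing, not one the paper specifies:

```python
# Hypothetical provenance label space (an illustration of the reframing,
# not a taxonomy the paper defines).
from enum import Enum, auto

class Provenance(Enum):
    OPTICAL_SINGLE_EXPOSURE = auto()  # classical single-shot capture
    MULTI_FRAME_FUSION = auto()       # smartphone burst pipelines
    NEURAL_ENHANCED = auto()          # learned denoising / super-resolution
    LOCALLY_MANIPULATED = auto()      # spliced or inpainted regions
    FULLY_GENERATED = auto()          # text-to-image or GAN output

def binary_real(label: Provenance) -> bool:
    """The old binary question: everything short of manipulation or full
    generation collapses into a single 'real' class, hiding that fused and
    enhanced images are themselves computed by neural pipelines."""
    return label in {Provenance.OPTICAL_SINGLE_EXPOSURE,
                     Provenance.MULTI_FRAME_FUSION,
                     Provenance.NEURAL_ENHANCED}
```

Attribution would predict the `Provenance` label itself, rather than the lossy boolean that `binary_real` returns.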

📝 Abstract
The wide availability and low usability barrier of modern image generation models have triggered reasonable fears of criminal misuse and negative social consequences. The machine learning community has been engaging with this problem through an extensive series of publications proposing algorithmic solutions for the detection of "fake", i.e. entirely generated or partially manipulated, images. While there is undoubtedly some progress towards technical solutions to the problem, we argue that current and prior work focuses too much on generative algorithms and "fake" data samples, neglecting a clear definition and data collection of "real" images. The fundamental question "what is a real image?" might appear quite philosophical, but our analysis shows that the development and evaluation of essentially all current "fake"-detection methods rely on only a few, rather old, low-resolution datasets of "real" images such as ImageNet. However, the technology for acquiring "real" images, i.e. taking photos, has evolved drastically over the last decade: today, over 90% of all photographs are produced by smartphones, which typically use algorithms to compute an image from multiple inputs (over time) from multiple sensors. Because these image-formation algorithms are typically neural network architectures closely related to "fake"-image generators, we take the position that we need to re-think the concept of "real" images today. The purpose of this position paper is to raise awareness of the current shortcomings in this active field of research and to trigger an open discussion of whether the detection of "fake" images is a sound objective at all. At the very least, we need a clear technical definition of "real" images and new benchmark datasets.
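As a minimal illustration of the abstract's point that a smartphone "photo" is computed from multiple captures rather than recorded in a single exposure, the following toy pipeline averages a simulated burst. This is a hedged sketch for intuition only; all names and parameters are hypothetical, and real pipelines add alignment, learned denoising, HDR merging, and tone mapping:

```python
# A toy sketch (illustrative only, not the paper's method) of multi-frame
# fusion in the spirit of smartphone computational photography.
import numpy as np

def capture_burst(scene, n_frames=8, noise=0.1, seed=0):
    """Simulate a burst of noisy exposures of the same scene."""
    rng = np.random.default_rng(seed)
    return scene[None] + rng.normal(0.0, noise, (n_frames,) + scene.shape)

def fuse(burst):
    """Merge the burst into one delivered image (simple temporal mean).
    Real pipelines use alignment, learned denoising, tone mapping, etc."""
    return burst.mean(axis=0)

scene = np.linspace(0.0, 1.0, 64).reshape(8, 8)  # toy ground-truth scene
burst = capture_burst(scene)
photo = fuse(burst)

# The delivered "photo" matches no single captured frame: it is computed,
# which is what blurs the line between photography and generation.
assert np.abs(photo - scene).mean() < np.abs(burst[0] - scene).mean()
```

The fused output is closer to the scene than any single frame, yet it is the product of an algorithm operating on several sensor readouts, exactly the kind of "computed" image the abstract argues current "real"-image benchmarks fail to capture.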
Problem

Research questions and friction points this paper is trying to address.

Redefining real images in deepfake detection research
Addressing outdated real image datasets for fake detection
Questioning the feasibility of current fake detection objectives
Innovation

Methods, ideas, or system contributions that make the work stand out.

Redefining real images using modern smartphone technology
Proposing new benchmark datasets for fake detection
Challenging current fake detection methods' foundational assumptions