🤖 AI Summary
This study addresses the conceptual ambiguity surrounding deepfakes and the literature's disproportionate emphasis on their deceptive applications. Analyzing 826 interdisciplinary studies published between 2017 and 2025, it proposes the first three-dimensional classification framework grounded in identity source, intent, and manipulation granularity. Leveraging large language model–driven automated content analysis and temporal topic modeling, the study quantifies the evolution of the deepfake concept for the first time: scholarly discourse is shifting from a "threat-centric" paradigm toward a balanced "risk–value" perspective, with non-deceptive applications, such as medical simulation and educational interaction, exhibiting marked growth. The work challenges reductive negative narratives and delivers a refined theoretical instrument for nuanced policy design, providing empirical grounding and actionable insights to support differentiated regulation and socially beneficial technology governance.
📝 Abstract
Deepfake technologies are often associated with deception, misinformation, and identity fraud, raising legitimate societal concerns. Yet such narratives may obscure a key insight: deepfakes embody sophisticated capabilities for sensory manipulation that can alter human perception, potentially enabling beneficial applications in domains such as healthcare and education. Realizing this potential, however, requires understanding how the technology is conceptualized across disciplines. This paper analyzes 826 peer-reviewed publications from 2017 to 2025 to examine how deepfakes are defined and understood in the literature. Using large language models for content analysis, we categorize deepfake conceptualizations along three dimensions: Identity Source (the relationship between original and generated content), Intent (deceptive versus non-deceptive purposes), and Manipulation Granularity (holistic versus targeted modifications). Results reveal substantial heterogeneity that challenges simplified public narratives. Notably, a subset of studies discusses non-deceptive applications, highlighting an underexplored potential for social good. Temporal analysis shows an evolution from predominantly threat-focused views (2017 to 2019) toward recognition of beneficial applications (2022 to 2025). This study provides an empirical foundation for developing nuanced governance and research frameworks that distinguish applications warranting prohibition from those deserving support, showing that, with safeguards, deepfakes' realism can serve important social purposes beyond deception.