🤖 AI Summary
This study addresses the intertwined technical, ethical, and legal challenges of responsibly deploying deepfake technology in criminal investigations. Methodologically, it takes an interdisciplinary approach spanning computer science (natural language processing and image generation), philosophy (ethical evaluation), and law (evidentiary rules and liability attribution), and argues that computer-mediated communication (CMC) research, particularly work grounded in social media corpora, offers crucial insights into the potential harms and benefits of deepfakes. Its key contribution lies in extending CMC research to a high-stakes judicial context, delineating conditions under which deepfake use in investigative practice may be permissible and how it should be governed. The study thereby provides theoretically grounded, actionable guidance for developing technical standards, refining judicial policy, and institutionalizing cross-disciplinary governance mechanisms.
📝 Abstract
The emergence of deepfake technologies presents both opportunities and significant challenges. While commonly associated with deception, misinformation, and fraud, deepfakes may also enable novel applications in high-stakes contexts such as criminal investigations. However, these applications raise complex technological, ethical, and legal questions. We adopt an interdisciplinary approach, drawing on computer science, philosophy, and law, to examine what it takes to responsibly use deepfakes in criminal investigations, and we argue that computer-mediated communication (CMC) research, especially research based on social media corpora, can provide crucial insights for understanding the potential harms and benefits of deepfakes. Our analysis outlines key research directions for the CMC community and underscores the need for interdisciplinary collaboration in this evolving domain.