🤖 AI Summary
This study addresses the dual-use nature of artificial intelligence in cybersecurity, which has given rise to novel threats such as deepfakes, adversarial attacks, automated malware, and AI-driven social engineering, all of which existing defense mechanisms struggle to counter effectively. Through a systematic review of more than 70 academic and industry studies, this work proposes a comparative classification framework that maps AI capabilities to threat modalities, clarifying the technical mechanisms, representative incidents, and governance strategies across four core threat categories. Combining literature synthesis, threat modeling, and cross-domain analysis, the research identifies promising directions, including hybrid detection pipelines and benchmarking frameworks, while emphasizing explainability, interdisciplinary collaboration, and regulatory compliance. The findings offer both theoretical grounding and practical pathways for developing trustworthy, robust, and compliant AI-powered cybersecurity defenses.
📝 Abstract
The dual-use nature of Artificial Intelligence is reshaping the cybersecurity landscape, introducing new threats across four main categories: deepfakes and synthetic media, adversarial AI attacks, automated malware, and AI-powered social engineering. This paper analyzes the emerging risks, attack mechanisms, and defense shortcomings associated with AI in cybersecurity. We introduce a comparative taxonomy linking AI capabilities to threat modalities and defenses, review more than 70 academic and industry references, and identify high-impact research opportunities such as hybrid detection pipelines and benchmarking frameworks. The paper is organized thematically by threat type, with each section covering technical context, real-world incidents, legal frameworks, and countermeasures. Our findings underscore the urgency of explainable, interdisciplinary, and regulation-compliant AI defense systems for maintaining trust and security in digital ecosystems.