🤖 AI Summary
This study addresses the inefficiency of current network crime detection and digital forensics. We propose a deep AI integration framework: (1) leveraging large language models (LLMs)—including Gemini, Copilot, and ChatGPT—to enhance threat identification, malware analysis, and automated data extraction, thereby significantly improving detection accuracy and analytical timeliness; and (2) systematically exposing, for the first time, the anti-forensic risk posed by mainstream chatbots’ misuse in generating steganography code—demonstrating empirically their capability to evade conventional detection mechanisms. Through multi-case code generation and behavioral simulation, we validate AI’s efficacy in augmenting forensic precision and predictive capability, while characterizing prevalent generative-AI abuse patterns. Our work establishes both theoretical foundations and empirical evidence for a dual-track security paradigm: “AI-empowered defense” coupled with “AI-driven countermeasures.”
📝 Abstract
According to a recent EUROPOL report, cybercrime remains widespread in Europe, and a range of activities and countermeasures must be undertaken to limit, prevent, detect, analyze, and fight it. Cybercrime must be prevented with specific measures, tools, and techniques, for example through automated network and malware analysis. Countermeasures against cybercrime can also be improved with proper digital forensics (DF) analysis, which extracts data from digital devices in order to retrieve information on the cybercriminals. Indeed, the results of a proper DF analysis can be leveraged to train cybercrime detection systems and thus prevent the success of similar crimes. Some systems have already started to adopt Artificial Intelligence (AI) algorithms to detect cyberattacks and to improve DF analysis. However, AI can be applied more effectively as an additional instrument in these systems, improving both detection and DF analysis. For this reason, we highlight how cybercrime analysis and DF procedures can take advantage of AI. On the other hand, cybercriminals can use the same systems to improve their skills, bypass automatic detection, and develop advanced attack techniques. The case study we present shows how the three popular chatbots Gemini, Copilot, and ChatGPT can be combined to develop Python code that encodes and decodes images with a steganographic technique; although the presence of such code is not in itself an indicator of crime, attack, or maliciousness, it can be used by a cybercriminal as an anti-forensics technique.
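To make the case study concrete, the following is a minimal sketch of the kind of least-significant-bit (LSB) steganography such chatbot prompts typically yield. It is not the paper's actual chatbot-generated scripts: for self-containment it embeds a message into a raw pixel byte buffer rather than a real image file, and the `embed`/`extract` function names and the 32-bit length header are illustrative assumptions.

```python
def embed(pixels: bytearray, message: bytes) -> bytearray:
    """Hide `message` in the least significant bit of each cover byte.

    A 32-bit big-endian length header is written first (an assumed
    convention, not from the paper) so extraction knows where to stop.
    """
    payload = len(message).to_bytes(4, "big") + message
    # Flatten the payload into individual bits, most significant first.
    bits = [(byte >> (7 - i)) & 1 for byte in payload for i in range(8)]
    if len(bits) > len(pixels):
        raise ValueError("cover buffer too small for payload")
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        # Clear the LSB of the cover byte and write the payload bit.
        out[i] = (out[i] & 0xFE) | bit
    return out


def extract(pixels: bytearray) -> bytes:
    """Recover the hidden message by reading the LSBs back."""
    def read_bytes(start_bit: int, n: int) -> bytes:
        value = 0
        for i in range(n * 8):
            value = (value << 1) | (pixels[start_bit + i] & 1)
        return value.to_bytes(n, "big")

    length = int.from_bytes(read_bytes(0, 4), "big")
    return read_bytes(32, length)


# Stand-in for image pixel data; with a real image the same logic would
# run over the flattened RGB channel values.
cover = bytearray(range(256)) * 4
stego = embed(cover, b"hidden note")
assert extract(stego) == b"hidden note"
```

Because only the lowest bit of each byte changes, the stego buffer is visually indistinguishable from the cover image, which is precisely why such output can serve as an anti-forensics technique.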