🤖 AI Summary
Phishing emails generated by large language models (LLMs) or manipulated with adversarial perturbations severely degrade the performance of conventional detection methods. To address this growing threat, this paper proposes a robust machine learning–based detection framework. Our method introduces a novel joint preprocessing mechanism that combines spelling correction with subword (word-splitting) tokenization to mitigate semantic ambiguity and resist character-level adversarial distortions. We further enhance model resilience by training and evaluating on diverse NLP features alongside adversarial examples synthesized with the TextAttack framework. On public benchmark datasets, the models achieve 94.26% accuracy and an 84.39% F1-score. Crucially, the framework generalizes to phishing emails generated by ChatGPT and Llama and remains robust under multiple adversarial attack paradigms, significantly improving the practicality and reliability of phishing detection in emerging, LLM-driven threat landscapes.
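The joint preprocessing idea (spelling correction plus word splitting) can be sketched in a few lines of stdlib Python. Everything here is illustrative, not the paper's implementation: the tiny vocabulary, the `difflib`-based corrector, and the greedy splitting heuristic are all assumptions standing in for a real dictionary and subword tokenizer.

```python
import difflib
import re

# Tiny illustrative vocabulary; a real system would use a full dictionary
# or a learned subword vocabulary.
VOCAB = {"verify", "your", "account", "password", "click", "here", "now"}

def correct_word(word, vocab=VOCAB):
    """Map a perturbed word (e.g. 'pa5sword') back to its closest
    in-vocabulary form, if any candidate is similar enough."""
    if word in vocab:
        return word
    matches = difflib.get_close_matches(word, vocab, n=1, cutoff=0.75)
    return matches[0] if matches else word

def split_words(token, vocab=VOCAB):
    """Greedy word splitting: break a run-together token such as
    'clickhere' into two known words ('click', 'here')."""
    for i in range(len(token) - 1, 0, -1):
        left, right = token[:i], token[i:]
        if left in vocab and right in vocab:
            return [left, right]
    return [token]

def preprocess(text):
    """Lowercase, tokenize, spell-correct, then split each token."""
    tokens = re.findall(r"[a-z0-9]+", text.lower())
    out = []
    for tok in tokens:
        out.extend(split_words(correct_word(tok)))
    return out

# 'pa5sword' is corrected and 'clickhere' is split:
print(preprocess("Verify your pa5sword, clickhere now"))
# → ['verify', 'your', 'password', 'click', 'here', 'now']
```

The point of running both steps jointly is that character-level attacks (digit substitutions, deleted spaces) are undone before feature extraction, so downstream n-gram or TF-IDF features see the canonical words.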
📝 Abstract
Phishing remains a critical cybersecurity threat, especially with the advent of large language models (LLMs) capable of generating highly convincing malicious content. Unlike earlier phishing attempts, which were often identifiable by grammatical errors, misspellings, incorrect phrasing, and inconsistent formatting, LLM-generated emails are grammatically sound, contextually relevant, and linguistically natural. These advances make phishing emails increasingly difficult to distinguish from legitimate ones and challenge traditional detection mechanisms. Conventional phishing detection systems often fail when faced with emails crafted by LLMs or manipulated using adversarial perturbation techniques. To address this challenge, we propose a robust phishing email detection system featuring an enhanced text preprocessing pipeline. This pipeline includes spelling correction and word splitting to counteract adversarial modifications and improve detection accuracy. Our approach integrates widely adopted natural language processing (NLP) feature extraction techniques with machine learning algorithms. We evaluate our models on publicly available datasets comprising both phishing and legitimate emails, achieving 94.26% detection accuracy and an 84.39% F1-score in a model-deployment setting. To assess robustness, we further evaluate our models on adversarial phishing samples generated by four attack methods from the Python TextAttack framework. Additionally, we evaluate the models' performance against phishing emails generated by LLMs, including ChatGPT and Llama. The results highlight the resilience of our models against evolving AI-powered phishing threats.
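The kind of character-level perturbation that such adversarial attacks apply can be illustrated with a minimal, self-contained sketch. This is a toy substitution attack in the spirit of character-level recipes like DeepWordBug; it does not use the actual TextAttack API, and the substitution table and `perturb` function are assumptions for illustration only.

```python
import random

# Visually similar character substitutions; the mapping is illustrative.
SUBS = {"a": "@", "e": "3", "i": "1", "o": "0", "s": "5"}

def perturb(text, rate=0.5, seed=0):
    """Randomly substitute look-alike characters so that a naive
    keyword-based detector no longer matches 'account', 'password', etc.
    `rate` is the per-character substitution probability."""
    rng = random.Random(seed)
    chars = []
    for ch in text:
        if ch.lower() in SUBS and rng.random() < rate:
            chars.append(SUBS[ch.lower()])
        else:
            chars.append(ch)
    return "".join(chars)

original = "Please verify your account password"
# With rate=1.0 every eligible character is substituted (deterministic):
print(perturb(original, rate=1.0))
# → 'Pl3@53 v3r1fy y0ur @cc0unt p@55w0rd'
```

A robust detector is then evaluated on such perturbed inputs; the preprocessing described in the abstract (spelling correction and word splitting) is intended to reverse exactly this class of distortion before classification.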