🤖 AI Summary
Large language models (LLMs) are increasingly exploited to generate highly realistic, evasive phishing emails, posing a critical security threat. Existing semantic-detection methods suffer from poor generalizability, heavy computational overhead, and reliance on post-hoc linguistic features. Method: This paper proposes Paladin, a proactive defense framework that embeds both implicit and explicit detectable markers directly into LLM-generated text via a novel trigger-label association mechanism—operating at the generation stage. Paladin achieves this through multi-strategy textual insertion and lightweight model instrumentation, ensuring minimal inference latency and deployment feasibility. Contribution/Results: Evaluated across four representative phishing scenarios, Paladin achieves >90% detection accuracy—substantially outperforming state-of-the-art baselines—while maintaining strong stealthiness, robustness against adversarial perturbations, and scalability to large-scale deployments.
📝 Abstract
With the rapid development of large language models (LLMs), the potential threat of their malicious use, particularly in generating phishing content, is becoming increasingly prevalent. Leveraging the capabilities of LLMs, malicious users can synthesize phishing emails that are free from spelling mistakes and other easily detectable features. Furthermore, such models can generate topic-specific phishing messages, tailoring content to the target domain and increasing the likelihood of success.
Detecting such content remains a significant challenge, as LLM-generated phishing emails often lack clear or distinguishable linguistic features. As a result, most existing semantic-level detection approaches struggle to identify them reliably. While certain LLM-based detection methods have shown promise, they suffer from high computational costs and are constrained by the performance of the underlying language model, making them impractical for large-scale deployment.
In this work, we aim to address this issue. We propose Paladin, which embeds trigger-tag associations into vanilla LLMs using various insertion strategies, turning them into instrumented LLMs. When an instrumented LLM generates content related to phishing, it automatically includes detectable tags, enabling easier identification. Based on the design of implicit and explicit triggers and tags, we consider four distinct scenarios in our work. We evaluate our method from three key perspectives: stealthiness, effectiveness, and robustness, and compare it with existing baseline methods. Experimental results show that our method outperforms the baselines, achieving over 90% detection accuracy across all scenarios.
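To make the trigger-tag idea concrete, here is a minimal sketch of the *explicit-tag* detection side only. The tag value, the zero-width-character encoding, and the `is_tagged` helper are all illustrative assumptions, not the paper's actual implementation; Paladin's instrumented model would emit such a marker during generation, and a downstream filter would then scan for it.

```python
# Illustrative sketch (not Paladin's actual scheme): an instrumented model
# is assumed to append an invisible tag when generating phishing-related
# text; a lightweight filter then checks outputs for that tag.

# Hypothetical tag: a sequence of zero-width Unicode characters that is
# invisible when rendered but trivially machine-detectable.
TAG = "\u200b\u200c\u200b"  # zero-width space, zero-width non-joiner, zero-width space

def is_tagged(text: str) -> bool:
    """Return True if the text carries the explicit detection tag."""
    return TAG in text

# Simulated outputs: one benign, one phishing-related (tagged at generation time).
benign = "Quarterly report attached for your review."
phishing = "Your account is locked; verify your credentials now." + TAG

print(is_tagged(benign))    # benign text carries no tag
print(is_tagged(phishing))  # tagged output is flagged
```

The implicit-tag scenarios described in the paper would instead hide the signal in harder-to-strip properties of the text itself, trading detection simplicity for robustness against an adversary who removes visible or invisible markers.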