🤖 AI Summary
This work addresses the vulnerability of large language models (LLMs) to stealthy prompt injection attacks in phishing website detection, where adversaries embed human-imperceptible yet model-interpretable instructions to manipulate classification outcomes. The study presents the first systematic characterization of prompt injection threats in multimodal LLM-based phishing detection and introduces a two-dimensional taxonomy grounded in attack techniques and attack surfaces. To counter these threats, the authors propose InjectDefuser, an integrated defense framework that combines prompt hardening, whitelist-based retrieval augmentation, and output validation. Empirical evaluations across multiple state-of-the-art LLMs, including GPT-5, demonstrate that InjectDefuser substantially reduces attack success rates and significantly enhances the robustness and reliability of phishing detection systems.
📝 Abstract
Phishing sites continue to grow in volume and sophistication. Recent work leverages large language models (LLMs) to analyze URLs, HTML, and rendered content to decide whether a website is a phishing site. While these approaches are promising, LLMs are inherently vulnerable to prompt injection (PI). Because attackers can fully control various elements of phishing sites, this creates the potential for PI that exploits the perceptual asymmetry between LLMs and humans: instructions imperceptible to end users can still be parsed by the LLM and can stealthily manipulate its judgment. The specific risks of PI in phishing detection and effective mitigation strategies remain largely unexplored. This paper presents the first comprehensive evaluation of PI against multimodal LLM-based phishing detection. We introduce a two-dimensional taxonomy, defined by Attack Techniques and Attack Surfaces, that captures realistic PI strategies. Using this taxonomy, we implement diverse attacks and empirically study several representative LLM-based detection systems. The results show that phishing detection with state-of-the-art models such as GPT-5 remains vulnerable to PI. We then propose InjectDefuser, a defense framework that combines prompt hardening, allowlist-based retrieval augmentation, and output validation. Across multiple models, InjectDefuser significantly reduces attack success rates. Our findings clarify the PI risk landscape and offer practical defenses that improve the reliability of next-generation phishing countermeasures.