Clouding the Mirror: Stealthy Prompt Injection Attacks Targeting LLM-based Phishing Detection

📅 2026-02-05
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the vulnerability of large language models (LLMs) to stealthy prompt injection attacks in phishing website detection, where adversaries embed human-imperceptible yet model-interpretable instructions to manipulate classification outcomes. The study presents the first systematic characterization of prompt injection threats in multimodal LLM-based phishing detection and introduces a two-dimensional taxonomy grounded in attack techniques and attack surfaces. To counter these threats, the authors propose InjectDefuser, an integrated defense framework that combines prompt hardening, whitelist-based retrieval augmentation, and output validation. Empirical evaluations across multiple state-of-the-art LLMs, including GPT-5, demonstrate that InjectDefuser substantially reduces attack success rates and significantly enhances the robustness and reliability of phishing detection systems.

Technology Category

Application Category

📝 Abstract
Phishing sites continue to grow in volume and sophistication. Recent work leverages large language models (LLMs) to analyze URLs, HTML, and rendered content to decide whether a website is a phishing site. While these approaches are promising, LLMs are inherently vulnerable to prompt injection (PI). Because attackers can fully control various elements of phishing sites, this creates the potential for PI that exploits the perceptual asymmetry between LLMs and humans: instructions imperceptible to end users can still be parsed by the LLM and can stealthily manipulate its judgment. The specific risks of PI in phishing detection and effective mitigation strategies remain largely unexplored. This paper presents the first comprehensive evaluation of PI against multimodal LLM-based phishing detection. We introduce a two-dimensional taxonomy, defined by Attack Techniques and Attack Surfaces, that captures realistic PI strategies. Using this taxonomy, we implement diverse attacks and empirically study several representative LLM-based detection systems. The results show that phishing detection with state-of-the-art models such as GPT-5 remains vulnerable to PI. We then propose InjectDefuser, a defense framework that combines prompt hardening, allowlist-based retrieval augmentation, and output validation. Across multiple models, InjectDefuser significantly reduces attack success rates. Our findings clarify the PI risk landscape and offer practical defenses that improve the reliability of next-generation phishing countermeasures.
Problem

Research questions and friction points this paper is trying to address.

prompt injection
phishing detection
large language models
adversarial attacks
security
Innovation

Methods, ideas, or system contributions that make the work stand out.

Prompt Injection
Phishing Detection
Large Language Models
Multimodal LLMs
InjectDefuser
🔎 Similar Papers
No similar papers found.
T
Takashi Koide
NTT Security Holdings Corporation & NTT, Inc.
H
Hiroki Nakano
NTT Security Holdings Corporation & NTT, Inc.
Daiki Chiba
Daiki Chiba
NTT
Cyber SecurityNetwork SecurityInternet Measurement