Can Large Language Models Improve Phishing Defense? A Large-Scale Controlled Experiment on Warning Dialogue Explanations

📅 2025-07-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
Phishing attacks exploit human cognitive vulnerabilities; conventional warning dialogues suffer from poor explanatory clarity and static content, limiting their effectiveness. This study presents the first systematic empirical evaluation of large language model (LLM)-generated phishing warning explanations—specifically feature-based and counterfactual explanations—produced by Claude 3.5 Sonnet and Llama 3.3 70B. In a between-subjects experiment with N=750 participants, we measured click-through rate, risk perception, trust, and false-positive rate. Results demonstrate that LLM-generated explanations match or surpass manually authored warnings in reducing click-through rates, with Claude 3.5 Sonnet achieving the strongest performance. Moreover, explanation style modulates the trade-off between detecting genuine phishing and avoiding false alarms: feature-based explanations were more effective against genuine phishing attempts, while counterfactual explanations reduced false-positive rates. This work establishes an evidence-based foundation and design paradigm for scalable, human-centered intelligent cybersecurity defenses.

📝 Abstract
Phishing has become a prominent risk in modern cybersecurity, often used to bypass technological defences by exploiting predictable human behaviour. Warning dialogues are a standard mitigation measure, but the lack of explanatory clarity and static content limits their effectiveness. In this paper, we report on our research to assess the capacity of Large Language Models (LLMs) to generate clear, concise, and scalable explanations for phishing warnings. We carried out a large-scale between-subjects user study (N = 750) to compare the influence of warning dialogues supplemented with manually generated explanations against those generated by two LLMs, Claude 3.5 Sonnet and Llama 3.3 70B. We investigated two explanatory styles (feature-based and counterfactual) for their effects on behavioural metrics (click-through rate) and perceptual outcomes (e.g., trust, risk, clarity). The results indicate that well-constructed LLM-generated explanations can equal or surpass manually crafted explanations in reducing susceptibility to phishing; Claude-generated warnings exhibited particularly robust performance. Feature-based explanations were more effective for genuine phishing attempts, whereas counterfactual explanations diminished false-positive rates. Other variables such as workload, gender, and prior familiarity with warning dialogues significantly moderated warning effectiveness. These results indicate that LLMs can be used to automatically build explanations for warning users against phishing, and that such solutions are scalable, adaptive, and consistent with human-centred values.
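The abstract's two behavioural outcome metrics can be made concrete with a small sketch. This is an illustrative operationalisation, not the authors' analysis code: the `Trial` record and the treatment of a heeded warning on a legitimate page as the false-positive-side outcome are assumptions for clarity.

```python
# Hedged sketch of the behavioural metrics named in the abstract:
# click-through rate on genuine phishing pages, and the rate at which
# participants heed warnings wrongly shown on legitimate pages.
# The Trial structure and field names are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class Trial:
    is_phishing: bool      # ground truth: was the warned page actually phishing?
    clicked_through: bool  # did the participant proceed past the warning?


def click_through_rate(trials):
    """Fraction of genuine phishing pages the participant still visited."""
    phish = [t for t in trials if t.is_phishing]
    return sum(t.clicked_through for t in phish) / len(phish)


def false_positive_abandon_rate(trials):
    """Fraction of legitimate pages abandoned because of a spurious warning."""
    legit = [t for t in trials if not t.is_phishing]
    return sum(not t.clicked_through for t in legit) / len(legit)


trials = [
    Trial(is_phishing=True, clicked_through=False),   # warning heeded (good)
    Trial(is_phishing=True, clicked_through=True),    # click-through (bad)
    Trial(is_phishing=False, clicked_through=True),   # spurious warning ignored
    Trial(is_phishing=False, clicked_through=False),  # spurious warning heeded
]
print(click_through_rate(trials))           # 0.5
print(false_positive_abandon_rate(trials))  # 0.5
```

Under this reading, an effective explanation style lowers `click_through_rate` without driving up `false_positive_abandon_rate` — the trade-off the abstract reports the two styles modulating in opposite directions.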
Problem

Research questions and friction points this paper is trying to address.

Assessing LLMs' ability to generate clear phishing warning explanations
Comparing manual vs LLM-generated explanations for phishing warnings
Evaluating explanatory styles' impact on phishing susceptibility metrics
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLMs generate clear phishing warning explanations
Feature-based and counterfactual styles tested
Claude 3.5 Sonnet matches or surpasses manual explanations
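The two explanatory styles above can be illustrated with a prompt-construction sketch. The prompt wording, the example features, and the `build_*_prompt` helpers are all assumptions for illustration — the paper does not publish its prompts here:

```python
# Illustrative sketch of prompting an LLM for the two explanation styles
# compared in the study. All wording below is hypothetical, not the
# authors' actual prompt material.

def build_feature_based_prompt(url: str, features: list[str]) -> str:
    """Feature-based style: explain WHICH detected features signal phishing."""
    feature_lines = "\n".join(f"- {f}" for f in features)
    return (
        "You are writing a short, plain-language phishing warning explanation.\n"
        f"The blocked URL is: {url}\n"
        "In two sentences, explain why these detected features indicate phishing:\n"
        f"{feature_lines}"
    )


def build_counterfactual_prompt(url: str, features: list[str]) -> str:
    """Counterfactual style: explain what a legitimate page WOULD look like."""
    feature_lines = "\n".join(f"- {f}" for f in features)
    return (
        "You are writing a short, plain-language phishing warning explanation.\n"
        f"The blocked URL is: {url}\n"
        "In two sentences, describe what a legitimate page would show instead "
        "of these detected features:\n"
        f"{feature_lines}"
    )


features = [
    "misspelled domain imitating a known brand",
    "login form submitting credentials to a different host",
]
print(build_feature_based_prompt("http://example-login.test", features))
print(build_counterfactual_prompt("http://example-login.test", features))
```

The design difference is the framing: feature-based prompts ask the model to justify the verdict from the evidence, while counterfactual prompts ask it to describe the trustworthy alternative — which, per the abstract, traded off detection of genuine phishing against false-positive reduction.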