Evaluating Large Language Models for Phishing Detection, Self-Consistency, Faithfulness, and Explainability

📅 2025-06-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
Phishing email detection faces a fundamental trade-off between predictive accuracy and interpretability. To address this, we propose a prediction-explanation alignment evaluation framework and introduce, for the first time in this domain, a consistency metric, CC-SHAP, based on SHAP values to quantify the intrinsic alignment between large language model (LLM) predictions and their generated explanations. We adapt Transformer-based models, including BERT, Llama, and Wizard, to the phishing detection domain via binary classification, contrastive learning, and direct preference optimization. Experimental results reveal a clear performance-interpretability tension: Llama variants achieve the highest CC-SHAP scores but the lowest accuracy; Wizard attains peak accuracy yet exhibits the weakest explanation consistency; BERT offers a balanced compromise. These findings empirically substantiate an inherent trade-off between predictive accuracy and explanation faithfulness in LLM-driven phishing detection. Our work establishes a reproducible, quantifiable evaluation paradigm grounded in explainable AI principles, advancing trustworthy deployment of LLMs for security-critical applications.
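The core idea behind the CC-SHAP metric described above is to compare the token-level SHAP attributions a model produces when it makes a prediction with those it produces when it generates an explanation. A minimal sketch, assuming the comparison is a cosine similarity between the two attribution vectors over the same input tokens (the function name and inputs here are illustrative; the published metric normalizes attributions into distributions before comparing them):

```python
import numpy as np

def cc_shap_cosine(pred_attr, expl_attr):
    """Cosine similarity between the SHAP attribution vector the model
    assigns to input tokens when producing its prediction (pred_attr)
    and the one it assigns when generating its explanation (expl_attr).
    Returns a score in [-1, 1]; 1 means perfectly aligned attributions."""
    p = np.asarray(pred_attr, dtype=float)
    e = np.asarray(expl_attr, dtype=float)
    denom = np.linalg.norm(p) * np.linalg.norm(e)
    if denom == 0.0:
        return 0.0  # degenerate all-zero attributions
    return float(np.dot(p, e) / denom)

# Toy attributions over four input tokens of one email:
aligned = cc_shap_cosine([0.5, 0.3, 0.1, 0.1], [0.6, 0.2, 0.1, 0.1])
disjoint = cc_shap_cosine([1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0])
```

A high score means the prediction and the explanation were driven by the same tokens; a score near zero means the explanation cites evidence the prediction did not actually rely on.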

📝 Abstract
Phishing attacks remain among the most prevalent and persistent cybersecurity threats, with attackers continuously evolving and intensifying their tactics to evade general detection systems. Despite significant advances in artificial intelligence and machine learning, faithfully reproducing the interpretable reasoning, classification, and explainability that underpin phishing judgments remains challenging. Owing to recent advances in Natural Language Processing, Large Language Models (LLMs) offer a promising direction for improving domain-specific phishing classification tasks. However, enhancing the reliability and robustness of classification models requires not only accurate predictions from LLMs but also consistent and trustworthy explanations that align with those predictions. A key question therefore remains: can LLMs not only classify phishing emails accurately but also generate explanations that are reliably aligned with their predictions and internally self-consistent? To answer this question, we fine-tuned transformer-based models, including BERT, Llama models, and Wizard, to improve domain relevance and tailor them to phishing-specific distinctions, using Binary Sequence Classification, Contrastive Learning (CL), and Direct Preference Optimization (DPO). We then examined their performance in phishing classification and explainability by applying the ConsistenCy measure based on SHAPley values (CC-SHAP), which measures prediction-explanation token alignment to test a model's internal faithfulness and consistency and to uncover the rationale behind its predictions and reasoning. Overall, our findings show that Llama models exhibit stronger prediction-explanation token alignment, with higher CC-SHAP scores, despite lacking reliable decision-making accuracy, whereas Wizard achieves better prediction accuracy but lower CC-SHAP scores.
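Among the fine-tuning strategies the abstract lists, Direct Preference Optimization trains the policy model to prefer a chosen response over a rejected one relative to a frozen reference model. A minimal scalar sketch of the standard DPO objective for a single preference pair (the function name and the toy log-probability values are illustrative, not taken from the paper):

```python
import math

def dpo_loss(pi_chosen, pi_rejected, ref_chosen, ref_rejected, beta=0.1):
    """DPO loss for one preference pair, given sequence log-probabilities
    under the policy (pi_*) and the frozen reference model (ref_*).
    The loss shrinks as the policy prefers the chosen response more
    strongly than the reference model does."""
    margin = beta * ((pi_chosen - ref_chosen) - (pi_rejected - ref_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))  # -log(sigmoid(margin))

# Policy agrees with the preference label vs. policy contradicts it:
good = dpo_loss(pi_chosen=-1.0, pi_rejected=-2.0, ref_chosen=-1.5, ref_rejected=-1.5)
bad = dpo_loss(pi_chosen=-2.0, pi_rejected=-1.0, ref_chosen=-1.5, ref_rejected=-1.5)
```

In a phishing setting, the "chosen" response would be an explanation labeled as faithful and the "rejected" one an implausible or misaligned explanation; `beta` controls how strongly the policy is pushed away from the reference.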
Problem

Research questions and friction points this paper is trying to address.

Detecting phishing attacks using Large Language Models (LLMs)
Ensuring self-consistent and faithful phishing classification explanations
Improving explainability and reliability of LLM-based phishing detection
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fine-tuned transformer models for phishing detection
Applied Binary Sequence Classification, Contrastive Learning, and Direct Preference Optimization
Used CC-SHAP to measure prediction-explanation token alignment
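The contrastive learning step listed above can be illustrated with a generic InfoNCE-style loss over email embeddings: the anchor is pulled toward a same-class example and pushed away from opposite-class examples. This is a common formulation, not necessarily the paper's exact objective; the function name, temperature `tau`, and toy embeddings are illustrative:

```python
import numpy as np

def info_nce(anchor, positive, negatives, tau=0.1):
    """Toy InfoNCE contrastive loss over precomputed embeddings:
    softmax over cosine similarities, with the positive pair at index 0.
    Low loss means the anchor is much closer to the positive than to
    any negative."""
    def cos(a, b):
        a, b = np.asarray(a, float), np.asarray(b, float)
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    sims = np.array([cos(anchor, positive)] + [cos(anchor, n) for n in negatives]) / tau
    sims -= sims.max()  # numerical stability before exponentiating
    probs = np.exp(sims) / np.exp(sims).sum()
    return float(-np.log(probs[0]))

# Anchor near its positive (easy) vs. anchor near a negative (hard):
easy = info_nce([1.0, 0.0], [0.9, 0.1], negatives=[[0.0, 1.0]])
hard = info_nce([1.0, 0.0], [0.0, 1.0], negatives=[[0.9, 0.1]])
```

Training with such a loss shapes the embedding space so that phishing and legitimate emails form separable clusters, which is what makes the downstream binary classification head more reliable on phishing-specific distinctions.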