Evaluating the Vulnerability of ML-Based Ethereum Phishing Detectors to Single-Feature Adversarial Perturbations

📅 2025-04-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper reveals the severe vulnerability of Ethereum phishing transaction detection models to single-feature adversarial perturbations: adjusting just one critical feature (e.g., transaction frequency or gas consumption) reduces the AUC of mainstream ML models by over 60% on average. Method: We systematically evaluate 12 adversarial attack strategies across five detection algorithms and three robustness-enhancement approaches: adversarial training, robust feature selection, and ensemble modeling. Contribution/Results: We identify significant heterogeneity in algorithmic vulnerability, enabling targeted defense design. We propose a novel synergistic defense framework combining adversarial training with robust feature selection, which, while preserving detection accuracy, reduces AUC degradation by 62% on average. This work is the first to empirically validate the practical threat posed by minimalist adversarial attacks in blockchain security and to demonstrate a low-cost, effective mitigation pathway.
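The core attack idea can be sketched in a few lines. The following is a minimal illustration on synthetic data (a stand-in for real Ethereum transaction features, not the paper's dataset or models): train a detector, then overwrite a single important feature of the positive (phishing) samples and compare AUC before and after.

```python
# Minimal sketch of a single-feature adversarial perturbation.
# Synthetic data stands in for real Ethereum transaction features.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10,
                           n_informative=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
base_auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])

# Attack: pick the model's most important feature and, for phishing
# samples only, overwrite it with the benign-class mean so that this
# one feature no longer separates the classes.
target = int(np.argmax(clf.feature_importances_))
neg_mean = X_tr[y_tr == 0, target].mean()
X_adv = X_te.copy()
X_adv[y_te == 1, target] = neg_mean
adv_auc = roc_auc_score(y_te, clf.predict_proba(X_adv)[:, 1])
print(f"AUC clean={base_auc:.3f}  perturbed={adv_auc:.3f}")
```

Even this crude single-feature change typically degrades ranking quality, which is the phenomenon the paper quantifies across 12 attack strategies and five algorithms.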


📝 Abstract
This paper explores the vulnerability of machine learning models to simple single-feature adversarial attacks in the context of Ethereum fraudulent transaction detection. Through comprehensive experiments, we investigate how various adversarial attack strategies affect model performance metrics. Our findings are alarming: the detection models are highly susceptible to simple attacks, while the inconsistency of the attacks' effects across algorithms suggests avenues for mitigation. We examine different mitigation strategies, including adversarial training and enhanced feature selection, and show their effectiveness in improving model robustness.
Problem

Research questions and friction points this paper is trying to address.

Assessing vulnerability of ML-based Ethereum phishing detectors to single-feature adversarial attacks
Investigating impact of adversarial strategies on model performance metrics
Evaluating mitigation techniques like adversarial training for improved robustness
Innovation

Methods, ideas, or system contributions that make the work stand out.

Single-feature adversarial attacks on ML models
Adversarial training for robustness enhancement
Improved feature selection to mitigate attacks
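The adversarial-training idea listed above can be sketched as follows. This is a hypothetical setup on synthetic data, not the paper's pipeline; the `perturb` helper is illustrative. The defense simply augments the training set with single-feature-perturbed copies so the model learns to rely on other features.

```python
# Sketch of adversarial training against a single-feature attack.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10,
                           n_informative=4, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

def perturb(X, y, feat, value):
    """Single-feature attack: overwrite `feat` for positive samples."""
    Xp = X.copy()
    Xp[y == 1, feat] = value
    return Xp

clf = GradientBoostingClassifier(random_state=1).fit(X_tr, y_tr)
feat = int(np.argmax(clf.feature_importances_))
neg_mean = X_tr[y_tr == 0, feat].mean()

# Adversarial training: append attacked copies of the training data,
# then retrain so the model tolerates the perturbation.
X_aug = np.vstack([X_tr, perturb(X_tr, y_tr, feat, neg_mean)])
y_aug = np.concatenate([y_tr, y_tr])
robust = GradientBoostingClassifier(random_state=1).fit(X_aug, y_aug)

# Evaluate both models on attacked test data.
X_adv = perturb(X_te, y_te, feat, neg_mean)
auc_plain = roc_auc_score(y_te, clf.predict_proba(X_adv)[:, 1])
auc_robust = roc_auc_score(y_te, robust.predict_proba(X_adv)[:, 1])
print(f"AUC under attack: plain={auc_plain:.3f}  robust={auc_robust:.3f}")
```

Because the retrained model has seen attacked positives during training, it should hold up better under the same attack at test time, which mirrors the synergy the paper reports between adversarial training and robust feature selection.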
Ahod Alghuried
University of Central Florida, USA
Ali Alkinoon
University of Central Florida, USA
Abdulaziz Alghamdi
University of Central Florida, USA
Soohyeon Choi
University of Central Florida, USA
M. Mohaisen
Northeastern Illinois University, USA
D. Mohaisen
University of Central Florida, USA