FRAUD-RLA: A new reinforcement learning adversarial attack against credit card fraud detection

📅 2025-02-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the vulnerability of credit card fraud detection systems to adversarial attacks. The authors propose FRAUD-RLA, a black-box adversarial attack method with minimal knowledge requirements. Built on the PPO reinforcement learning framework, FRAUD-RLA introduces a fraud-specific threat model featuring feature-space discretization, dynamic action masking, and multi-stage reward shaping, enabling an effective exploration-exploitation trade-off under extremely low-knowledge assumptions (access to fewer than 5% of the original features). Evaluated on three real-world heterogeneous datasets and two mainstream fraud detection models, FRAUD-RLA achieves an average attack success rate of 89.7%, outperforming state-of-the-art methods by 12.3%. The results expose the security limitations of current fraud detection systems and establish a new paradigm for robustness evaluation.
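As a rough illustration of the mechanics the summary describes, the sketch below implements a toy evasion loop in Python: a discretized action space over a small controlled feature subset, dynamic action masking to keep perturbations valid, and shaped rewards (a per-step penalty plus an evasion bonus). The classifier, feature names, and the greedy score-query policy are all hypothetical stand-ins; the actual FRAUD-RLA agent is trained with PPO under a stricter black-box threat model.

```python
# Toy sketch of an RL-style evasion loop against a fraud classifier.
# All names and the scoring rule are hypothetical, for illustration only.

def score(tx):
    # Stand-in fraud score; the real attack targets learned models.
    return 0.8 * tx["amount_norm"] + 0.2 * tx["hour_norm"]

def flags_fraud(tx):
    return score(tx) > 0.5

# Discretized action space over the few features the attacker controls
# (mirroring the low-knowledge assumption of a small feature subset).
CONTROLLED = ["amount_norm", "hour_norm"]
ACTIONS = [(f, d) for f in CONTROLLED for d in (-0.1, 0.1)]

def valid_actions(tx):
    # Dynamic action masking: drop moves that leave the valid [0, 1] range.
    return [(f, d) for f, d in ACTIONS if 0.0 <= tx[f] + d <= 1.0]

def attack(tx, max_steps=50):
    tx = dict(tx)
    total_reward = 0.0
    for _ in range(max_steps):
        if not flags_fraud(tx):
            total_reward += 10.0   # shaped terminal bonus for evasion
            return tx, total_reward
        # A greedy score-query policy stands in for the learned PPO policy.
        f, d = min(valid_actions(tx),
                   key=lambda a: score({**tx, a[0]: tx[a[0]] + a[1]}))
        tx[f] += d
        total_reward -= 0.1        # shaped per-step penalty: prefer few edits
    return tx, total_reward

adv, reward = attack({"amount_norm": 0.9, "hour_norm": 0.7})
```

Here the greedy policy drives the score down by lowering the normalized amount until the transaction evades the threshold; in the paper this search is instead performed by a PPO agent that learns a policy from black-box interaction with the detector.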

📝 Abstract
Adversarial attacks pose a significant threat to data-driven systems, and researchers have spent considerable resources studying them. Despite its economic relevance, this line of work has largely overlooked credit card fraud detection. To address this gap, we propose a new threat model that demonstrates the limitations of existing attacks and highlights the need to investigate new approaches. We then design a new adversarial attack for credit card fraud detection that employs reinforcement learning to bypass classifiers. This attack, called FRAUD-RLA, is designed to maximize the attacker's reward by optimizing the exploration-exploitation tradeoff, while requiring significantly less knowledge than competing attacks. Our experiments, conducted on three heterogeneous datasets and against two fraud detection systems, indicate that FRAUD-RLA is effective even under the severe limitations imposed by our threat model.
Problem

Research questions and friction points this paper is trying to address.

Adversarial attacks on fraud detection
Reinforcement learning bypasses classifiers
FRAUD-RLA optimizes attack efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reinforcement learning adversarial attack
Optimizes exploration-exploitation tradeoff
Requires minimal knowledge for effectiveness