Adversarial Agents: Black-Box Evasion Attacks with Reinforcement Learning

📅 2025-03-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the low query efficiency and limited success rates of black-box evasion attacks by systematically integrating reinforcement learning (RL) into adversarial example generation. The attack process is formulated as a Markov decision process: state and action spaces encode image perturbations and oracle feedback, and the Proximal Policy Optimization (PPO) algorithm enables continuous policy improvement and experience reuse. The resulting method supports controllable perturbations and self-evolving attack policies, improving both query efficiency and robustness. On CIFAR-10, it raises the attack success rate by 19.4% and reduces the median number of victim-model queries per adversarial example by 53.2% from the start to the end of training. After 5,000 training episodes, its success rate surpasses that of SquareAttack by 13.1%.

📝 Abstract
Reinforcement learning (RL) offers powerful techniques for solving complex sequential decision-making tasks from experience. In this paper, we demonstrate how RL can be applied to adversarial machine learning (AML) to develop a new class of attacks that learn to generate adversarial examples: inputs designed to fool machine learning models. Unlike traditional AML methods that craft adversarial examples independently, our RL-based approach retains and exploits past attack experience to improve future attacks. We formulate adversarial example generation as a Markov Decision Process and evaluate RL's ability to (a) learn effective and efficient attack strategies and (b) compete with state-of-the-art AML. On CIFAR-10, our agent increases the success rate of adversarial examples by 19.4% and decreases the median number of victim model queries per adversarial example by 53.2% from the start to the end of training. In a head-to-head comparison with a state-of-the-art image attack, SquareAttack, our approach enables an adversary to generate adversarial examples with 13.1% more success after 5000 episodes of training. From a security perspective, this work demonstrates a powerful new attack vector that uses RL to attack ML models efficiently and at scale.
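The abstract's MDP formulation (states encoding the perturbed input, actions as perturbations, each oracle query counted as cost) can be sketched as a toy environment. This is a minimal illustration under assumed simplifications, not the paper's code; the class, the stub victim, and all parameter names are hypothetical:

```python
import numpy as np

class EvasionEnv:
    """Toy black-box evasion MDP: the agent perturbs an input until the
    victim's predicted label flips, paying one oracle query per step."""

    def __init__(self, victim, x, true_label, eps=0.05, max_queries=100):
        self.victim = victim              # black-box oracle: x -> predicted label
        self.x0 = np.asarray(x, dtype=float)
        self.true_label = true_label
        self.eps = eps                    # per-step perturbation budget
        self.max_queries = max_queries

    def reset(self):
        self.x = self.x0.copy()
        self.queries = 0
        return self.x.copy()

    def step(self, action):
        # Action: a perturbation direction in [-1, 1]^d, scaled by eps.
        self.x = np.clip(self.x + self.eps * np.clip(action, -1.0, 1.0), 0.0, 1.0)
        self.queries += 1                 # each step costs one victim query
        evaded = self.victim(self.x) != self.true_label
        # Reward: +1 on evasion, small per-query penalty to favor efficiency.
        reward = 1.0 if evaded else -0.01
        done = evaded or self.queries >= self.max_queries
        return self.x.copy(), reward, done

# Stub victim standing in for the black-box model: a fixed threshold rule.
def victim(x):
    return int(x.sum() > 0.5 * x.size)

env = EvasionEnv(victim, x=np.full(16, 0.6), true_label=1)
state, done = env.reset(), False
while not done:
    # A trained PPO policy would pick actions here; a fixed downward
    # nudge keeps this sketch deterministic.
    state, reward, done = env.step(-np.ones_like(state))
print("evaded after", env.queries, "queries")  # evaded after 2 queries
```

The small negative per-step reward is one way to express the paper's query-efficiency goal: the agent is paid for evasion but charged for every oracle call.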
Problem

Research questions and friction points this paper is trying to address.

Black-box evasion attacks suffer from low query efficiency and limited success rates.
Traditional AML methods craft each adversarial example independently, discarding past attack experience.
Can an RL agent learn attack strategies effective and efficient enough to compete with state-of-the-art AML?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adversarial example generation formulated as a Markov Decision Process and trained with PPO
Retains and exploits past attack experience across episodes
Raises attack success rate by 19.4% and cuts median queries per example by 53.2% on CIFAR-10
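The "self-evolving attack policies" point rests on PPO's clipped surrogate objective, which bounds how far one update can move the policy. A minimal NumPy illustration of that objective (the standard PPO-clip formula, not the paper's implementation):

```python
import numpy as np

def ppo_clip_objective(logp_new, logp_old, advantages, clip_eps=0.2):
    """Clipped surrogate objective from PPO (Schulman et al., 2017).
    Returns the mean objective to be maximized."""
    ratio = np.exp(logp_new - logp_old)               # pi_new / pi_old per sample
    clipped = np.clip(ratio, 1.0 - clip_eps, 1.0 + clip_eps)
    return np.mean(np.minimum(ratio * advantages, clipped * advantages))

# With positive advantages, a large policy shift (ratio 5) is clipped to
# 1 + clip_eps, limiting how far one update can move the attack policy.
adv = np.array([1.0, 1.0])
same = ppo_clip_objective(np.log([1.0, 1.0]), np.log([1.0, 1.0]), adv)
big = ppo_clip_objective(np.log([5.0, 5.0]), np.log([1.0, 1.0]), adv)
print(same, big)  # 1.0 1.2
```

The clipping is what makes experience reuse safe: stored rollouts can be replayed for several gradient steps without the policy collapsing away from the data that generated them.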