Amplifying Machine Learning Attacks Through Strategic Compositions

📅 2025-06-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Machine learning models face diverse security threats, yet existing research typically analyzes each attack type in isolation, overlooking the realistic risk of adversaries strategically combining multiple attacks. Method: This paper presents the first systematic study of strategic attack compositions. It focuses on four inference-stage attacks (adversarial examples, attribute inference, membership inference, and property inference), proposes a taxonomy organized around a preparation, execution, and evaluation attack pipeline, and identifies four effective compositions in which one attack assists another at a specific stage. Empirical evaluation across three model architectures and three image datasets demonstrates cross-attack enhancement effects, e.g., attribute inference significantly boosts membership inference success rates. Contribution/Results: The findings show that multi-attack collaboration amplifies the joint threat to model privacy and robustness. To foster reproducible research, the authors open-source COAT, a modular toolkit for the systematic study of composite attacks, introducing a new paradigm and benchmark for ML security.
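
The cross-attack enhancement idea, such as an attribute-inference signal strengthening membership inference, can be illustrated with a toy sketch. The snippet below is a minimal, hypothetical construction on synthetic data, not the paper's actual pipeline: the adversary fuses the target model's posteriors with the output of an auxiliary attribute-inference classifier and feeds both into a membership-inference attack model. All names and data here are placeholders, and whether the extra signal helps in practice depends on how the sensitive attribute interacts with membership in the data at hand.

```python
# Toy sketch (hypothetical, not the paper's construction): fusing an
# attribute-inference signal into a membership-inference attack.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.RandomState(0)

# Synthetic data: task label y plus a hypothetical "sensitive attribute" a
# correlated with the first feature.
X, y = make_classification(n_samples=3000, n_features=20, n_informative=10, random_state=0)
a = (X[:, 0] + 0.5 * rng.randn(len(X)) > 0).astype(int)

# Disjoint partitions: target-model members, non-members, adversary's auxiliary data.
mem, non, aux = np.split(rng.permutation(len(X)), 3)

# Target model trained only on the member partition.
target = RandomForestClassifier(n_estimators=100, random_state=0).fit(X[mem], y[mem])

# Auxiliary attribute-inference model: predicts the sensitive attribute from features.
attr_model = LogisticRegression(max_iter=1000).fit(X[aux], a[aux])

def attack_features(idx, with_attribute_signal):
    post = target.predict_proba(X[idx])                # target posteriors
    feats = [post, post.max(axis=1, keepdims=True)]    # plus prediction confidence
    if with_attribute_signal:
        # Cross-attack fusion: append the inferred sensitive-attribute posterior.
        feats.append(attr_model.predict_proba(X[idx]))
    return np.hstack(feats)

def membership_auc(with_attribute_signal):
    # Simplification: the attack classifier is trained on half of the true
    # member/non-member records instead of shadow models.
    Xa = np.vstack([attack_features(mem, with_attribute_signal),
                    attack_features(non, with_attribute_signal)])
    ya = np.concatenate([np.ones(len(mem)), np.zeros(len(non))])
    Xtr, Xte, ytr, yte = train_test_split(Xa, ya, test_size=0.5,
                                          stratify=ya, random_state=0)
    attack = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
    return roc_auc_score(yte, attack.predict_proba(Xte)[:, 1])

print("membership AUC, posteriors only:      ", round(membership_auc(False), 3))
print("membership AUC, with attribute signal:", round(membership_auc(True), 3))
```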

📝 Abstract
Machine learning (ML) models are proving to be vulnerable to a variety of attacks that allow the adversary to learn sensitive information, cause mispredictions, and more. While these attacks have been extensively studied, current research predominantly focuses on analyzing each attack type individually. In practice, however, adversaries may employ multiple attack strategies simultaneously rather than relying on a single approach. This prompts a crucial yet underexplored question: When the adversary has multiple attacks at their disposal, are they able to mount or amplify the effect of one attack with another? In this paper, we take the first step in studying the strategic interactions among different attacks, which we define as attack compositions. Specifically, we focus on four well-studied attacks during the model's inference phase: adversarial examples, attribute inference, membership inference, and property inference. To facilitate the study of their interactions, we propose a taxonomy based on three stages of the attack pipeline: preparation, execution, and evaluation. Using this taxonomy, we identify four effective attack compositions, such as property inference assisting attribute inference at its preparation level and adversarial examples assisting property inference at its execution level. We conduct extensive experiments on the attack compositions using three ML model architectures and three benchmark image datasets. Empirical results demonstrate the effectiveness of these four attack compositions. We implement and release a modular reusable toolkit, COAT. Arguably, our work serves as a call for researchers and practitioners to consider advanced adversarial settings involving multiple attack strategies, aiming to strengthen the security and robustness of AI systems.
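
The preparation, execution, and evaluation taxonomy suggests a natural modular structure for composing attacks. The sketch below is an illustrative design under that assumption, not COAT's actual API: each attack exposes the three stages as callables, and a composition threads a shared context through them so that one attack's output can assist another at a chosen stage, e.g., property inference informing attribute inference during preparation.

```python
# Hypothetical structural sketch of stage-wise attack composition; an
# illustration of the preparation/execution/evaluation idea, not COAT's API.
from dataclasses import dataclass, field
from typing import Any, Callable, Dict, List

Context = Dict[str, Any]

@dataclass
class Attack:
    name: str
    prepare: Callable[[Context], Context]   # build shadow models, auxiliary data, ...
    execute: Callable[[Context], Context]   # run the attack against the target model
    evaluate: Callable[[Context], float]    # score the attack's success

@dataclass
class Composition:
    attacks: List[Attack]
    context: Context = field(default_factory=dict)

    def run(self) -> Dict[str, float]:
        scores: Dict[str, float] = {}
        for atk in self.attacks:
            self.context.update(atk.prepare(self.context))   # preparation stage
            self.context.update(atk.execute(self.context))   # execution stage
            scores[atk.name] = atk.evaluate(self.context)    # evaluation stage
        return scores

# Toy composition: property inference runs first; attribute inference reads its
# output ("inferred_property") at the preparation stage. Values are placeholders.
prop_inf = Attack(
    "property_inference",
    prepare=lambda ctx: {},
    execute=lambda ctx: {"inferred_property": "class-balanced training set"},
    evaluate=lambda ctx: 1.0,
)
attr_inf = Attack(
    "attribute_inference",
    prepare=lambda ctx: {"shadow_data_config": ctx.get("inferred_property")},
    execute=lambda ctx: {"attribute_predictions": []},
    evaluate=lambda ctx: 0.0,
)
print(Composition([prop_inf, attr_inf]).run())
```
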
Problem

Research questions and friction points this paper is trying to address.

Study interactions among multiple ML attack strategies
Analyze amplification effects of combined attack compositions
Enhance AI security against advanced adversarial settings
Innovation

Methods, ideas, or system contributions that make the work stand out.

Strategic compositions of multiple ML attacks
Taxonomy for attack pipeline stages
Modular toolkit COAT for attack compositions
Yugeng Liu
Ph.D. Candidate, CISPA Helmholtz Center for Information Security
Computer Security

Zheng Li
Shandong University

Hai Huang
CISPA Helmholtz Center for Information Security

Michael Backes
Chairman and Founding Director of the CISPA Helmholtz Center for Information Security
Security, privacy, cryptography, AI

Yang Zhang
CISPA Helmholtz Center for Information Security