Parameter Matching Attack: Enhancing Practical Applicability of Availability Attacks

📅 2024-07-02
🏛️ arXiv.org
📈 Citations: 1
✨ Influential: 0
📄 PDF
🤖 AI Summary
Existing availability attacks lose effectiveness when only a subset of the training samples can be perturbed. This paper proposes the Parameter Matching Attack (PMA), a poisoning-based availability attack for the realistic setting in which data owners can perturb only their locally available samples. PMA uses gradient-based optimization to align the parameters of the model trained on the mixed (clean plus perturbed) dataset with those of a pre-specified low-performance target model, thereby degrading the trained model's utility. PMA is the first availability attack explicitly designed for partial-data perturbation: it integrates parameter-space alignment into availability attacks and explicitly models the dynamics of mixed training. Evaluated on four benchmark datasets, PMA achieves over 40% accuracy degradation while perturbing only 10%–30% of the training samples, substantially outperforming state-of-the-art methods.

πŸ“ Abstract
The widespread use of personal data for training machine learning models raises significant privacy concerns, as individuals have limited control over how their public data is subsequently utilized. Availability attacks have emerged as a means for data owners to safeguard their data by designing imperceptible perturbations that degrade model performance when incorporated into training datasets. However, existing availability attacks exhibit limited practical applicability, particularly when only a portion of the data can be perturbed. To address this challenge, we propose a novel availability attack termed Parameter Matching Attack (PMA). PMA is the first availability attack that works when only a portion of the data can be perturbed. PMA optimizes perturbations so that, when the model is trained on a mixture of clean and perturbed data, the resulting model approaches a model designed to perform poorly. Experimental results across four datasets demonstrate that PMA outperforms existing methods, achieving significant model performance degradation when only part of the training data is perturbed. Our code is available in the supplementary material.
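The abstract's optimization can be read as a small bi-level problem: an inner differentiable training step on the mixed dataset, and an outer objective that pulls the resulting parameters toward the low-performance target. The sketch below illustrates that idea under assumed settings (a linear model, a single unrolled SGD step, a random target, and illustrative hyperparameters); it is not the paper's implementation.

```python
# Hypothetical minimal sketch of the parameter-matching idea: optimize
# perturbations delta on the owner's subset so that one differentiable SGD
# step on the mixed (clean + perturbed) data moves a linear model's
# parameters toward a low-performance target. All names, sizes, and
# hyperparameters are illustrative assumptions, not the authors' setup.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Clean data (outside the owner's control) and the owner's perturbable subset.
clean_x, clean_y = torch.randn(32, 4), torch.randint(0, 2, (32,))
own_x, own_y = torch.randn(8, 4), torch.randint(0, 2, (8,))

# Pre-specified low-performance target parameters (here simply random).
w_tgt, b_tgt = torch.randn(2, 4), torch.randn(2)

# Fixed initialization of the victim model.
w0, b0 = 0.1 * torch.randn(2, 4), torch.zeros(2)

delta = torch.zeros_like(own_x, requires_grad=True)
opt = torch.optim.Adam([delta], lr=0.1)
inner_lr, eps = 0.1, 0.5  # assumed inner step size and perturbation budget

def loss_fn(x, y, w, b):
    return F.cross_entropy(x @ w.t() + b, y)

for _ in range(100):
    # Inner level: one differentiable SGD step on the mixed dataset.
    w = w0.clone().requires_grad_(True)
    b = b0.clone().requires_grad_(True)
    mixed_x = torch.cat([clean_x, own_x + delta])
    mixed_y = torch.cat([clean_y, own_y])
    gw, gb = torch.autograd.grad(
        loss_fn(mixed_x, mixed_y, w, b), (w, b), create_graph=True
    )
    w1, b1 = w - inner_lr * gw, b - inner_lr * gb

    # Outer level: match the post-training parameters to the target model.
    match = ((w1 - w_tgt) ** 2).sum() + ((b1 - b_tgt) ** 2).sum()
    opt.zero_grad()
    match.backward()
    opt.step()
    with torch.no_grad():
        delta.clamp_(-eps, eps)  # keep perturbations within the budget

print(f"final parameter-matching loss: {match.item():.3f}")
```

A real attack would unroll many training steps of the mixed-training dynamics rather than one; the single unrolled step here only shows how gradients flow from the parameter-matching loss back into the perturbations.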
Problem

Research questions and friction points this paper is trying to address.

Existing availability attacks lose effectiveness when only part of the training data can be perturbed.
Data owners realistically control, and can therefore perturb, only their own locally available samples.
Perturbations must remain effective when the model is trained on a mixture of clean and perturbed data.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Parameter Matching Attack optimizes perturbations for the partial-perturbation setting
PMA achieves over 30% model performance drop
PMA outperforms existing methods in partial-perturbation scenarios