🤖 AI Summary
Existing score-based black-box attacks are limited to top-1 single-label settings, suffer from low success rates and poor query efficiency under small perturbations, and lack systematic investigation of top-K vulnerability in multi-label classifiers.
Method: We propose the first surrogate-free, $top$-$K$–aware geometric score-based black-box attack. It introduces a novel geometric modeling of $top$-$K$ decision boundaries, integrates boundary-point initialization for gradient estimation, and employs iterative perturbation optimization, unifying untargeted and targeted attacks while supporting both single-label and multi-label classifiers.
Results: Experiments on ImageNet and PASCAL VOC demonstrate that our method significantly improves attack success rate and query efficiency under strict L₂/L∞ perturbation constraints, outperforming state-of-the-art top-1 methods. Moreover, it provides the first empirical evidence of structural top-K vulnerability in multi-label models.
📝 Abstract
Existing score-based adversarial attacks mainly focus on crafting $top$-1 adversarial examples against single-label classifiers. Their attack success rate and query efficiency are often unsatisfactory, particularly under small perturbation budgets; moreover, the vulnerability of multi-label classifiers has yet to be studied. In this paper, we propose a comprehensive surrogate-free score-based attack, named geometric score-based black-box attack (GSBAK$^K$), to craft adversarial examples in an aggressive $top$-$K$ setting for both untargeted and targeted attacks, where the goal is to change the $top$-$K$ predictions of the target classifier. We introduce novel gradient-based methods to find a good initial boundary point from which to attack. Our iterative method then applies novel gradient estimation techniques, particularly effective in the $top$-$K$ setting, at points on the decision boundary to exploit its geometry. Additionally, GSBAK$^K$ can attack classifiers trained with $top$-$K$ multi-label learning. Extensive experimental results on the ImageNet and PASCAL VOC datasets validate the effectiveness of GSBAK$^K$ in crafting $top$-$K$ adversarial examples.
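The abstract does not spell out GSBAK$^K$'s actual estimators, but the general idea behind geometric score-based attacks, a $top$-$K$ success margin plus a Monte Carlo gradient estimate at a boundary point, can be illustrated with a hypothetical sketch. Everything below (the linear `score_fn`, the margin definition, the sampling scheme) is an assumption for illustration, not the paper's method:

```python
import numpy as np

rng = np.random.default_rng(0)

def topk(scores, k):
    """Indices of the k highest-scoring classes."""
    return set(np.argsort(scores)[-k:])

def topk_margin(scores, targets, k):
    """Signed margin for a targeted top-K attack: positive exactly when
    every target label outscores every non-target label, i.e. the
    target set occupies the top-K positions (assumes len(targets) == k)."""
    others = [i for i in range(len(scores)) if i not in targets]
    return min(scores[i] for i in targets) - max(scores[i] for i in others)

def estimate_boundary_gradient(score_fn, x, targets, k,
                               n_samples=200, sigma=1e-2):
    """Monte Carlo estimate of the margin gradient near a boundary point:
    sample random unit directions and average them, weighted by the
    resulting change in the top-K margin."""
    grad = np.zeros_like(x)
    base = topk_margin(score_fn(x), targets, k)
    for _ in range(n_samples):
        u = rng.standard_normal(x.shape)
        u /= np.linalg.norm(u)
        delta = topk_margin(score_fn(x + sigma * u), targets, k) - base
        grad += delta * u
    norm = np.linalg.norm(grad)
    return grad / norm if norm > 0 else grad
```

On a toy linear model (`score_fn = lambda x: W @ x`), the estimated direction closely aligns with the true margin gradient; an iterative attack would repeatedly step along such estimates while projecting back toward the clean image to shrink the perturbation.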