Explainer-guided Targeted Adversarial Attacks against Binary Code Similarity Detection Models

📅 2025-06-05
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing adversarial attack methods against binary code similarity detection (BCSD) models predominantly rely on heuristic or greedy search strategies, suffering from weak theoretical foundations, low efficiency, and limited capability to achieve targeted attacks. Method: We propose an explainer-guided targeted adversarial attack framework. It is the first to incorporate model-agnostic black-box explainers (e.g., LIME/SHAP variants) into BCSD attacks, leveraging decision-boundary interpretability to precisely identify critical code segments. The framework integrates gradient-free optimization with semantics-preserving binary rewriting—via instruction substitution, insertion, and reordering—and employs a tailored directional loss function. Results: Our method achieves state-of-the-art attack success rates across diverse BCSD models, reduces average attack time by 47%, improves cross-model transferability by 3.2×, and attains an 89.6% evasion rate in realistic vulnerability detection scenarios.

📝 Abstract
Binary code similarity detection (BCSD) serves as a fundamental technique for various software engineering tasks, e.g., vulnerability detection and classification. Attacks against such models have therefore drawn extensive attention, aiming to mislead the models into generating erroneous predictions. Prior works have explored various approaches to generating semantic-preserving variants, i.e., adversarial samples, to evaluate the robustness of the models against adversarial attacks. However, they have mainly relied on heuristic criteria or iterative greedy algorithms to locate salient code influencing the model output, failing to operate on a solid theoretical basis. Moreover, when processing programs with high complexities, such attacks tend to be time-consuming. In this work, we propose a novel optimization for adversarial attacks against BCSD models. In particular, we aim to improve the attacks in a challenging scenario, where the attack goal is to limit the model predictions to a specific range, i.e., the targeted attacks. Our attack leverages the superior capability of black-box, model-agnostic explainers in interpreting the model decision boundaries, thereby pinpointing the critical code snippets to apply semantic-preserving perturbations. The evaluation results demonstrate that compared with the state-of-the-art attacks, the proposed attacks achieve a higher attack success rate in almost all scenarios, while also improving efficiency and transferability. Our real-world case studies on vulnerability detection and classification further demonstrate the security implications of our attacks, highlighting the urgent need to further enhance the robustness of existing BCSD models.
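The pipeline the abstract describes — black-box attribution to locate critical instructions, then a gradient-free, semantics-preserving perturbation loop toward a target — can be illustrated with a toy sketch. Everything below is hypothetical: the bag-of-mnemonics "model", the masking-based attribution, and the no-op rewrite list merely stand in for the paper's actual BCSD models, explainers, and binary rewriting.

```python
import math
import random

# Toy stand-in for a black-box BCSD model: cosine similarity between
# bag-of-mnemonic vectors. Real BCSD models are neural embeddings; this
# placeholder only lets the attack loop run end to end.
def embed(instrs):
    vec = {}
    for ins in instrs:
        op = ins.split()[0]
        vec[op] = vec.get(op, 0) + 1
    return vec

def cosine(a, b):
    dot = sum(v * b.get(k, 0) for k, v in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def attribution(instrs, target_vec, n_samples=300, seed=1):
    """LIME-style, model-agnostic attribution: mask random subsets of
    instructions, query the black-box score, and estimate each
    instruction's contribution as mean(score | kept) - mean(score | dropped)."""
    rng = random.Random(seed)
    kept = [[0.0, 0] for _ in instrs]
    dropped = [[0.0, 0] for _ in instrs]
    for _ in range(n_samples):
        mask = [rng.random() < 0.5 for _ in instrs]
        score = cosine(embed([i for i, m in zip(instrs, mask) if m]), target_vec)
        for i, m in enumerate(mask):
            bucket = kept[i] if m else dropped[i]
            bucket[0] += score
            bucket[1] += 1
    return [kept[i][0] / max(kept[i][1], 1) - dropped[i][0] / max(dropped[i][1], 1)
            for i in range(len(instrs))]

# Hypothetical semantics-preserving rewrites (no-op-equivalent insertions);
# the paper's rewriting also covers instruction substitution and reordering.
NOOP_EQUIVALENTS = ["xchg eax, eax", "lea ebx, [ebx]", "mov ecx, ecx"]

def targeted_attack(instrs, target_vec, budget=5):
    """Gradient-free greedy loop: the explainer picks the position least
    aligned with the target, and we keep whichever rewrite most reduces
    the directional loss 1 - sim(adv, target)."""
    adv = list(instrs)
    for _ in range(budget):
        weights = attribution(adv, target_vec)
        pos = min(range(len(adv)), key=lambda i: weights[i])
        best_loss = 1 - cosine(embed(adv), target_vec)
        best_cand = None
        for noop in NOOP_EQUIVALENTS:
            cand = adv[:pos] + [noop] + adv[pos:]
            loss = 1 - cosine(embed(cand), target_vec)
            if loss < best_loss:
                best_loss, best_cand = loss, cand
        if best_cand is None:
            break  # no candidate perturbation helps; stop early
        adv = best_cand
    return adv
```

The key structural point the sketch preserves is that both the explainer and the optimizer treat the model as a black box: only similarity scores are queried, never gradients, which is what makes the approach model-agnostic.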
Problem

Research questions and friction points this paper is trying to address.

Targeted adversarial attacks on binary code similarity detection models
Improving attack efficiency and success rate using explainers
Enhancing robustness of vulnerability detection and classification models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages black-box explainers for model interpretation
Targets critical code snippets with semantic perturbations
Improves attack success rate and efficiency significantly
Mingjie Chen
KU Leuven
isogeny-based cryptography · algorithmic number theory
Tiancheng Zhu
Huazhong University of Science and Technology
Mingxue Zhang
The State Key Laboratory of Blockchain and Data Security, Zhejiang University
Yiling He
Research Fellow @University College London; PhD @Zhejiang University
Software Security · Trustworthy AI · Code LLM · Model Explainability
Minghao Lin
University of Southern California
Penghui Li
Columbia University
Kui Ren
Professor and Dean of Computer Science, Zhejiang University, ACM/IEEE Fellow
Data Security & Privacy · AI Security · IoT & Vehicular Security