Influence-Guided Concolic Testing of Transformer Robustness

📅 2025-09-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the low efficiency and poor feasibility of concolic testing for Transformer classifiers under small L₀ budgets and deep architectures, this paper proposes an influence-guided concolic testing method. The approach introduces two key innovations: (i) the first integration of SHAP values into symbolic execution, quantifying each path predicate's influence on the model's decision to guide the search dynamically; and (ii) a satisfiability-friendly, pure-Python semantic model of multi-head self-attention, coupled with a lightweight path-scheduling heuristic that improves exploration in deep networks. Experiments show that the method significantly accelerates the generation of label-flipping adversarial examples and uncovers compact, cross-sample decision logic under tight L₀ constraints, supporting its practicality for robustness analysis and interpretable debugging of modern Transformer models.
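The core scheduling idea described above can be sketched in a few lines. This is a hypothetical illustration, not the paper's implementation: path predicates produced by symbolic execution are pushed into a priority queue keyed by a SHAP-style influence score, so the highest-influence branch is solved first instead of FIFO order. The class and predicate names are invented for the example.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class PathPredicate:
    # Negated score so heapq (a min-heap) pops the highest influence first.
    neg_influence: float
    constraint: str = field(compare=False)

class InfluenceScheduler:
    """Pop path predicates in decreasing order of estimated influence."""

    def __init__(self):
        self._heap = []

    def push(self, constraint, influence):
        heapq.heappush(self._heap, PathPredicate(-influence, constraint))

    def pop(self):
        return heapq.heappop(self._heap).constraint

# Toy usage: influence scores here stand in for SHAP-based estimates.
sched = InfluenceScheduler()
sched.push("x[3] > 0.5", influence=0.02)
sched.push("attn_head2 > attn_head1", influence=0.31)
sched.push("x[7] <= 0.1", influence=0.11)
first = sched.pop()  # the most influential predicate is explored first
```

A FIFO baseline would instead dequeue predicates in discovery order, which is the comparison point the abstract mentions.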

📝 Abstract
Concolic testing for deep neural networks alternates concrete execution with constraint solving to search for inputs that flip decisions. We present an influence-guided concolic tester for Transformer classifiers that ranks path predicates by SHAP-based estimates of their impact on the model output. To enable SMT solving on modern architectures, we prototype a solver-compatible, pure-Python semantics for multi-head self-attention and introduce practical scheduling heuristics that temper constraint growth on deeper models. In a white-box study on compact Transformers under small $L_0$ budgets, influence guidance finds label-flip inputs more efficiently than a FIFO baseline and maintains steady progress on deeper networks. Aggregating successful attack instances with a SHAP-based critical decision path analysis reveals recurring, compact decision logic shared across attacks. These observations suggest that (i) influence signals provide a useful search bias for symbolic exploration, and (ii) solver-friendly attention semantics paired with lightweight scheduling make concolic testing feasible for contemporary Transformer models, offering potential utility for debugging and model auditing.
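To make the "solver-compatible, pure-Python semantics" concrete, here is a minimal sketch (assumed, not the paper's actual encoding) of single-head scaled dot-product attention written over plain Python lists. Because every step is explicit scalar arithmetic rather than an opaque tensor kernel, a symbolic executor can intercept each operation; the `exp` inside softmax is the solver-unfriendly step that a real SMT encoding would typically replace with a piecewise approximation.

```python
import math

def matmul(a, b):
    # Plain-list matrix product: each entry is an explicit sum of scalar products.
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

def softmax(row):
    # Numerically stable softmax; exp() is the piece an SMT encoding
    # would usually approximate piecewise rather than encode exactly.
    m = max(row)
    exps = [math.exp(v - m) for v in row]
    s = sum(exps)
    return [e / s for e in exps]

def attention(q, k, v):
    """Single-head scaled dot-product attention over nested lists."""
    d = len(q[0])
    scores = matmul(q, [list(col) for col in zip(*k)])        # Q · Kᵀ
    scaled = [[s / math.sqrt(d) for s in row] for row in scores]
    weights = [softmax(row) for row in scaled]                # row-wise softmax
    return matmul(weights, v)

# Toy example: one query attending over two key/value pairs.
q = [[1.0, 0.0]]
k = [[1.0, 0.0], [0.0, 1.0]]
v = [[1.0, 2.0], [3.0, 4.0]]
out = attention(q, k, v)
```

Multi-head attention would apply this per head and concatenate the results; the sketch keeps one head for brevity.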
Problem

Research questions and friction points this paper is trying to address.

Improving Transformer robustness testing via influence-guided concolic search
Enabling SMT solving for multi-head self-attention architectures
Identifying compact decision patterns through critical path analysis
Innovation

Methods, ideas, or system contributions that make the work stand out.

Influence-guided concolic testing that ranks path predicates by SHAP-based influence estimates
Solver-compatible, pure-Python semantics of multi-head self-attention that enables SMT solving
Lightweight scheduling heuristics that temper constraint growth on deeper models