🤖 AI Summary
Existing security issue report detection methods heavily rely on lexical surface cues, exhibiting poor generalization and difficulty in identifying semantically complex or novel security issues. To address this, we propose a masked language modeling (MLM) pretraining paradigm based on semantic surrogates—lexically distinct but semantically equivalent replacements—that disrupt lexical shortcuts and compel models to learn deep security semantics. Our approach integrates a bidirectional Transformer architecture, MLM fine-tuning, and a deep classification head. Evaluated on a large-scale dataset of over ten thousand GitHub issue reports, our method achieves an F1 score of 0.9880, outperforming the best traditional machine learning baselines by 14.90%–94.72% in F1 and surpassing state-of-the-art LLM-based baselines by 39.49%–74.53% in F1. Crucially, it demonstrates significantly enhanced robustness in detecting previously unseen security patterns.
📝 Abstract
Monitoring issue tracker submissions is a crucial software maintenance activity. A key goal is the prioritization of high-risk, security-related bugs. If such bugs can be recognized early, the risk of propagation to dependent products, and the resulting harm to stakeholders, can be mitigated. To assist triage engineers with this task, several automatic detection techniques, from Machine Learning (ML) models to prompting Large Language Models (LLMs), have been proposed. Although promising to some extent, prior techniques often memorize lexical cues as decision shortcuts, yielding low detection rates, particularly for more complex submissions. As such, these classifiers do not yet meet the practical expectations of a real-time detector of security-related issues. To address these limitations, we propose SEBERTIS, a framework for training Deep Neural Networks (DNNs) as classifiers that are independent of lexical cues, so that they can confidently detect fully unseen security-related issues. SEBERTIS fine-tunes bidirectional Transformer architectures as Masked Language Models (MLMs) on vocabulary that is semantically equivalent to the prediction labels (which we call Semantic Surrogates), with these terms replaced by a mask token. Our SEBERTIS-trained classifier achieves a 0.9880 F1-score in detecting security-related issues on a curated corpus of 10,000 GitHub issue reports, substantially outperforming state-of-the-art issue classifiers, with 14.44%-96.98%, 15.40%-93.07%, and 14.90%-94.72% higher detection precision, recall, and F1-score than ML-based baselines. Our classifier also substantially surpasses LLM baselines, with improvements of 23.20%-63.71%, 36.68%-85.63%, and 39.49%-74.53% in precision, recall, and F1-score.
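To make the Semantic Surrogate idea concrete, the sketch below illustrates one plausible preprocessing step consistent with the abstract: security-indicative terms are swapped for lexically distinct but semantically equivalent replacements, which are then hidden behind a mask token so an MLM must predict them from context rather than from the original surface cue. The `SURROGATES` table, `build_mlm_example` function, and token handling are illustrative assumptions, not the paper's actual implementation.

```python
import random

# Hypothetical surrogate table: each security cue maps to lexically distinct
# but semantically equivalent replacements (assumed, not from the paper).
SURROGATES = {
    "vulnerability": ["weakness", "flaw", "exposure"],
    "exploit": ["abuse", "weaponize"],
    "overflow": ["out-of-bounds write", "boundary breach"],
}

MASK_TOKEN = "[MASK]"  # placeholder; a real tokenizer supplies its own mask


def build_mlm_example(text: str) -> tuple[str, list[str]]:
    """Build one MLM training example from an issue-report sentence.

    Each known security cue is replaced by a randomly chosen surrogate and
    then masked, so the surrogate (not the original lexical shortcut) becomes
    the prediction target for the masked language model.
    """
    masked_tokens, targets = [], []
    for tok in text.split():
        key = tok.lower().strip(".,;:!?")
        if key in SURROGATES:
            targets.append(random.choice(SURROGATES[key]))
            masked_tokens.append(MASK_TOKEN)
        else:
            masked_tokens.append(tok)
    return " ".join(masked_tokens), targets


masked, targets = build_mlm_example("Possible buffer overflow allows exploit")
# masked  -> "Possible buffer [MASK] allows [MASK]"
# targets -> two surrogates, e.g. ["boundary breach", "abuse"]
```

In a full pipeline, these masked texts and surrogate targets would feed a standard MLM fine-tuning loop over a bidirectional Transformer, followed by the classification head described in the summary.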