🤖 AI Summary
Large language models (LLMs) exhibit inconsistent and unstable performance in source code vulnerability detection, particularly under class imbalance and multi-class settings.
Method: This paper proposes Dynamic Gated Stacking (DGS), a novel stacking ensemble framework inspired by Mixture of Experts (MoE), which adaptively fuses predictions from multiple LLMs while explicitly modeling class imbalance and multi-class characteristics.
Contribution/Results: Evaluated on the Devign, ReVeal, and BigVul benchmarks, DGS significantly improves F1-score and AUC over conventional Bagging, Boosting, and standard Stacking. Empirical analysis shows that Boosting excels in highly imbalanced scenarios, while DGS consistently outperforms standard Stacking across all configurations, validating the efficacy of its gating mechanism in fusing heterogeneous model outputs. This work establishes a more robust and scalable ensemble paradigm for LLM-driven vulnerability detection.
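The summary does not spell out DGS internals, but the MoE-inspired idea it describes can be sketched as follows: a gating network looks at each code sample and assigns per-sample softmax weights to the base LLMs ("experts"), then fuses their class-probability outputs as a weighted sum. This is a minimal NumPy illustration with random stand-ins for the expert outputs and sample features; all array shapes, the linear gating layer, and the names (`dgs_predict`, `W_gate`) are assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z, axis=-1):
    # Numerically stable softmax along the given axis.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

# Toy setup: K expert LLMs, each emitting a probability distribution over
# C classes for each of N code samples. In the paper these would be the
# five code LLMs; here they are random stand-ins.
N, K, C, D = 8, 5, 2, 16
expert_probs = softmax(rng.normal(size=(N, K, C)))  # (N, K, C)
sample_feats = rng.normal(size=(N, D))              # per-sample code features

# Gating network, reduced here to a single linear layer: maps each
# sample's features to softmax weights over the K experts.
W_gate = rng.normal(size=(D, K)) * 0.1

def dgs_predict(feats, probs):
    gate = softmax(feats @ W_gate)                # (N, K) expert weights
    fused = np.einsum("nk,nkc->nc", gate, probs)  # per-sample weighted fusion
    return fused

fused = dgs_predict(sample_feats, expert_probs)
print(fused.shape)                           # → (8, 2)
print(np.allclose(fused.sum(axis=1), 1.0))   # → True: still a distribution
```

Because the gate produces a convex combination of valid probability distributions, the fused output is itself a valid distribution; unlike standard Stacking with a fixed meta-learner, the weights here vary per input sample, which is what lets the ensemble adapt to heterogeneous expert behavior.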
📝 Abstract
Code vulnerability detection is crucial for ensuring the security and reliability of modern software systems. Recently, Large Language Models (LLMs) have shown promising capabilities in this domain. However, notable discrepancies in detection results often arise when analyzing identical code segments across different training stages of the same model or among architecturally distinct LLMs. While such inconsistencies may compromise detection stability, they also highlight a key opportunity: the latent complementarity among models can be harnessed through ensemble learning to create more robust vulnerability detection systems. In this study, we explore the potential of ensemble learning to enhance the performance of LLMs in source code vulnerability detection. We conduct comprehensive experiments involving five LLMs (i.e., DeepSeek-Coder-6.7B, CodeLlama-7B, CodeLlama-13B, CodeQwen1.5-7B, and StarCoder2-15B), using three ensemble strategies (i.e., Bagging, Boosting, and Stacking). These experiments are carried out across three widely adopted datasets (i.e., Devign, ReVeal, and BigVul). Inspired by Mixture of Experts (MoE) techniques, we further propose Dynamic Gated Stacking (DGS), a Stacking variant tailored for vulnerability detection. Our results demonstrate that ensemble approaches can significantly improve detection performance, with Boosting excelling in scenarios involving imbalanced datasets. Moreover, DGS consistently outperforms traditional Stacking, particularly in handling class imbalance and multi-class classification tasks. These findings offer valuable insights into building more reliable and effective LLM-based vulnerability detection systems through ensemble learning.