SAEL: Leveraging Large Language Models with Adaptive Mixture-of-Experts for Smart Contract Vulnerability Detection

📅 2025-07-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing smart contract vulnerability detection methods suffer from three key limitations: poor generalizability of static analysis, weak adaptability of domain-specific pretrained models, and insufficient precision of general-purpose large language models (LLMs) on fine-grained vulnerability types. To address these challenges, we propose SAEL—a novel framework that deeply integrates LLMs with an adaptive Mixture-of-Experts (MoE) system. SAEL leverages prompt engineering to elicit interpretable vulnerability judgments and explanatory features from the LLM; employs a learnable gating network to dynamically weight and fuse heterogeneous representations—including code embeddings (CodeT5/T5), prediction confidence scores, and explanation-derived features; and applies Top-K filtering, softmax normalization, and multi-head self-attention to enable synergistic optimization across multi-source features. Extensive experiments demonstrate that SAEL achieves state-of-the-art performance across diverse vulnerability categories, simultaneously delivering strong generalization capability and high fine-grained detection accuracy.

📝 Abstract
With the increasing security issues in blockchain, smart contract vulnerability detection has become a research focus. Existing vulnerability detection methods have their limitations: 1) Static analysis methods struggle with complex scenarios. 2) Methods based on specialized pre-trained models perform well on specific datasets but have limited generalization capabilities. In contrast, general-purpose Large Language Models (LLMs) demonstrate impressive ability in adapting to new vulnerability patterns. However, they often underperform on specific vulnerability types compared to methods based on specialized pre-trained models. We also observe that explanations generated by general-purpose LLMs can provide fine-grained code understanding information, contributing to improved detection performance. Inspired by these observations, we propose SAEL, an LLM-based framework for smart contract vulnerability detection. We first design targeted prompts to guide LLMs in identifying vulnerabilities and generating explanations, which serve as prediction features. Next, we apply prompt-tuning on CodeT5 and T5 to process contract code and explanations, enhancing task-specific performance. To combine the strengths of each approach, we introduce an Adaptive Mixture-of-Experts architecture. This dynamically adjusts feature weights via a Gating Network, which selects relevant features using TopK filtering and Softmax normalization, and incorporates a Multi-Head Self-Attention mechanism to enhance cross-feature relationships. This design enables effective integration of LLM predictions, explanation features, and code features through gradient optimization. The loss function jointly considers both independent feature performance and overall weighted predictions. Experiments show that SAEL outperforms existing methods across various vulnerabilities.
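The Gating Network described in the abstract selects relevant experts via TopK filtering and renormalizes their scores with Softmax before fusing features. The paper does not publish its exact implementation, so the following is a minimal NumPy sketch of that gating step under stated assumptions: `expert_feats` stands in for the stacked LLM-prediction, explanation, and code features, and `gate_logits` for the learned gating scores; the function name and shapes are illustrative, not the authors' API.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def adaptive_gate(expert_feats, gate_logits, k=2):
    """Fuse expert features via Top-K filtering + softmax weights.

    expert_feats: (n_experts, d) -- one feature vector per expert
    gate_logits:  (n_experts,)   -- learned gating scores (assumed given)
    Returns the weighted fusion and the sparse weight vector.
    """
    n = gate_logits.shape[0]
    topk = np.argsort(gate_logits)[-k:]   # indices of the K largest scores
    masked = np.full(n, -np.inf)
    masked[topk] = gate_logits[topk]      # suppress all non-Top-K experts
    weights = softmax(masked)             # renormalize over the Top-K only
    return weights @ expert_feats, weights
```

In the full model these weights would be learned end-to-end and the fused vector passed through the multi-head self-attention block before classification; this sketch shows only the selection-and-normalization step.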
Problem

Research questions and friction points this paper is trying to address.

Detect smart contract vulnerabilities using adaptive LLM mixtures
Overcome limitations of static analysis and specialized pre-trained models
Enhance detection via explanation features and dynamic weight adjustment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Targeted prompts guide LLMs for vulnerability detection
Prompt-tuning on CodeT5 and T5 enhances performance
Adaptive Mixture-of-Experts dynamically adjusts feature weights
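The abstract states that the loss jointly considers independent feature performance and the overall weighted prediction. The exact formulation is not given here, so the sketch below assumes a simple convex combination of per-expert binary cross-entropy and BCE on the gate-weighted fused probability; `alpha` and the function names are hypothetical.

```python
import numpy as np

def bce(p, y, eps=1e-7):
    # Binary cross-entropy for a single probability, clipped for stability.
    p = np.clip(p, eps, 1 - eps)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def joint_loss(expert_probs, gate_weights, label, alpha=0.5):
    """Hypothetical joint objective: mean per-expert BCE ("independent
    feature performance") plus BCE on the gate-weighted fused prediction,
    balanced by alpha."""
    fused = float(gate_weights @ expert_probs)
    independent = float(np.mean([bce(p, label) for p in expert_probs]))
    return alpha * independent + (1 - alpha) * bce(fused, label)
```

Training both terms together lets gradient optimization improve each expert on its own while still rewarding a well-weighted ensemble, which is the behavior the abstract attributes to SAEL's design.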
Lei Yu
Institute of Software, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China
Shiqi Cheng
Institute of Software, Chinese Academy of Sciences, Beijing, China
Zhirong Huang
Jingyuan Zhang
Institute of Software, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China
Chenjie Shen
University of Chinese Academy of Sciences, Beijing, China
Junyi Lu
Institute of Software, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China
Li Yang
Institute of Software, Chinese Academy of Sciences, Beijing, China
Fengjun Zhang
Institute of Software, Chinese Academy of Sciences, Beijing, China; State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences, Beijing, China
Jiajia Ma
Institute of Software, Chinese Academy of Sciences, Beijing, China