TriFusion-LLM: Prior-Guided Multimodal Fusion with LLM Arbitration for Fine-grained Code Clone Detection

📅 2026-03-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses a key limitation of existing code clone detection methods, which are largely confined to binary classification and struggle to distinguish among the seven fine-grained clone types in BigCloneBench. To overcome this, we propose a prior-guided multimodal fusion framework that jointly integrates heuristic similarity priors, abstract syntax tree (AST) structural features, and CodeBERT semantic embeddings. We further introduce a large language model (LLM)-based selective arbitration mechanism that performs inference augmentation only on high-uncertainty samples (approximately 0.2% of the total), keeping computational overhead low. Our approach raises the Macro-F1 score on the seven-class BigCloneBench task from 0.695 to 0.878, effectively balancing performance and efficiency.
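The fusion step described above can be sketched as a simple feature concatenation across the three modalities. This is an illustrative sketch, not the paper's implementation: the feature names and dimensionalities (two heuristic similarity scores, a 16-bin AST histogram, a 768-dimensional CodeBERT embedding) are assumptions chosen for the example.

```python
import numpy as np

def fuse_features(heuristic_prior, ast_features, codebert_embedding):
    """Concatenate heuristic, structural, and semantic features into one
    vector for a single downstream seven-class clone-type predictor."""
    return np.concatenate([heuristic_prior, ast_features, codebert_embedding])

# Toy inputs with made-up dimensionalities (hypothetical, for illustration).
prior = np.array([0.8, 0.1])   # e.g. token-overlap and TF-IDF similarity scores
ast = np.zeros(16)             # e.g. AST node-type histogram
emb = np.zeros(768)            # CodeBERT [CLS] embedding size
fused = fuse_features(prior, ast, emb)
print(fused.shape)  # (786,)
```

In practice the fused vector would feed a trained classifier head; concatenation is only the simplest fusion choice, and the paper's actual fusion mechanism may be more elaborate.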

📝 Abstract
Code clone detection (CCD) supports software maintenance, refactoring, and security analysis. Although pre-trained models capture code semantics, most work reduces CCD to binary classification, overlooking the heterogeneity of clone types and the seven fine-grained categories in BigCloneBench. We present Full Model, a multimodal fusion framework that jointly integrates heuristic similarity priors from classical machine learning, structural signals from abstract syntax trees (ASTs), and deep semantic embeddings from CodeBERT into a single predictor. By fusing structural, statistical, and semantic representations, Full Model improves discrimination among fine-grained clone types while keeping inference cost practical. On the seven-class BigCloneBench benchmark, Full Model raises Macro-F1 from 0.695 to 0.875. Ablation studies show that using the primary model's probability distribution as a prior to guide selective arbitration by a large language model (LLM) substantially outperforms blind reclassification; arbitrating only ~0.2% of high-uncertainty samples yields a further 0.3-point absolute gain in Macro-F1. Overall, Full Model achieves an effective performance-cost trade-off for fine-grained CCD and offers a deployable solution for large-scale industrial use.
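The prior-guided selective arbitration described in the abstract can be sketched as an entropy gate over the primary model's probability distribution: only high-uncertainty samples are sent to the LLM, and the distribution itself is passed along as a prior rather than asking the LLM to reclassify blindly. Everything here is a minimal sketch under assumptions; `toy_arbiter` and the entropy threshold of 1.2 are hypothetical stand-ins, not the paper's actual arbiter or tuning.

```python
import math

def entropy(probs):
    """Shannon entropy of a probability distribution (natural log)."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def needs_arbitration(probs, threshold):
    # High entropy => the primary model is unsure among clone types.
    return entropy(probs) > threshold

def classify(probs, llm_arbitrate, threshold=1.2):
    if needs_arbitration(probs, threshold):
        # Pass the prior distribution so the LLM refines the primary
        # prediction instead of reclassifying from scratch.
        return llm_arbitrate(probs)
    return max(range(len(probs)), key=lambda i: probs[i])

# Hypothetical arbiter: restricts attention to the primary model's top-2
# candidates (a real system would query an LLM here).
def toy_arbiter(probs):
    top2 = sorted(range(len(probs)), key=lambda i: -probs[i])[:2]
    return top2[0]

confident = [0.9, 0.02, 0.02, 0.02, 0.02, 0.01, 0.01]
uncertain = [0.3, 0.25, 0.2, 0.1, 0.05, 0.05, 0.05]
print(classify(confident, toy_arbiter))  # low entropy: argmax used directly -> 0
print(classify(uncertain, toy_arbiter))  # high entropy: routed to the arbiter
```

Because only distributions above the entropy threshold reach the arbiter, the expensive LLM call is incurred on a small fraction of samples (the paper reports ~0.2%), which is what keeps the overall inference cost practical.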
Problem

Research questions and friction points this paper is trying to address.

code clone detection
fine-grained classification
multimodal fusion
BigCloneBench
clone type heterogeneity
Innovation

Methods, ideas, or system contributions that make the work stand out.

multimodal fusion
LLM arbitration
code clone detection
fine-grained classification
prior-guided learning