An Empirical Study of LLM-Based Code Clone Detection

📅 2025-11-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing studies lack rigorous cross-distribution generalization analysis of large language models (LLMs) for code clone detection. Method: We systematically evaluate the cross-dataset performance consistency and response stability of five LLMs across seven carefully constructed datasets—covering diverse clone types and difficulty levels via Levenshtein-ratio-based sampling—drawn from the CodeNet and BigCloneBench benchmarks, under four prompt templates. We propose a novel metric, "multi-turn submission consistency," that jointly quantifies F1-score performance and output stability. Results: The evaluation reveals severe distributional dependence: e.g., o3-mini achieves 0.943 F1 on CodeNet but suffers a sharp decline on BigCloneBench. Yet most models exhibit >90% response consistency and <0.03 F1 fluctuation, indicating strong intra-dataset robustness coexisting with critical cross-dataset generalization bottlenecks. This work highlights the urgent need for distribution-agnostic evaluation protocols in LLM-based code analysis.
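The "multi-turn submission consistency" idea can be sketched as follows. This is an illustrative reconstruction, not the paper's exact definition: each code pair is submitted N times (the paper uses five submissions), a pair counts as consistent when all N verdicts agree, and the F1 spread across rounds measures how much that inconsistency moves the headline score.

```python
# Illustrative sketch of a multi-turn submission consistency metric.
# Function names and the unanimity criterion are our own assumptions.

def submission_consistency(judgments_per_pair):
    """judgments_per_pair: one inner list of boolean clone/not-clone
    verdicts per code pair, collected across repeated submissions.
    Returns the fraction of pairs with unanimous verdicts."""
    unanimous = sum(1 for js in judgments_per_pair if len(set(js)) == 1)
    return unanimous / len(judgments_per_pair)

def f1(preds, labels):
    """Standard F1 over boolean predictions and ground-truth labels."""
    tp = sum(p and l for p, l in zip(preds, labels))
    fp = sum(p and not l for p, l in zip(preds, labels))
    fn = sum(l and not p for p, l in zip(preds, labels))
    if tp == 0:
        return 0.0
    prec, rec = tp / (tp + fp), tp / (tp + fn)
    return 2 * prec * rec / (prec + rec)

def f1_fluctuation(runs, labels):
    """Max-minus-min F1 across repeated submission rounds;
    the paper reports values below 0.03 for most models."""
    scores = [f1(run, labels) for run in runs]
    return max(scores) - min(scores)
```

Reporting both numbers together separates two failure modes: a model can be unstable pair-by-pair yet land on a similar aggregate F1 each round, or stable per pair but simply wrong.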

📝 Abstract
Large language models (LLMs) have demonstrated remarkable capabilities in various software engineering tasks, such as code generation and debugging, because of their ability to translate between programming languages and natural languages. Existing studies have demonstrated the effectiveness of LLMs in code clone detection. However, two crucial issues remain unaddressed: whether LLMs achieve comparable performance across different datasets, and how consistent their responses are in code clone detection. To address these issues, we constructed seven code clone datasets and evaluated five LLMs with four existing prompts on these datasets. The datasets were created by sampling code pairs based on their Levenshtein ratio from two different code collections, CodeNet and BigCloneBench. Our evaluation revealed that although LLMs perform well on CodeNet-related datasets, with o3-mini achieving a 0.943 F1 score, their performance decreased significantly on BigCloneBench-related datasets. Most models achieved high response consistency, with over 90% of judgments remaining unchanged across all five submissions. The F1-score fluctuations caused by this inconsistency are also small, with variations of less than 0.03.
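The Levenshtein-ratio-based sampling can be sketched as below. This is a minimal illustration, not the authors' exact pipeline: pairs are scored by a normalized edit-distance similarity, then bucketed so the sampled datasets span near-identical (easy) to dissimilar (hard) clone pairs. The normalization and bucket thresholds here are our own assumptions.

```python
# Illustrative sketch: score code pairs by normalized Levenshtein
# similarity and bucket them by difficulty. Thresholds are hypothetical.

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance (two-row variant)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def levenshtein_ratio(a: str, b: str) -> float:
    """Similarity in [0, 1]; 1.0 means identical strings."""
    if not a and not b:
        return 1.0
    return 1.0 - levenshtein(a, b) / max(len(a), len(b))

def difficulty_bucket(ratio: float) -> str:
    # Hypothetical cut-offs purely for illustration.
    if ratio >= 0.8:
        return "easy"
    if ratio >= 0.5:
        return "medium"
    return "hard"
```

Sampling uniformly across such buckets, rather than at random, is what lets the benchmark cover both trivially similar Type-1/2 clones and harder, structurally divergent pairs.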
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLMs' cross-dataset generalization in code clone detection
Assessing response consistency of LLMs for clone identification tasks
Comparing performance variations across different code collections and prompts
Innovation

Methods, ideas, or system contributions that make the work stand out.

Evaluated five LLMs using four existing prompts
Constructed seven datasets from CodeNet and BigCloneBench
Assessed performance consistency across different code collections
Wenqing Zhu
Nagoya University
Norihiro Yoshida
Ritsumeikan University
Refactoring, Software Clones, Software Engineering, Software Maintenance, Mining Software Repositories
Eunjong Choi
Kyoto Institute of Technology
Yutaka Matsubara
Nagoya University
Hiroaki Takada
Nagoya University