🤖 AI Summary
Existing studies lack rigorous cross-distribution generalization analysis of large language models (LLMs) for code clone detection. Method: We systematically evaluate the cross-dataset performance consistency and response stability of five LLMs across seven carefully constructed datasets—covering diverse clone types and difficulty levels via Levenshtein-ratio-based sampling—on CodeNet and BigCloneBench benchmarks, under four prompt templates. We propose a novel metric, “multi-turn submission consistency,” jointly quantifying both F1-score performance and output stability. Contribution/Results: Results reveal severe distributional dependence: e.g., o3-mini achieves 0.943 F1 on CodeNet but suffers a sharp decline on BigCloneBench. Yet most models exhibit >90% response consistency and <0.03 F1 fluctuation, indicating strong intra-dataset robustness coexisting with critical cross-dataset generalization bottlenecks. This work highlights the urgent need for distribution-agnostic evaluation protocols in LLM-based code analysis.
📝 Abstract
Large language models (LLMs) have demonstrated remarkable capabilities in various software engineering tasks, such as code generation and debugging, because of their ability to translate between programming languages and natural languages. Existing studies have demonstrated the effectiveness of LLMs in code clone detection. However, two crucial issues remain unaddressed: the ability of LLMs to achieve comparable performance across different datasets and the consistency of LLMs' responses in code clone detection. To address these issues, we constructed seven code clone datasets and then evaluated five LLMs in four existing prompts with these datasets. The datasets were created by sampling code pairs using their Levenshtein ratio from two different code collections, CodeNet and BigCloneBench. Our evaluation revealed that although LLMs perform well in CodeNet-related datasets, with o3-mini achieving a 0.943 F1 score, their performance significantly decreased in BigCloneBench-related datasets. Most models achieved a high response consistency, with over 90% of judgments remaining consistent across all five submissions. The fluctuations of the F1 score affected by inconsistency are also tiny; their variations are less than 0.03.