🤖 AI Summary
To address the low precision of conventional methods in detecting semantic clones (Type-4)—code fragments that are functionally equivalent but syntactically dissimilar—this paper proposes a two-stage semantic clone detection framework. In the first stage, a large language model (LLM) performs fine-grained semantic similarity filtering; in the second stage, the same LLM automatically generates diverse test inputs, enabling cross-execution (running both fragments on the same generated inputs) and output comparison to rigorously verify functional equivalence. By integrating deep semantic understanding with execution-based validation, the framework overcomes the limitations of purely static analysis and the unreliability of LLM-only judgment. Evaluated on Python programs, our approach significantly outperforms baseline methods, achieving improvements of +18.3% in precision, +22.7% in recall, and +20.1% in F1-score. This work establishes a novel, efficient, and empirically verifiable paradigm for semantic clone detection.
📝 Abstract
Code clone detection is a critical task in software engineering, aimed at identifying duplicated or similar code fragments within or across software systems. Traditional methods often fail to capture functional equivalence, particularly for semantic clones (Type-4), where code fragments implement identical functionality despite differing syntactic structures. Recent advances in large language models (LLMs) have shown promise in understanding code semantics. However, directly applying LLMs to code clone detection yields suboptimal results due to their sensitivity to syntactic differences. To address these challenges, we propose a novel two-stage framework that combines LLM-based screening with execution-based validation for detecting semantic clones in Python programs. In the first stage, an LLM evaluates code pairs to filter out obvious non-clones based on semantic analysis. For pairs not identified as clones, the second stage employs an execution-based validation approach, using LLM-generated test inputs to assess functional equivalence through cross-execution validation. Our experimental evaluation demonstrates significant improvements in precision, recall, and F1-score over direct LLM-based detection, highlighting the framework's effectiveness in identifying semantic clones. Future work includes exploring cross-language clone detection and optimizing the framework for large-scale applications.
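To make the second stage concrete, here is a minimal sketch of cross-execution validation: both candidate fragments are run on the same set of test inputs and are treated as semantic clones only if their observable behavior agrees on every input. The fragment pair and the input set are illustrative stand-ins (the paper generates inputs with an LLM; here they are hardcoded), and all names are hypothetical.

```python
def fragment_a(nums):
    """Candidate A: sum of squares via an explicit loop."""
    total = 0
    for n in nums:
        total += n * n
    return total

def fragment_b(nums):
    """Candidate B: syntactically different, same functionality."""
    return sum(n ** 2 for n in nums)

def functionally_equivalent(f, g, test_inputs):
    """Cross-execute two fragments and compare their outputs.

    Each fragment is run on every generated input; results must match
    (same return value, or same exception type) for the pair to be
    classified as a semantic clone.
    """
    for args in test_inputs:
        try:
            out_f = ("ok", f(*args))
        except Exception as e:
            out_f = ("err", type(e).__name__)
        try:
            out_g = ("ok", g(*args))
        except Exception as e:
            out_g = ("err", type(e).__name__)
        if out_f != out_g:
            return False  # behavior diverges on this input: not a clone
    return True

# Inputs of the kind an LLM might generate: typical, empty, negative, large.
test_inputs = [([1, 2, 3],), ([],), ([-5, 0, 5],), ([10] * 100,)]
print(functionally_equivalent(fragment_a, fragment_b, test_inputs))  # True
```

In a full implementation each fragment would run in a sandboxed subprocess with a timeout, since candidate code may loop forever or have side effects; this sketch omits that isolation for brevity.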