🤖 AI Summary
To address educational inequity, assessment distortion, and ethical risks arising from the proliferation of LLM-generated code, this paper proposes the first multi-lingual, multi-generator, and multi-domain framework for detecting machine-generated code. Methodologically, it achieves zero-shot generalization across unseen programming languages, generative models, and application domains, a capability not demonstrated by prior work. It introduces a unified evaluation paradigm encompassing both author attribution and mixed-author identification. The framework integrates handcrafted features, representations from pre-trained code models (CodeBERT and GraphCodeBERT), and lightweight LLM feature distillation, augmented with multi-granularity code representations and rigorous data quality control. Extensive experiments demonstrate state-of-the-art performance across diverse multi-lingual, multi-generator, and multi-domain benchmarks. Crucially, the framework generalizes robustly to previously unseen languages, newly deployed code generators, and real-world educational code corpora, establishing a new benchmark for machine-generated code detection.
📝 Abstract
Large language models (LLMs) have revolutionized code generation, automating programming with remarkable efficiency. However, these advancements challenge programming skills, ethics, and assessment integrity, making the detection of LLM-generated code essential for maintaining accountability and standards. Although there has been some research on this problem, it generally lacks domain coverage and robustness and covers only a small number of programming languages. To this end, we propose a framework capable of distinguishing between human- and LLM-written code across multiple programming languages, code generators, and domains. We build a large-scale dataset from renowned platforms and LLM-based code generators, apply rigorous data quality checks and feature engineering, and comparatively evaluate traditional machine learning models, pre-trained language models (PLMs), and LLMs for code detection. We further evaluate out-of-domain scenarios, including detecting single and hybrid authorship of generated code and generalizing to unseen models, domains, and programming languages. Extensive experiments show that our framework effectively distinguishes human- from LLM-written code and sets a new benchmark for this task.