🤖 AI Summary
This paper tackles zero-shot detection of LLM-generated code, without access to the original prompt or the generating model, by introducing Approximate Task Conditioning (ATC). ATC reconstructs the task semantics behind a code snippet via reverse prompt engineering and alignment with code summaries, then models token-level probability distributions and entropy under this approximated task condition. Its key contribution is the first identification of a fundamental distinction between code and natural language in how conditional and unconditional token distributions diverge, enabling detection that is training-free, annotation-free, prompt-agnostic, and model-agnostic. Evaluated on multilingual benchmarks (Python, C++, Java), ATC achieves state-of-the-art performance with strong cross-language robustness. The code and datasets are publicly released to ensure reproducibility.
📝 Abstract
Detecting Large Language Model (LLM)-generated code is a growing challenge with implications for security, intellectual property, and academic integrity. We investigate the role of conditional probability distributions in improving zero-shot LLM-generated code detection when both the code and the corresponding task prompt that generated it are considered. Our key insight is that when evaluating the probability distribution of code tokens using an LLM, there is little difference between LLM-generated and human-written code; however, conditioning on the task reveals notable differences. This contrasts with natural language text, where differences exist even in the unconditional distributions. Leveraging this, we propose a novel zero-shot detection approach that approximates the original task used to generate a given code snippet and then evaluates token-level entropy under the approximated task conditioning (ATC). We further provide a mathematical intuition, contextualizing our method relative to previous approaches. ATC requires neither access to the generator LLM nor the original task prompts, making it practical for real-world applications. To the best of our knowledge, it achieves state-of-the-art results across benchmarks and generalizes across programming languages, including Python, C++, and Java. Our findings highlight the importance of task-level conditioning for LLM-generated code detection. The supplementary materials and code are available at https://github.com/maorash/ATC, including the dataset gathering implementation, to foster further research in this area.
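The core measurement behind the abstract's insight, token-level entropy with and without task conditioning, can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the scorer LLM, the prompt-reconstruction step, and the decision threshold are all abstracted away, and the assumption (hedged here, not stated explicitly in the abstract) is that conditioning on the approximated task lowers entropy more for LLM-generated code than for human-written code. All function names are hypothetical.

```python
import math

def token_entropy(dist):
    """Shannon entropy (in nats) of one next-token probability distribution."""
    return -sum(p * math.log(p) for p in dist if p > 0)

def mean_entropy(dists):
    """Average token-level entropy over all positions of a code snippet."""
    return sum(token_entropy(d) for d in dists) / len(dists)

def atc_score(conditional_dists, unconditional_dists):
    """Entropy gap: unconditional minus task-conditioned mean entropy.

    Each argument is a list of next-token distributions produced by a
    scorer LLM, with and without the approximated task in the context.
    A larger gap suggests the snippet becomes much more predictable
    once the task is supplied (assumed here to indicate LLM origin).
    """
    return mean_entropy(unconditional_dists) - mean_entropy(conditional_dists)

def is_llm_generated(conditional_dists, unconditional_dists, threshold=0.5):
    # The threshold is purely illustrative; a real detector would
    # calibrate it on a benchmark, or rank scores instead.
    return atc_score(conditional_dists, unconditional_dists) > threshold

# Toy example: a snippet whose tokens are near-deterministic once the
# task is given (sharp conditional dists) but ambiguous without it.
conditional = [[0.97, 0.01, 0.01, 0.01], [0.99, 0.01]]
unconditional = [[0.25, 0.25, 0.25, 0.25], [0.5, 0.5]]
print(atc_score(conditional, unconditional))  # positive gap
```

In practice the distributions would come from a causal LM scored twice over the same code, once with the reconstructed task prepended to the context and once without it.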