Zero-Shot Detection of LLM-Generated Code via Approximated Task Conditioning

📅 2025-06-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
Addressing zero-shot detection of LLM-generated code, without access to the original prompt or the generating model, this paper introduces Approximated Task Conditioning (ATC), a novel paradigm. ATC reconstructs the task semantics of a code snippet via reverse prompt engineering and alignment with code summaries, then models token-level probability distributions and entropy under this approximated task condition. Its key contribution is the first identification of a fundamental distinction between code and natural language: for code, the conditional and unconditional token distributions of human-written and LLM-generated samples diverge only under task conditioning, enabling reliable, training-free, annotation-free, prompt-agnostic, and model-agnostic detection. Evaluated on multilingual benchmarks (Python, C++, Java), ATC achieves state-of-the-art performance with strong cross-language robustness. The code and datasets are publicly released for reproducibility.

📝 Abstract
Detecting Large Language Model (LLM)-generated code is a growing challenge with implications for security, intellectual property, and academic integrity. We investigate the role of conditional probability distributions in improving zero-shot LLM-generated code detection when considering both the code and the corresponding task prompt that generated it. Our key insight is that when evaluating the probability distribution of code tokens using an LLM, there is little difference between LLM-generated and human-written code. However, conditioning on the task reveals notable differences. This contrasts with natural language text, where differences exist even in the unconditional distributions. Leveraging this, we propose a novel zero-shot detection approach that approximates the original task used to generate a given code snippet and then evaluates token-level entropy under the approximated task conditioning (ATC). We further provide a mathematical intuition, contextualizing our method relative to previous approaches. ATC requires neither access to the generator LLM nor the original task prompts, making it practical for real-world applications. To the best of our knowledge, it achieves state-of-the-art results across benchmarks and generalizes across programming languages, including Python, C++, and Java. Our findings highlight the importance of task-level conditioning for LLM-generated code detection. The supplementary materials and code are available at https://github.com/maorash/ATC, including the dataset gathering implementation, to foster further research in this area.
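The abstract's central quantity, token-level entropy with and without task conditioning, can be illustrated with a minimal sketch. The distributions below are made-up stand-ins for the next-token distributions a scoring LLM would assign to a code snippet; the intuition the paper describes is that conditioning on the (approximated) task sharpens these distributions for LLM-generated code, lowering their mean entropy.

```python
import math

def token_entropy(probs):
    # Shannon entropy (in nats) of one next-token distribution.
    return -sum(p * math.log(p) for p in probs if p > 0)

def mean_entropy(distributions):
    # Average entropy over all token positions of a snippet.
    return sum(token_entropy(d) for d in distributions) / len(distributions)

# Hypothetical per-token distributions for the same snippet, scored
# once without context and once conditioned on an approximated task
# description (these numbers are illustrative, not from the paper).
unconditional = [[0.25, 0.25, 0.25, 0.25], [0.4, 0.3, 0.2, 0.1]]
conditional   = [[0.9, 0.05, 0.03, 0.02], [0.7, 0.2, 0.05, 0.05]]

# Conditioning sharpens the distributions and lowers mean entropy;
# the magnitude of this drop is the kind of signal ATC relies on.
```

Here the conditional mean entropy is well below the unconditional one; a detector can threshold on such entropy statistics without ever seeing the generator model or the original prompt.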
Problem

Research questions and friction points this paper is trying to address.

Detecting LLM-generated code without prior examples
Differentiating human-written and LLM-generated code via task conditioning
Achieving state-of-the-art detection across multiple programming languages
Innovation

Methods, ideas, or system contributions that make the work stand out.

Approximated Task Conditioning for detection
Token-level entropy evaluation method
No need for generator LLM access
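The three innovations above compose into a single pipeline: summarize the code to approximate its task, score the code's tokens under that task, and threshold the resulting entropy. The sketch below wires these steps together; `summarize` and `score_tokens` are hypothetical stand-ins for a code-summarization model and a scoring LLM (neither interface is taken from the paper's released code), so toy stubs are used to make the sketch executable.

```python
import math

def entropy(dist):
    # Shannon entropy (in nats) of one next-token distribution.
    return -sum(p * math.log(p) for p in dist if p > 0)

def atc_detect(code, summarize, score_tokens, threshold):
    """Sketch of an ATC-style detector under stated assumptions.

    summarize(code)          -> approximated task description (str)
    score_tokens(code, task) -> per-token next-token distributions
    Returns True if the snippet looks LLM-generated.
    """
    task = summarize(code)            # step 1: approximate the task
    dists = score_tokens(code, task)  # step 2: condition scoring on it
    mean_h = sum(entropy(d) for d in dists) / len(dists)
    return mean_h < threshold         # step 3: low entropy -> LLM-like

# Toy stubs standing in for real models, for illustration only.
fake_summarize = lambda code: "add two numbers"
fake_score = lambda code, task: [[0.9, 0.1], [0.8, 0.2]]
```

Note that no generator model is ever queried: the scoring LLM only evaluates the given code under the reconstructed task, which is what makes the method prompt-agnostic and model-agnostic.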