AdaDetectGPT: Adaptive Detection of LLM-Generated Text with Statistical Guarantees

📅 2025-09-29
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the problem of detecting text generated by large language models (LLMs). We propose AdaDetectGPT, a novel detection framework that enhances logit-based statistical detection via a learnable, adaptive witness function. Crucially, it provides finite-sample statistical guarantees on both true positive rate (TPR) and false positive rate (FPR), a theoretical advance absent in prior methods. The approach integrates probabilistic distribution modeling, adaptive machine learning, and controlled error-rate design. Evaluated across diverse LLMs, including LLaMA, ChatGLM, and GPT variants, and multiple benchmark datasets, AdaDetectGPT consistently outperforms state-of-the-art baselines, improving detection accuracy by up to 58%. Extensive experiments demonstrate strong generalization across unseen models and domains, as well as practical deployability. By unifying statistical rigor with empirical effectiveness, AdaDetectGPT establishes a new paradigm for trustworthy identification of LLM-generated content.

📝 Abstract
We study the problem of determining whether a piece of text has been authored by a human or by a large language model (LLM). Existing state-of-the-art logits-based detectors make use of statistics derived from the log-probability of the observed text evaluated under the distribution function of a given source LLM. However, relying solely on log-probabilities can be sub-optimal. In response, we introduce AdaDetectGPT -- a novel classifier that adaptively learns a witness function from training data to enhance the performance of logits-based detectors. We provide statistical guarantees on its true positive rate, false positive rate, true negative rate, and false negative rate. Extensive numerical studies show that AdaDetectGPT nearly uniformly improves on the state-of-the-art method across various combinations of datasets and LLMs, with improvements of up to 58%. A Python implementation of our method is available at https://github.com/Mamba413/AdaDetectGPT.
Problem

Research questions and friction points this paper is trying to address.

Detecting whether text is human-written or LLM-generated with statistical guarantees
Improving logits-based detectors by adaptively learning witness functions
Enhancing detection accuracy across various datasets and language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptively learns witness function from training data
Enhances logits-based detection with statistical guarantees
Improves on the state-of-the-art method by up to 58%
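The pipeline described above can be illustrated with a minimal sketch: score a text by averaging a learned witness function of its per-token log-probabilities, then compare against a threshold calibrated on human-written texts to control the false positive rate. The basis features, the function names, and the linear form of the witness here are illustrative assumptions, not the paper's actual construction.

```python
import numpy as np

def witness(logprobs, w):
    # Hypothetical linear witness over a toy polynomial basis of per-token
    # log-probabilities; AdaDetectGPT learns its witness function from data.
    x = np.asarray(logprobs, dtype=float)
    feats = np.stack([x, x ** 2], axis=-1)  # illustrative basis (assumption)
    return feats @ w

def calibrate_threshold(human_scores, target_fpr=0.05):
    # Empirical (1 - FPR) quantile of witness scores on human-written text;
    # the paper provides finite-sample guarantees for thresholds of this kind.
    return np.quantile(human_scores, 1.0 - target_fpr)

def detect(token_logprobs, w, threshold):
    # Flag a text as LLM-generated when its mean witness score exceeds
    # the calibrated threshold; also return the score for inspection.
    score = witness(token_logprobs, w).mean()
    return bool(score > threshold), score
```

A usage pattern would be to fit `w` on labeled training texts, calibrate the threshold on held-out human texts, and then call `detect` on new inputs.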
Hongyi Zhou
Karlsruhe Institute of Technology
reinforcement learning · imitation learning · robotics
Jin Zhu
School of Mathematics, University of Birmingham, Birmingham, UK
Pingfan Su
Department of Statistics, LSE, London, UK
Kai Ye
Department of Statistics, LSE, London, UK
Ying Yang
Department of Mathematics, Tsinghua University, Beijing, China
Shakeel A O B Gavioli-Akilagun
Department of Decision Analytics and Operations, City University of Hong Kong, Hong Kong, China
Chengchun Shi
London School of Economics and Political Science
Large Language Models · Reinforcement Learning · Statistics