🤖 AI Summary
This work addresses the challenge of detecting LLM-generated text through a zero-shot statistical hypothesis-testing framework that reliably distinguishes human-written from machine-generated text without any model-specific training or fine-tuning. The method leverages the convergence of log-perplexity to average entropy (or cross-entropy), modeling its deviation for finite-length sequences as an exponentially controllable hypothesis test. Grounded in concentration inequalities and sequence-level stochastic-process theory, the paper proves that both Type I and Type II error rates decay exponentially with increasing text length. Empirical evaluation across diverse LLMs (e.g., GPT, Llama, Claude) and domains demonstrates high accuracy (>95%) and strong robustness, significantly outperforming existing zero-shot baselines. The approach provides both theoretically grounded guarantees and a practical, deployable tool for verifiable provenance attribution in misinformation detection.
📄 Abstract
Verifying the provenance of content is crucial to the functioning of many organizations, e.g., educational institutions, social media platforms, and firms. This problem is becoming increasingly difficult as text generated by Large Language Models (LLMs) becomes almost indistinguishable from human-written content. In addition, many institutions use in-house LLMs and want to ensure that external, non-sanctioned LLMs do not produce content within the institution. In this paper, we answer the following question: Given a piece of text, can we identify whether it was produced by LLM $A$ or $B$ (where $B$ can be a human)? We model LLM-generated text as a sequential stochastic process with complete dependence on history and design zero-shot statistical tests to distinguish between (i) text generated by two different sets of LLMs, $A$ (in-house) and $B$ (non-sanctioned), and (ii) LLM-generated and human-generated text. We prove that the type I and type II errors of our tests decrease exponentially in the text length. In designing our tests, we derive concentration inequalities on the difference between the log-perplexity and the average entropy of the string under $A$. Specifically, for a given string, we show that if the string is generated by $A$, its log-perplexity under $A$ converges to its average entropy under $A$, except with a probability that is exponentially small in the string length. We also show that if $B$ generates the text, then, except with an exponentially small probability in the string length, the log-perplexity of the string under $A$ converges to the average cross-entropy of $B$ and $A$. Lastly, we present preliminary experimental results supporting our theory. By enabling reliable (with high probability) attribution of the origin of harmful LLM-generated text of arbitrary length, our tests can help fight misinformation.
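The test described above can be sketched numerically: compute the log-perplexity of a token sequence under model $A$ and compare it against $A$'s average entropy, attributing the text to $A$ only when the two are close. The sketch below is a minimal toy illustration, not the paper's implementation: the next-token distributions `p_a` and `p_b` are assumed fixed (in practice they would come from each LLM's softmax outputs at every step), and the threshold `tau` is an arbitrary choice placed halfway between the two limit points the concentration results identify.

```python
import numpy as np

rng = np.random.default_rng(0)
V, T = 50, 5000  # vocabulary size, text length in tokens

# Hypothetical next-token distributions (assumptions for this sketch):
# model A is Zipf-like, model B (standing in for a human or another LLM)
# is uniform. Real tests would use the LLMs' per-step softmax outputs.
p_a = 1.0 / np.arange(1, V + 1)
p_a /= p_a.sum()
p_b = np.full(V, 1.0 / V)

entropy_a = -np.sum(p_a * np.log(p_a))      # H(A): average entropy under A
cross_entropy = -np.sum(p_b * np.log(p_a))  # H(B, A): cross-entropy of B and A

def log_perplexity_under_a(tokens):
    """Average negative log-likelihood of the observed tokens under A."""
    return -np.mean(np.log(p_a[tokens]))

# Decision rule: attribute the text to A iff its log-perplexity under A is
# within tau of H(A). Halfway between H(A) and H(B, A) is one simple choice.
tau = 0.5 * abs(cross_entropy - entropy_a)

def generated_by_a(tokens):
    return abs(log_perplexity_under_a(tokens) - entropy_a) < tau

text_from_a = rng.choice(V, size=T, p=p_a)
text_from_b = rng.choice(V, size=T, p=p_b)
print(generated_by_a(text_from_a), generated_by_a(text_from_b))  # True False
```

As the concentration results predict, the log-perplexity of $A$'s own text clusters tightly around $H(A)$, while text from $B$ clusters around the cross-entropy $H(B, A)$, and the gap between the two makes both error probabilities vanish rapidly as $T$ grows.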