🤖 AI Summary
This study addresses the problem of online watermark detection in text generated by large language models, aiming to reliably distinguish AI-generated content from human-written text. The authors formulate this task as a sequential test of independence and propose a unified online detection framework based on e-processes, which permits optional stopping at any time while maintaining statistical validity. They design adaptive, empirically constructed e-processes that enhance detection power without compromising the theoretical guarantees. Experimental results demonstrate that the proposed approach matches or exceeds existing methods in detection accuracy while providing rigorous anytime-valid control of statistical error.
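As background (these are standard facts about e-processes, not details specific to this paper's construction): under the null hypothesis of no watermark, an e-process is a nonnegative process whose value at any stopping time has expectation at most one, and Ville's inequality turns that into an anytime-valid error guarantee.

```latex
% Under H_0 (no watermark), an e-process (E_t)_{t \ge 0} with E_0 = 1
% satisfies \mathbb{E}_{H_0}[E_\tau] \le 1 for every stopping time \tau.
% Ville's inequality then yields the anytime-valid guarantee
\[
  \mathbb{P}_{H_0}\bigl(\exists\, t \ge 1 : E_t \ge 1/\alpha\bigr) \le \alpha,
\]
% so rejecting H_0 the first time E_t \ge 1/\alpha controls the
% type-I error at level \alpha even under optional stopping.
```

This is what allows the detector to monitor the generated text token by token and stop as soon as the evidence is sufficient, without inflating the false-positive rate.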
📝 Abstract
Watermarking for large language models (LLMs) has emerged as an effective tool for distinguishing AI-generated text from human-written content. Statistically, watermark schemes induce dependence between generated tokens and a pseudo-random sequence, reducing watermark detection to a hypothesis test of independence. We develop a unified framework for LLM watermark detection based on e-processes, providing anytime-valid guarantees for online testing. We propose several methods for constructing empirically adaptive e-processes that enhance detection power. In addition, we establish theoretical results characterizing the power properties of the proposed procedures. Experiments demonstrate that the proposed framework achieves competitive performance compared to existing watermark detection methods.
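To make the testing setup concrete, here is a minimal illustrative sketch of a betting-style e-process detector. It is not the paper's construction: we assume (hypothetically) that each token yields a pivotal statistic that is Uniform(0,1) for human text and stochastically larger under the watermark, and the function names, the bet size `lam`, and the Beta alternative are all our own choices for illustration.

```python
import numpy as np

def e_process(pivots, lam=1.5, alpha=0.05):
    """Betting-style e-process: multiply e-values e_t = 1 + lam*(u_t - 0.5).

    Under H0 the pivots u_t are Uniform(0,1), so E[e_t] = 1 and the
    running product is a nonnegative martingale; by Ville's inequality,
    declaring a watermark the first time it exceeds 1/alpha controls
    the type-I error at level alpha at any stopping time.
    """
    assert -2.0 <= lam <= 2.0  # keeps every e-value nonnegative
    wealth, path, stop = 1.0, [], None
    for t, u in enumerate(pivots, start=1):
        wealth *= 1.0 + lam * (u - 0.5)
        path.append(wealth)
        if stop is None and wealth >= 1.0 / alpha:
            stop = t  # earliest time the watermark is declared
    return path, stop

rng = np.random.default_rng(0)
human = rng.uniform(size=500)          # H0: uniform pivotal statistics
marked = rng.beta(3.0, 1.0, size=500)  # hypothetical watermark: pivots skew upward

path_h, stop_h = e_process(human)
path_w, stop_w = e_process(marked)
print("human stop:", stop_h, "| watermarked stop:", stop_w)
```

With the upward-skewed pivots, the product grows exponentially and typically crosses the 1/alpha = 20 threshold within a few dozen tokens, while under the uniform null the crossing probability over the whole sequence is at most alpha. Adaptive constructions, as studied in the paper, tune the bet rather than fixing `lam` in advance.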