🤖 AI Summary
This paper addresses the problem of detecting unattributed text generated by large language models (LLMs). We propose a general supervised detection method based on the Transformer architecture. Our approach introduces selected-next-token probability sequences as interpretable input features that explicitly model the probability-distribution patterns characteristic of LLM-generated text. To enhance cross-domain generalization—particularly in low-resource or zero-label settings—we adopt a two-stage paradigm: contrastive pretraining on unlabeled data, followed by supervised fine-tuning. Extensive evaluation across three public benchmarks and 24 heterogeneous text domains demonstrates that our method significantly outperforms state-of-the-art baselines. Notably, it achieves an average accuracy improvement of 6.2% in cross-domain detection scenarios. These results validate the effectiveness and robustness of our joint design—integrating probability-sequence modeling with contrastive pretraining—for reliable detection of LLM-generated text.
📝 Abstract
LLMs are becoming increasingly capable and widespread. Consequently, the potential and reality of their misuse are also growing. In this work, we address the problem of detecting LLM-generated text that is not explicitly declared as such. We present a novel, general-purpose, supervised LLM text detector, SElected-Next-Token tRAnsformer (SENTRA). SENTRA is a Transformer-based encoder that leverages selected-next-token-probability sequences and contrastive pre-training on large amounts of unlabeled data. Our experiments on three popular public datasets spanning 24 text domains demonstrate that SENTRA is a general-purpose classifier that significantly outperforms popular baselines in the out-of-domain setting.
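The core feature the abstract describes—a sequence of probabilities that a scoring language model assigns to the observed next tokens—can be illustrated with a minimal sketch. The toy bigram model and the names `ToyBigramLM` and `probability_sequence` below are hypothetical stand-ins for illustration only, not SENTRA's actual implementation (which uses an LLM's next-token distributions and a Transformer encoder over the resulting sequences):

```python
from collections import Counter, defaultdict
from typing import Dict, List

class ToyBigramLM:
    """A stand-in scoring model estimating P(next token | current token).

    In the paper's setting this role would be played by a real LLM's
    next-token distribution; a bigram model keeps the sketch self-contained.
    """
    def __init__(self, corpus: List[str]):
        self.counts: Dict[str, Counter] = defaultdict(Counter)
        for sentence in corpus:
            tokens = sentence.split()
            for prev, nxt in zip(tokens, tokens[1:]):
                self.counts[prev][nxt] += 1

    def next_token_prob(self, prev: str, token: str) -> float:
        total = sum(self.counts[prev].values())
        return self.counts[prev][token] / total if total else 0.0

def probability_sequence(lm: ToyBigramLM, text: str) -> List[float]:
    """Score each observed token under the model, producing the kind of
    probability sequence a detector could consume as input features."""
    tokens = text.split()
    return [lm.next_token_prob(p, t) for p, t in zip(tokens, tokens[1:])]

lm = ToyBigramLM(["the cat sat", "the cat ran", "the dog sat"])
features = probability_sequence(lm, "the cat sat")
print(features)  # one probability per observed token transition
```

Text the scoring model finds predictable yields consistently high per-token probabilities, while human-written text tends to produce a more irregular sequence—this contrast is what a downstream classifier can exploit.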