SENTRA: Selected-Next-Token Transformer for LLM Text Detection

📅 2025-09-15
🤖 AI Summary
This paper addresses the problem of detecting unattributed text generated by large language models (LLMs). We propose a general supervised detection method based on the Transformer architecture. Our approach introduces selected-token probability sequences as interpretable input features to explicitly model the probabilistic distribution patterns inherent in LLM-generated text. To enhance cross-domain generalization—particularly in low-resource or zero-label settings—we adopt a two-stage paradigm: contrastive pretraining on unlabeled data, followed by supervised fine-tuning. Extensive evaluation across three public benchmarks and 24 heterogeneous text domains demonstrates that our method significantly outperforms state-of-the-art baselines. Notably, it achieves an average accuracy improvement of 6.2% in cross-domain detection scenarios. These results validate the effectiveness and robustness of our joint design—integrating probability-sequence modeling with contrastive pretraining—for reliable LLM-generated text detection.
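The core feature described above, a sequence of probabilities the scoring model assigned to each observed next token, can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the bigram table stands in for a real scoring LLM, and the vocabulary is invented for the example.

```python
# Toy stand-in "LM": bigram next-token probabilities over a tiny vocabulary.
# In SENTRA these probabilities would come from a real scoring LLM; the
# table below is an illustrative assumption, not the paper's setup.
BIGRAM = {
    ("the", "cat"): 0.4, ("the", "dog"): 0.3, ("the", "end"): 0.3,
    ("cat", "sat"): 0.7, ("cat", "ran"): 0.3,
    ("dog", "ran"): 0.9, ("dog", "sat"): 0.1,
}

def next_token_prob(prev: str, tok: str) -> float:
    """Probability the stand-in LM assigns to `tok` given `prev`."""
    return BIGRAM.get((prev, tok), 1e-6)  # tiny floor for unseen pairs

def prob_sequence(tokens: list[str]) -> list[float]:
    """Per-position probability the LM assigned to each observed next token.

    A sequence like this (one probability per token position) is the kind of
    interpretable input feature the summary says SENTRA's Transformer
    encoder consumes.
    """
    return [next_token_prob(p, t) for p, t in zip(tokens, tokens[1:])]

features = prob_sequence(["the", "cat", "sat"])  # → [0.4, 0.7]
```

The intuition behind the feature: LLM-generated text tends to consist of tokens the model itself rates as highly probable, so the shape of this sequence differs systematically between human and machine text.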

📝 Abstract
LLMs are becoming increasingly capable and widespread. Consequently, the potential and reality of their misuse are also growing. In this work, we address the problem of detecting LLM-generated text that is not explicitly declared as such. We present a novel, general-purpose, and supervised LLM text detector, SElected-Next-Token tRAnsformer (SENTRA). SENTRA is a Transformer-based encoder leveraging selected-next-token-probability sequences and utilizing contrastive pre-training on large amounts of unlabeled data. Our experiments on three popular public datasets across 24 domains of text demonstrate SENTRA is a general-purpose classifier that significantly outperforms popular baselines in the out-of-domain setting.
Problem

Research questions and friction points this paper is trying to address.

Detecting undisclosed LLM-generated text content
Developing general-purpose supervised detection model
Improving out-of-domain classification performance significantly
Innovation

Methods, ideas, or system contributions that make the work stand out.

Transformer-based encoder for text detection
Leverages selected-next-token-probability sequences
Utilizes contrastive pre-training on unlabeled data
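The contrastive pre-training bullet above can be illustrated with a generic InfoNCE-style objective, in which paired views of the same text attract and all other in-batch pairs repel. This is a minimal NumPy sketch of the standard contrastive loss family, not the paper's exact pretraining objective, which may differ in its view construction and loss details.

```python
import numpy as np

def info_nce_loss(z1: np.ndarray, z2: np.ndarray, temperature: float = 0.1) -> float:
    """Generic InfoNCE loss between two batches of paired embeddings.

    z1[i] and z2[i] are embeddings of two views of the same example; every
    other row in the batch serves as a negative. A standard contrastive
    objective of this kind lets an encoder learn from unlabeled text.
    """
    # L2-normalize so the dot product is cosine similarity.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature             # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Positive pairs sit on the diagonal; minimize their negative log-prob.
    return float(-np.mean(np.diag(log_probs)))
```

With well-aligned pairs the loss is near zero; if the pairing is scrambled, positives score like negatives and the loss grows, which is exactly the gradient signal that shapes the encoder before supervised fine-tuning.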