Explaining Generalization of AI-Generated Text Detectors Through Linguistic Analysis

📅 2026-01-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the limited generalization of current AI-generated text detectors under cross-prompt, cross-model, and cross-domain settings, where failure mechanisms remain poorly understood. The authors construct a comprehensive benchmark encompassing six prompting strategies, seven large language models, and four domain-specific datasets to systematically evaluate detector robustness. For the first time, they establish a quantitative link between distributional shifts in an 80-dimensional linguistic feature space and detection accuracy. Their analysis reveals that variations in key linguistic attributes—such as tense usage and pronoun frequency—significantly degrade generalization performance. These findings provide both interpretable theoretical insights and empirical evidence to guide the development of more robust detection systems.

📝 Abstract
AI-text detectors achieve high accuracy on in-domain benchmarks but often struggle to generalize across generation conditions such as unseen prompts, model families, or domains. While prior work has reported these generalization gaps, there is little insight into their underlying causes. In this work, we present a systematic study aimed at explaining generalization behavior through linguistic analysis. We construct a comprehensive benchmark that spans 6 prompting strategies, 7 large language models (LLMs), and 4 domain datasets, resulting in a diverse set of human- and AI-generated texts. Using this dataset, we fine-tune classification-based detectors on various generation settings and evaluate their cross-prompt, cross-model, and cross-dataset generalization. To explain the variance in performance, we compute correlations between generalization accuracies and the shifts of 80 linguistic features between training and test conditions. Our analysis reveals that generalization performance for specific detectors and evaluation conditions is significantly associated with linguistic features such as tense usage and pronoun frequency.
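The correlation analysis described above can be illustrated with a minimal sketch. This is not the authors' code: the feature representation, the shift measure (here, absolute difference of per-feature means), and the toy numbers are all assumptions made for illustration; the paper's actual 80-feature pipeline and statistics may differ.

```python
import numpy as np


def feature_shift(train_feats, test_feats):
    """Per-feature shift between conditions: absolute difference of the
    feature means over training texts vs. test texts.
    Inputs are arrays of shape (num_texts, num_features)."""
    return np.abs(train_feats.mean(axis=0) - test_feats.mean(axis=0))


def shift_accuracy_correlation(shifts, accuracies):
    """Pearson correlation between one feature's shift magnitude across
    evaluation conditions and the detector's accuracy in those conditions."""
    return np.corrcoef(shifts, accuracies)[0, 1]


# Toy data (hypothetical): five train/test condition pairs in which a larger
# shift in some feature (e.g. past-tense rate) coincides with lower accuracy.
shifts = np.array([0.01, 0.05, 0.10, 0.20, 0.30])
accuracies = np.array([0.95, 0.90, 0.84, 0.71, 0.60])

r = shift_accuracy_correlation(shifts, accuracies)
print(r)  # strongly negative: bigger feature shift, worse generalization
```

In this setup, a strongly negative correlation for a feature is evidence that distributional shift in that feature is associated with the detector's generalization failures, which is the kind of link the paper quantifies.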
Problem

Research questions and friction points this paper is trying to address.

generalization
AI-generated text detection
linguistic analysis
feature shift
large language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

linguistic analysis
generalization
AI-generated text detection
feature shift
large language models
Yuxi Xia
Faculty of Computer Science, University of Vienna, Vienna, Austria
Kinga Stańczak
Department of Language Science and Technology, Saarland University, Saarbrücken, Germany
Benjamin Roth
University of Vienna
Natural Language Processing · Machine Learning