Detecting Machine-Generated Texts: Not Just "AI vs Humans" and Explainability is Complicated

📅 2024-06-26
🏛️ arXiv.org
📈 Citations: 9
Influential: 1
🤖 AI Summary
Existing LLM-generated text detectors are limited to binary human/AI classification and lack interpretability. To address this, we propose a novel ternary detection paradigm centered on explainability, introducing an "undecided" class to robustly handle boundary cases. Methodologically, we construct the first ternary benchmark dataset with human-annotated explanations, spanning multiple LLMs and human authors, and adapt state-of-the-art detectors (RoBERTa, DeBERTa, LogProbs) into a unified ternary classification framework. We further conduct an attribution consistency analysis and a human-AI explanation comparison. Our key contributions are: (1) empirical validation that the "undecided" class significantly improves result reliability and user comprehensibility; (2) the finding that current detectors exhibit substantially lower explanation consistency than humans; and (3) actionable design guidelines and an empirical benchmark for explainable AI-text detection systems.
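One way to picture the ternary scheme is as a binary detector whose middle-confidence band is reserved for the "undecided" class. The following is a minimal sketch, not the paper's actual decision rule; the threshold values are purely illustrative:

```python
# Hypothetical sketch: mapping a binary detector's P(AI-generated) score
# to a ternary label. The low/high thresholds are illustrative values,
# not taken from the paper.

def ternary_label(p_ai: float, low: float = 0.35, high: float = 0.65) -> str:
    """Map P(text is AI-generated) to one of three classes."""
    if p_ai >= high:
        return "ai"
    if p_ai <= low:
        return "human"
    return "undecided"  # boundary cases a binary scheme would force-label
```

The point of the middle band is that borderline scores, which a binary detector would silently round to one side, become an explicit, explainable outcome.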

📝 Abstract
As LLMs rapidly advance, concerns are increasing about the actual authorship of texts we see online and in the real world. The task of distinguishing LLM-authored texts is complicated by the nuanced and overlapping behaviors of both machines and humans. In this paper, we challenge the current practice of treating LLM-generated text detection as a binary classification task of differentiating human from AI. Instead, we introduce a novel ternary text classification scheme, adding an "undecided" category for texts that could be attributed to either source, and we show that this new category is crucial to understanding how to make the detection result more explainable to lay users. This research shifts the paradigm from merely classifying to explaining machine-generated texts, emphasizing the need for detectors to provide clear and understandable explanations to users. Our study involves creating four new datasets comprising texts from various LLMs and human authors. Based on these datasets, we performed binary classification tests to ascertain the most effective SOTA detection methods and identified SOTA LLMs capable of producing harder-to-detect texts. We constructed a new dataset of texts generated by the two top-performing LLMs and human authors, and asked three human annotators to produce ternary labels with explanation notes. This dataset was used to investigate how the three top-performing SOTA detectors behave in the new ternary classification context. Our results highlight why the "undecided" category is much needed from the viewpoint of explainability. Additionally, we conducted an analysis of the explainability of the three best-performing detectors and the explanation notes of the human annotators, revealing insights about the complexity of explainable detection of machine-generated texts. Finally, we propose guidelines for developing future detection systems with improved explanatory power.
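Among the detectors the abstract mentions, the LogProbs family rests on the observation that machine-generated text tends to score a higher average token log-probability under a language model than human text. The toy below illustrates only that scoring idea; real per-token log-probs would come from an actual LM, and the threshold is an assumption, not a value from the paper:

```python
# Toy illustration of a log-probability ("LogProbs") style detector.
# token_logprobs would come from a real language model; the threshold
# here is illustrative, not the paper's.

def mean_logprob(token_logprobs: list[float]) -> float:
    """Average per-token log-probability of a text under some LM."""
    return sum(token_logprobs) / len(token_logprobs)

def classify(token_logprobs: list[float], threshold: float = -3.0) -> str:
    # Higher (less negative) average log-prob = more "predictable" text,
    # which such detectors treat as evidence of machine generation.
    return "ai" if mean_logprob(token_logprobs) > threshold else "human"
```

A binary rule like this is exactly what the paper argues against leaving as-is: scores near the threshold are where a ternary "undecided" outcome earns its keep.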
Problem

Research questions and friction points this paper is trying to address.

Challenges in distinguishing LLM-generated texts from human-written ones
Need for explainable detection beyond binary AI-vs-human classification
Lack of a ternary scheme with an "undecided" category that would support clearer explanations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Ternary classification with undecided category
Explainable detection using human annotations
Guidelines for future explainable detection systems
👥 Authors

Jiazhou Ji
School of Cyber Science and Engineering, Shanghai Jiao Tong University, China

Ruizhe Li
Department of Computing Science, University of Aberdeen, UK

Shujun Li
Institute of Cyber Security for Society (iCSS) & School of Computing, University of Kent, UK

Jie Guo
School of Cyber Science and Engineering, Shanghai Jiao Tong University, China

Weidong Qiu
School of Cyber Science and Engineering, Shanghai Jiao Tong University, China

Zheng Huang
North Dakota State University, Human Computer Interaction

Chiyu Chen
School of Cyber Science and Engineering, Shanghai Jiao Tong University, China

Xiaoyu Jiang
Associate Professor (Research), Beihang University (Deep learning, Industrial Intelligence, AI security)

Xinru Lu
School of Cyber Science and Engineering, Shanghai Jiao Tong University, China