HACo-Det: A Study Towards Fine-Grained Machine-Generated Text Detection under Human-AI Coauthoring

📅 2025-06-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
Fine-grained detection of AI-generated text segments in human-AI collaborative writing remains underexplored, hindering attribution and accountability. Method: We introduce HACo-Det, the first human-AI collaborative writing dataset with word-level annotations, and propose a fine-grained detection paradigm that adapts document-level detectors to word- and sentence-level attribution. We systematically adapt and fine-tune seven representative detector families (statistical, embedding-based, and LLM-based). Contribution/Results: Our best fine-tuned model achieves a word-level F1 score of 0.682, significantly outperforming metric-based detectors (average F1 0.462). We further identify fundamental limitations of current approaches in contextual modeling and cross-domain generalization. This work establishes the feasibility of fine-grained AI contribution attribution, providing a new benchmark and methodological foundation for traceable, auditable AI-generated content.

📝 Abstract
The misuse of large language models (LLMs) poses potential risks, motivating the development of machine-generated text (MGT) detection. Existing literature primarily concentrates on binary, document-level detection, thereby neglecting texts that are composed jointly by human and LLM contributions. Hence, this paper explores the possibility of fine-grained MGT detection under human-AI coauthoring. We suggest fine-grained detectors can pave pathways toward coauthored text detection with a numeric AI ratio. Specifically, we propose a dataset, HACo-Det, which produces human-AI coauthored texts via an automatic pipeline with word-level attribution labels. We retrofit seven prevailing document-level detectors to generalize them to word-level detection. Then we evaluate these detectors on HACo-Det on both word- and sentence-level detection tasks. Empirical results show that metric-based methods struggle to conduct fine-grained detection with a 0.462 average F1 score, while finetuned models show superior performance and better generalization across domains. However, we argue that fine-grained co-authored text detection is far from solved. We further analyze factors influencing performance, e.g., context window, and highlight the limitations of current methods, pointing to potential avenues for improvement.
Problem

Research questions and friction points this paper is trying to address.

Detecting fine-grained machine-generated text in human-AI coauthored content
Evaluating word-level and sentence-level detection performance of existing methods
Identifying limitations and improvement avenues for coauthored text detection
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fine-grained detection with numeric AI ratio
Automatic pipeline for word-level attribution labels
Retrofitting document-level detectors for word-level tasks
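To make the word-level evaluation concrete, the sketch below shows how a word-level F1 score (the metric reported above) can be computed over aligned per-word labels. This is a minimal illustration, not the paper's evaluation code; the label convention (1 = AI-written word, 0 = human-written word) is an assumption for this example.

```python
# Hedged sketch: word-level F1 for AI-text attribution on coauthored text.
# Assumes each word carries a binary label: 1 = AI-written, 0 = human-written.

def word_level_f1(gold, pred):
    """F1 over the positive (AI-written) class for aligned word labels."""
    if len(gold) != len(pred):
        raise ValueError("label sequences must be aligned word-for-word")
    tp = sum(1 for g, p in zip(gold, pred) if g == 1 and p == 1)
    fp = sum(1 for g, p in zip(gold, pred) if g == 0 and p == 1)
    fn = sum(1 for g, p in zip(gold, pred) if g == 1 and p == 0)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Example: a six-word coauthored sentence whose middle span is AI-written.
gold = [0, 0, 1, 1, 1, 0]   # reference word-level attribution labels
pred = [0, 1, 1, 1, 0, 0]   # a detector's word-level predictions
print(round(word_level_f1(gold, pred), 3))  # 0.667
```

Scoring per word rather than per document is what lets such a detector also report a numeric AI ratio (the fraction of words labeled 1) for a coauthored text.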