Beyond Raw Detection Scores: Markov-Informed Calibration for Boosting Machine-Generated Text Detection

📅 2026-02-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the susceptibility of existing metric-based machine-generated text detectors to stochasticity in generation, which introduces bias in token-level scores. The study presents the first systematic analysis of the sources of such bias and proposes a lightweight calibration mechanism grounded in Markov random fields. By modeling the relationship between local contextual similarity and initial generation instability, the method performs context-aware correction of detection scores. It seamlessly integrates into existing detectors without requiring additional training and demonstrates significantly enhanced robustness across diverse large language models and under paraphrasing attacks. Notably, the approach incurs negligible computational overhead, making it practical for real-world deployment.

📝 Abstract
While machine-generated texts (MGTs) offer great convenience, they also pose risks such as disinformation and phishing, highlighting the need for reliable detection. Metric-based methods, which extract statistically distinguishable features of MGTs, are often more practical than complex model-based methods that are prone to overfitting. Given their diverse designs, we first place representative metric-based methods within a unified framework, enabling a clear assessment of their advantages and limitations. Our analysis identifies a core challenge across these methods: the token-level detection score is easily biased by the inherent randomness of the MGT generation process. To address this, we theoretically and empirically reveal two relationships among contextual detection scores that can aid calibration: Neighbor Similarity and Initial Instability. We then propose a Markov-informed score calibration strategy that models these relationships using Markov random fields and implements it as a lightweight component via a mean-field approximation, allowing our method to be seamlessly integrated into existing detectors. Extensive experiments in various real-world scenarios, such as cross-LLM detection and paraphrasing attacks, demonstrate significant gains over baselines with negligible computational overhead. The code is available at https://github.com/tmlr-group/MRF_Calibration.
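The calibration idea described in the abstract can be sketched roughly as follows: treat the token-level detection scores as nodes in a chain-structured Markov random field and run mean-field updates that pull each score toward its neighbors (Neighbor Similarity), while trusting the first few tokens less (Initial Instability). This is a minimal illustrative sketch, not the paper's actual implementation; the parameters `lam`, `n_iters`, `init_len`, and `init_weight` are assumptions introduced here for illustration.

```python
def mean_field_calibrate(scores, lam=0.5, n_iters=10, init_len=3, init_weight=0.5):
    """Smooth token-level detection scores on a chain MRF via mean-field updates.

    scores: raw per-token detection scores from any metric-based detector.
    lam: strength of the neighbor-similarity prior (illustrative value).
    init_len / init_weight: down-weight the evidence of the first tokens
    to reflect "initial instability" (illustrative values).
    """
    n = len(scores)
    s = list(scores)
    # Evidence weight per token: early tokens are trusted less.
    w = [init_weight if i < init_len else 1.0 for i in range(n)]
    for _ in range(n_iters):
        new = []
        for i in range(n):
            # Mean-field prior: average of the current neighbor estimates.
            nbrs = [s[j] for j in (i - 1, i + 1) if 0 <= j < n]
            prior = sum(nbrs) / len(nbrs)
            # Convex combination of the observed score and the neighbor prior.
            obs_w = w[i] * (1.0 - lam)
            new.append((obs_w * scores[i] + lam * prior) / (obs_w + lam))
        s = new
    return s
```

A calibrated sequence-level decision would then aggregate the smoothed scores (e.g., by averaging) exactly as the underlying detector already does, which is why a correction of this form can be dropped into existing pipelines without retraining.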
Problem

Research questions and friction points this paper is trying to address.

machine-generated text detection
detection score bias
metric-based methods
token-level calibration
text generation randomness
Innovation

Methods, ideas, or system contributions that make the work stand out.

Markov random fields
score calibration
machine-generated text detection
neighbor similarity
initial instability
Chenwang Wu
University of Science and Technology of China
Trustworthy Machine Learning, Data Mining
Yiu-ming Cheung
Department of Computer Science, Hong Kong Baptist University, Hong Kong, China
Shuhai Zhang
South China University of Technology
Computer Vision, Machine Learning
Bo Han
HKBU / RIKEN
Machine Learning, Deep Learning, Artificial Intelligence, Trustworthy Machine Learning
Defu Lian
School of Computer Science, University of Science and Technology of China, Hefei, China