🤖 AI Summary
This work addresses the susceptibility of existing metric-based machine-generated text detectors to stochasticity in generation, which introduces bias in token-level scores. The study presents the first systematic analysis of the sources of such bias and proposes a lightweight calibration mechanism grounded in Markov random fields. By modeling the relationship between local contextual similarity and initial generation instability, the method performs context-aware correction of detection scores. It seamlessly integrates into existing detectors without requiring additional training and demonstrates significantly enhanced robustness across diverse large language models and under paraphrasing attacks. Notably, the approach incurs negligible computational overhead, making it practical for real-world deployment.
📝 Abstract
While machine-generated texts (MGTs) offer great convenience, they also pose risks such as disinformation and phishing, highlighting the need for reliable detection. Metric-based methods, which extract statistically distinguishable features of MGTs, are often more practical than complex model-based methods that are prone to overfitting. Given their diverse designs, we first place representative metric-based methods within a unified framework, enabling a clear assessment of their advantages and limitations. Our analysis identifies a core challenge across these methods: the token-level detection score is easily biased by the inherent randomness of the MGT generation process. To address this, we theoretically and empirically reveal two relationships among contextual detection scores that may aid calibration: Neighbor Similarity and Initial Instability. We then propose a Markov-informed score calibration strategy that models these relationships using Markov random fields and implements it as a lightweight component via a mean-field approximation, allowing our method to be seamlessly integrated into existing detectors. Extensive experiments in various real-world scenarios, such as cross-LLM detection and paraphrasing attacks, demonstrate significant gains over baselines with negligible computational overhead. The code is available at https://github.com/tmlr-group/MRF_Calibration.
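To make the calibration idea concrete, the sketch below shows one way a mean-field-style smoothing over a chain MRF could adjust token-level detection scores, encouraging agreement with neighboring tokens (Neighbor Similarity) and downweighting noisy early tokens (Initial Instability). This is a minimal illustration under our own assumptions, not the paper's actual formulation: the function name, the update rule, the exponential position weighting, and all parameter values are hypothetical.

```python
import numpy as np

def mean_field_calibrate(scores, n_iters=10, alpha=0.5, beta=0.3):
    """Illustrative sketch of MRF-based score calibration (not the paper's
    exact method). `alpha` weights agreement with neighboring token scores;
    `beta` controls how strongly the earliest tokens are downweighted."""
    s = np.asarray(scores, dtype=float)
    q = s.copy()
    # Initial Instability: early tokens get lower weight, rising toward 1.
    pos = np.arange(len(s))
    weight = 1.0 - beta * np.exp(-pos)
    for _ in range(n_iters):
        # Neighbor Similarity: mean-field update toward the average of the
        # left and right neighbors on a chain MRF (edges clamped to self).
        left = np.roll(q, 1); left[0] = q[0]
        right = np.roll(q, -1); right[-1] = q[-1]
        q = (1 - alpha) * s + alpha * 0.5 * (left + right)
    # Blend unstable early positions toward the sequence mean.
    return weight * q + (1 - weight) * q.mean()

raw = [0.9, 0.2, 0.8, 0.7, 0.75, 0.1, 0.8]
print(mean_field_calibrate(raw))
```

Because each update is a convex combination of the raw scores, the calibrated scores stay within the original range while isolated outliers are pulled toward their context, which matches the intuition that a single token's score should not dominate the sequence-level decision.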