🤖 AI Summary
This work investigates whether multi-feature fusion significantly improves detection of large language model (LLM)-generated text. Addressing the unverified hypothesis that semantic, syntactic, and statistical features provide complementary signals, we propose MHFD—a multi-hierarchical feature detection framework integrating DeBERTa-based semantic representations, dependency syntactic structures, and n-gram statistical probabilities, coupled with an adaptive fusion strategy. Systematic evaluation across multiple benchmarks reveals that state-of-the-art neural detectors already efficiently capture dominant discriminative cues: multi-feature fusion yields only marginal accuracy gains (0.4–2.6 percentage points), achieving 89.7% in-domain and 84.2% cross-domain accuracy, while increasing computational overhead by 4.2×. To our knowledge, this is the first study to quantitatively characterize the diminishing returns and high cost of multi-feature approaches for LLM-generated text detection—providing empirical grounding and directional guidance for developing lightweight, efficient detection paradigms.
📝 Abstract
With the rapid advancement of large language model technology, there is growing interest in whether multi-feature approaches can significantly improve AI text detection beyond what single neural models achieve. While intuition suggests that combining semantic, syntactic, and statistical features should provide complementary signals, this assumption has not been rigorously tested with modern LLM-generated text. This paper provides a systematic empirical investigation of multi-hierarchical feature integration for AI text detection, specifically testing whether the computational overhead of combining multiple feature types is justified by performance gains. We implement MHFD (Multi-Hierarchical Feature Detection), integrating DeBERTa-based semantic analysis, syntactic parsing, and statistical probability features through adaptive fusion. Our investigation reveals important negative results: despite theoretical expectations, multi-feature integration provides minimal benefits (0.4–0.5% improvement) while incurring substantial computational costs (4.2× overhead), suggesting that modern neural language models may already capture most relevant detection signals efficiently. Experimental results on multiple benchmark datasets demonstrate that the MHFD method achieves 89.7% accuracy in in-domain detection and maintains a stable 84.2% in cross-domain detection, modest improvements of 0.4–2.6% over existing methods.
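The adaptive fusion step described above can be illustrated with a minimal sketch. This is a hypothetical reconstruction, not the paper's implementation: the function name, the softmax gating over per-branch logits, and the assumption that all three feature branches are projected to a shared dimension are illustrative choices.

```python
import numpy as np

def adaptive_fusion(semantic, syntactic, statistical, gate_logits):
    """Fuse three feature vectors with softmax-normalized adaptive weights.

    Hypothetical sketch of MHFD-style fusion: in the paper the weights
    would be learned; here they are passed in as fixed gate logits.
    """
    feats = [np.asarray(semantic, dtype=float),
             np.asarray(syntactic, dtype=float),
             np.asarray(statistical, dtype=float)]
    # Softmax over the gate logits yields one weight per feature branch.
    logits = np.asarray(gate_logits, dtype=float)
    w = np.exp(logits - logits.max())
    w /= w.sum()
    # Weighted sum assumes the branches share a common feature dimension
    # (e.g., each projected to the same hidden size before fusion).
    return sum(wi * f for wi, f in zip(w, feats))

# With equal logits, each branch contributes one third of the fused vector.
fused = adaptive_fusion([3.0, 0.0], [0.0, 3.0], [3.0, 3.0], [0.0, 0.0, 0.0])
```

A gated sum like this keeps the fused representation at the same dimensionality as each branch; concatenation followed by a linear layer is a common alternative when branch dimensions differ.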