🤖 AI Summary
AI-generated text detection suffers from poor cross-domain and cross-model generalization. Method: This paper proposes a lightweight detection approach based on modeling token-level next-token distribution metrics from large language models (LLMs). It introduces a perplexity-attention weighting mechanism that uses the LLM's last hidden states and token positions to weight per-token distributional features, including entropy, perplexity, and top-k confidence, according to each token's prediction difficulty, replacing conventional mean pooling with a learned weighted sum. Because the hidden states and metrics can be cached to disk, training overhead is greatly reduced: only a lightweight fully connected network is trained, with roughly 1/10 to 1/30 of the trainable parameters of state-of-the-art fine-tuned models. Contribution/Results: Evaluated in cross-validation over nine languages, it achieves a mean macro-F1 score of 81.46%, and it improves generalization to unseen domains and source models, multilingual robustness, and robustness to adversarial attacks, with smaller variability in the decision boundary across distribution shifts.
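To make the feature pipeline concrete, here is a minimal sketch (PyTorch/Hugging Face, not the authors' code) of extracting the per-token next-token-distribution metrics named in the summary, along with the cacheable last hidden states. The `gpt2` backbone and the top-k value are placeholder choices for a self-contained example; the paper's experiments use backbones such as LLaMA3-1B, and the function name `token_metrics` is ours.

```python
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder backbone; the paper uses LLMs such as LLaMA3-1B.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

@torch.no_grad()
def token_metrics(text: str, k: int = 5):
    ids = tokenizer(text, return_tensors="pt").input_ids           # (1, T)
    out = model(ids, output_hidden_states=True)
    # Logits at position t predict token t+1; align them with the targets.
    logits = out.logits[:, :-1, :]                                 # (1, T-1, V)
    targets = ids[:, 1:]                                           # (1, T-1)
    log_probs = F.log_softmax(logits, dim=-1)
    probs = log_probs.exp()

    nll = -log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1) # per-token NLL
    perplexity = nll.exp()                                         # per-token perplexity
    entropy = -(probs * log_probs).sum(-1)                         # next-token entropy
    topk_conf = probs.topk(k, dim=-1).values.sum(-1)               # top-k probability mass

    hidden = out.hidden_states[-1][:, :-1, :]                      # last hidden states
    feats = torch.stack([nll, perplexity, entropy, topk_conf], dim=-1)
    return hidden, feats                                           # both cacheable to disk
```

Because both outputs depend only on the frozen backbone, they can be computed once per text and cached, which is what keeps the trainable part of the method lightweight.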
📝 Abstract
The rapid advancement of large language models (LLMs) has significantly enhanced their ability to generate coherent and contextually relevant text, raising concerns about the misuse of AI-generated content and making its detection critical. However, the task remains challenging, particularly in unseen domains or with unfamiliar LLMs. Leveraging LLM next-token distribution outputs offers a theoretically appealing approach for detection, as they encapsulate insights from the models' extensive pre-training on diverse corpora. Despite its promise, zero-shot methods that attempt to operationalize these outputs have met with limited success. We hypothesize that one problem is that they use the mean to aggregate next-token distribution metrics across tokens, when some tokens are naturally easier or harder to predict and should be weighted differently. Based on this idea, we propose the Perplexity Attention Weighted Network (PAWN), which uses the last hidden states of the LLM and token positions to weight a sum of features derived from next-token distribution metrics across the sequence length. Although not zero-shot, our method allows us to cache the last hidden states and next-token distribution metrics on disk, greatly reducing training resource requirements. PAWN shows competitive, and even better, in-distribution performance than the strongest baselines (fine-tuned LMs) with a fraction of their trainable parameters. Our model also generalizes better to unseen domains and source models, with smaller variability in the decision boundary across distribution shifts. It is also more robust to adversarial attacks, and if the backbone has multilingual capabilities, it generalizes decently to languages not seen during supervised training, with LLaMA3-1B reaching a mean macro-averaged F1 score of 81.46% in cross-validation over nine languages.
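As a rough illustration of the weighting idea the abstract describes, the sketch below scores each token from the backbone's last hidden state plus a positional embedding, turns the scores into attention weights, and uses them to pool the per-token metric features in place of mean pooling. The layer sizes, the softmax attention form, and the module names are our assumptions for illustration, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class PerplexityAttentionWeightedNet(nn.Module):
    """Illustrative PAWN-style head: per-token attention weights computed
    from cached hidden states + positions, applied to distribution features."""

    def __init__(self, hidden_dim: int, n_feats: int, max_len: int = 2048):
        super().__init__()
        self.pos_emb = nn.Embedding(max_len, hidden_dim)
        self.scorer = nn.Linear(hidden_dim, 1)            # one score per token
        self.classifier = nn.Sequential(                  # lightweight FC head
            nn.Linear(n_feats, 64), nn.ReLU(), nn.Linear(64, 1)
        )

    def forward(self, hidden, feats, mask):
        # hidden: (B, T, H) cached last hidden states; feats: (B, T, F)
        # per-token metrics; mask: (B, T), 1 for real tokens, 0 for padding.
        pos = torch.arange(hidden.size(1), device=hidden.device)
        scores = self.scorer(hidden + self.pos_emb(pos)).squeeze(-1)   # (B, T)
        scores = scores.masked_fill(mask == 0, float("-inf"))
        weights = torch.softmax(scores, dim=-1)                        # attention over tokens
        pooled = (weights.unsqueeze(-1) * feats).sum(dim=1)            # weighted sum, not mean
        return self.classifier(pooled)                                 # logit: AI vs. human
```

Only this small head is trained; the expensive backbone forward pass is needed just once per text to produce the cached hidden states and features.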