PhantomHunter: Detecting Unseen Privately-Tuned LLM-Generated Text via Family-Aware Learning

📅 2025-06-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
Problem: Text generated by privately fine-tuned large language models (e.g., customized variants of LLaMA, Gemma, or Mistral) evades existing watermarking- or fingerprinting-based detectors, because such private variants diverge from the signatures of their public base models. Method: We propose a family-aware learning framework that models shared, coarse-grained "family-level" features across base models and their private variants, bypassing reliance on instance-specific model fingerprints and enabling zero-shot generalization to unseen fine-tuned models. The approach jointly employs contrastive learning and hierarchical representation distillation to disentangle family-invariant features, augmented by multi-base-model co-training and distributionally robust optimization to enhance cross-family generalization. Contribution/Results: Evaluated on three major LLM families, the method achieves >96% F1-score, significantly outperforming seven academic baselines and three industrial detection services, and establishing the first zero-shot, family-aware detector for private LLM outputs.
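The family-aware contrastive idea described above can be illustrated with a minimal sketch: embeddings of texts from the same base-model family are treated as positives and pulled together, while texts from other families act as negatives. This is a generic supervised-contrastive formulation over family labels, not the paper's exact objective; the function name and temperature value are assumptions for illustration.

```python
import numpy as np

def family_contrastive_loss(embeddings, family_ids, temperature=0.1):
    """Supervised contrastive loss over family labels (illustrative sketch).

    Texts generated by models from the same base-model family are
    positives for each other; everything else is a negative.
    """
    # L2-normalize so that dot products are cosine similarities.
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / temperature
    n = len(family_ids)
    np.fill_diagonal(sim, -np.inf)  # exclude self-similarity
    # Log-softmax over each row's similarities to all other samples.
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    family_ids = np.asarray(family_ids)
    losses = []
    for i in range(n):
        pos = family_ids == family_ids[i]
        pos[i] = False  # a sample is not its own positive
        if pos.any():
            # Average negative log-probability over same-family positives.
            losses.append(-log_prob[i, pos].mean())
    return float(np.mean(losses))
```

With correctly grouped family labels, same-family embeddings that already lie close together yield a lower loss than a mismatched labeling, which is the signal the detector's encoder is trained to amplify.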

📝 Abstract
With the popularity of large language models (LLMs), undesirable societal problems like misinformation production and academic misconduct have become more severe, making LLM-generated text detection more important than ever. Although existing methods have made remarkable progress, a new challenge posed by text from privately tuned LLMs remains underexplored. Users can easily obtain private LLMs by fine-tuning an open-source one on private corpora, causing a significant performance drop in existing detectors in practice. To address this issue, we propose PhantomHunter, an LLM-generated text detector specialized for detecting text from unseen, privately-tuned LLMs. Its family-aware learning framework captures family-level traits shared across base models and their derivatives, instead of memorizing individual characteristics. Experiments on data from the LLaMA, Gemma, and Mistral families show its superiority over 7 baselines and 3 industrial services, with F1 scores of over 96%.
Problem

Research questions and friction points this paper is trying to address.

Detect text from unseen privately-tuned LLMs
Address performance drop in existing detectors
Capture family-level traits across base models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Family-aware learning framework that captures family-level traits shared by base models and their fine-tuned derivatives
Generalizes zero-shot to unseen privately-tuned models without per-model fingerprints
Outperforms 7 academic baselines and 3 industrial detection services with F1 scores over 96%