🤖 AI Summary
This work addresses the limitations of existing proxy alignment methods for black-box, zero-shot detection of large language model (LLM) generated text, methods that typically rely on supervised fine-tuning or frequent API calls and therefore incur high deployment costs and limited robustness. To overcome these challenges, the authors propose $k$NNProxy, a novel framework that introduces $k$-nearest neighbor language models ($k$NN-LMs) into proxy alignment. By constructing a lightweight datastore once and interpolating nearest-neighbor evidence with the output distribution of a fixed proxy model, $k$NNProxy achieves efficient alignment without any training or per-token API queries. Furthermore, the paper presents a mixture-of-proxies (MoP) architecture for domain transfer that routes each input to a domain-specific datastore. Experimental results demonstrate that the proposed approach significantly improves detection performance across various black-box and cross-domain settings while maintaining low query cost, high efficiency, and strong robustness.
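As a sketch of the core mechanism, following the standard $k$NN-LM interpolation that the paper repurposes (the paper's exact formulation and hyperparameters may differ), the aligned next-token distribution mixes retrieved evidence with the fixed proxy:

$$
p_{\text{aligned}}(w_t \mid c_t) = \lambda \, p_{k\text{NN}}(w_t \mid c_t) + (1 - \lambda) \, p_{\text{proxy}}(w_t \mid c_t),
$$

where $p_{k\text{NN}}$ places probability mass on the next tokens stored with the $k$ retrieved datastore entries, weighted by a softmax over negative distances between the current context representation and the stored keys, and $\lambda \in [0, 1]$ is an interpolation weight.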
📝 Abstract
LLM-generated text (LGT) detection is essential for reliable forensic analysis and for mitigating LLM misuse. Existing LGT detectors fall into two broad classes: learning-based approaches and zero-shot methods. Compared with learning-based detectors, zero-shot methods are particularly promising because they eliminate the need to train task-specific classifiers. However, their reliability fundamentally depends on the assumption that an off-the-shelf proxy LLM is well aligned with the often unknown source LLM, a premise that rarely holds in real-world black-box scenarios. To address this mismatch, existing proxy alignment methods typically rely on supervised fine-tuning of the proxy or repeated interactions with commercial APIs, thereby increasing deployment costs, exposing detectors to silent API changes, and limiting robustness under domain shift. Motivated by these limitations, we propose the $k$-nearest neighbor proxy ($k$NNProxy), a training-free and query-efficient proxy alignment framework that repurposes the $k$NN language model ($k$NN-LM) retrieval mechanism as a domain adapter for a fixed proxy LLM. Specifically, a lightweight datastore is constructed once from a target-reflective LGT corpus, either via fixed-budget querying or from existing datasets. During inference, nearest-neighbor evidence induces a token-level predictive distribution that is interpolated with the proxy's output, yielding an aligned prediction without proxy fine-tuning or per-token API queries. To improve robustness under domain shift, we extend $k$NNProxy into a mixture of proxies (MoP) that routes each input to a domain-specific datastore for domain-consistent retrieval. Extensive experiments demonstrate the strong detection performance of our method.
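A minimal sketch of how such retrieval-based alignment could work in practice, assuming the standard $k$NN-LM recipe (a datastore mapping context-embedding keys to observed next-token values, L2 distance, softmax weighting over neighbors). All function names, hyperparameters, the brute-force search, and the centroid-based router below are illustrative assumptions, not the paper's implementation; in particular, the abstract does not specify how MoP routing works.

```python
import numpy as np

def knn_distribution(query, keys, values, vocab_size, k=8, temperature=1.0):
    """Turn the k nearest datastore entries into a token-level distribution
    via a softmax over negative L2 distances (standard kNN-LM recipe)."""
    dists = np.linalg.norm(keys - query, axis=1)   # distance to every stored key
    idx = np.argsort(dists)[:k]                    # indices of the k nearest neighbors
    weights = np.exp(-dists[idx] / temperature)
    weights /= weights.sum()
    p_knn = np.zeros(vocab_size)
    for w, tok in zip(weights, values[idx]):
        p_knn[tok] += w                            # aggregate mass per next token
    return p_knn

def aligned_distribution(p_proxy, p_knn, lam=0.25):
    """Interpolate the fixed proxy's distribution with the kNN distribution;
    lam is an illustrative interpolation weight."""
    return lam * p_knn + (1.0 - lam) * p_proxy

def route_to_datastore(query, datastores):
    """Toy MoP-style router: pick the datastore whose key centroid is closest
    to the input representation (a simple stand-in for the paper's router)."""
    centroids = [ds["keys"].mean(axis=0) for ds in datastores]
    gaps = [np.linalg.norm(query - c) for c in centroids]
    return datastores[int(np.argmin(gaps))]

# Toy usage with random data: in practice, keys would be proxy hidden states
# computed over a target-reflective LGT corpus and values the observed
# next-token ids.
rng = np.random.default_rng(0)
store = {"keys": rng.normal(size=(1000, 64)),
         "values": rng.integers(0, 50, size=1000)}
query = rng.normal(size=64)                        # current context representation
p_proxy = np.full(50, 1.0 / 50)                    # placeholder proxy output
ds = route_to_datastore(query, [store])
p = aligned_distribution(
    p_proxy, knn_distribution(query, ds["keys"], ds["values"], vocab_size=50))
```

Because the datastore is built once and the proxy stays frozen, the per-input cost at inference is a nearest-neighbor lookup plus a weighted sum, which is consistent with the training-free, query-efficient design the abstract describes.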