AI Summary
This work addresses the limited reliability of language models in out-of-distribution (OOD) text detection by proposing AP-OOD, a novel method that replaces conventional average pooling with attention-based pooling to dynamically weight token embeddings and produce more discriminative OOD scores. AP-OOD introduces a flexible semi-supervised framework that seamlessly interpolates between unsupervised and supervised settings and effectively leverages a small amount of auxiliary anomalous data. Experimental results demonstrate substantial improvements in detection performance: on the XSUM summarization task, the false positive rate at 95% true positive rate (FPR95) drops from 27.84% to 4.67%, and on the WMT15 English-French translation task, it decreases from 77.08% to 70.37%, establishing new state-of-the-art results in textual OOD detection.
Abstract
Out-of-distribution (OOD) detection, which maps high-dimensional data into a scalar OOD score, is critical for the reliable deployment of machine learning models. A key challenge in recent research is how to effectively leverage and aggregate token embeddings from language models to obtain the OOD score. In this work, we propose AP-OOD, a novel OOD detection method for natural language that goes beyond simple average-based aggregation by exploiting token-level information. AP-OOD is a semi-supervised approach that flexibly interpolates between unsupervised and supervised settings, enabling the use of limited auxiliary outlier data. Empirically, AP-OOD sets a new state of the art in OOD detection for text: in the unsupervised setting, it reduces the FPR95 (false positive rate at 95% true positives) from 27.84% to 4.67% on XSUM summarization, and from 77.08% to 70.37% on WMT15 En-Fr translation.
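To make the core idea concrete, here is a minimal sketch of attention-based pooling contrasted with plain averaging. This is an illustrative simplification, not the paper's implementation: the learned query vector, the softmax weighting, and the distance-based OOD score below are all assumptions standing in for AP-OOD's actual parameterization.

```python
import numpy as np

def attention_pool(token_embeddings, query):
    """Aggregate token embeddings with attention weights instead of a plain mean.

    token_embeddings: (T, d) array of per-token representations.
    query: (d,) vector scoring token relevance (hypothetical; AP-OOD's
    actual attention mechanism may differ).
    Returns a (d,) pooled sentence embedding.
    """
    scores = token_embeddings @ query          # (T,) unnormalized attention logits
    scores = scores - scores.max()             # numerical stability for softmax
    weights = np.exp(scores) / np.exp(scores).sum()
    return weights @ token_embeddings          # weighted sum over tokens

def ood_score(pooled, in_dist_mean):
    """Toy OOD score: distance of the pooled embedding from the
    in-distribution training mean (a stand-in for the paper's scoring rule)."""
    return float(np.linalg.norm(pooled - in_dist_mean))
```

Note that with an all-zero query the softmax weights are uniform and the pooling reduces exactly to average pooling, which is one way to see how a learned query generalizes the average-based baseline.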