🤖 AI Summary
This work addresses the real-time detection of high-stakes interactions (textual inputs or outputs that may lead to severe harm) in large language models (LLMs). Methodologically, it introduces lightweight activation probes, presenting the first systematic application of activation probing to high-stakes scenario identification. It establishes a resource-aware, hierarchical monitoring paradigm: an efficient initial screening layer employs linear or nonlinear binary classifiers trained on the LLM's intermediate-layer activations, with low-cost training enabled by synthetic data. The key contributions are: (1) strong cross-distribution generalization to real-world data; (2) detection performance competitive with fine-tuned or prompted medium-scale LLM monitors; and (3) roughly six orders of magnitude lower inference overhead. Together, these results make the safe deployment of LLMs in production environments considerably more feasible and scalable.
📝 Abstract
Monitoring is an important aspect of safely deploying Large Language Models (LLMs). This paper examines activation probes for detecting "high-stakes" interactions -- where the text indicates that the interaction might lead to significant harm -- as a critical, yet underexplored, target for such monitoring. We evaluate several probe architectures trained on synthetic data, and find them to exhibit robust generalization to diverse, out-of-distribution, real-world data. Probes' performance is comparable to that of prompted or fine-tuned medium-sized LLM monitors, while offering computational savings of six orders of magnitude. Our experiments also highlight the potential of building resource-aware hierarchical monitoring systems, where probes serve as an efficient initial filter and flag cases for more expensive downstream analysis. We release our novel synthetic dataset and codebase to encourage further study.
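To make the probing setup concrete, the following is a minimal sketch of a linear activation probe as a binary classifier. All names and data here are hypothetical: real features would be intermediate-layer activations extracted from an LLM on (synthetic) high-stakes and benign examples, whereas this sketch substitutes random vectors with a planted separating direction.

```python
# Hypothetical sketch of a linear activation probe.
# In the paper's setting, X would hold intermediate-layer LLM activations;
# here random vectors stand in for them.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d_model = 64   # assumed hidden size of the probed layer
n_half = 100   # examples per class

# Stand-in "activations": high-stakes examples are shifted along a
# random direction, mimicking a linearly decodable risk signal.
direction = rng.normal(size=d_model)
X_safe = rng.normal(size=(n_half, d_model))
X_risky = rng.normal(size=(n_half, d_model)) + 0.5 * direction
X = np.vstack([X_safe, X_risky])
y = np.array([0] * n_half + [1] * n_half)

# The probe itself: a single linear classifier, vastly cheaper at
# inference time than running another LLM as a monitor.
probe = LogisticRegression(max_iter=1000).fit(X, y)
scores = probe.predict_proba(X)[:, 1]  # per-example risk scores in [0, 1]
```

In a hierarchical monitoring system as described above, these scores would act as the cheap first-stage filter: only inputs whose score exceeds a chosen threshold are escalated to a more expensive LLM-based monitor.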