How does information access affect LLM monitors'ability to detect sabotage?

📅 2026-01-28
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study addresses the unclear relationship between the amount of input information and the effectiveness of large language models (LLMs) in detecting alignment risks—such as sabotage behaviors—during monitoring. It reveals a “less-is-more” effect in automated supervision: reducing input information can paradoxically enhance detection performance. To leverage this insight, the authors propose Extract-and-Evaluate (EaE), a hierarchical monitoring framework that decouples information extraction from risk evaluation to optimize the detection pipeline. Evaluated on the BigCodeBench-Sabotage benchmark, EaE improves sabotage detection rates by 16.8 percentage points without increasing false positives. It consistently matches or outperforms baseline methods across diverse settings, and when integrated with information filtering strategies from Gemini and Claude, it achieves Pareto-optimal trade-offs between cost and performance.

Technology Category

Application Category

📝 Abstract
Frontier language model agents can exhibit misaligned behaviors, including deception, exploiting reward hacks, and pursuing hidden objectives. To control potentially misaligned agents, we can use LLMs themselves to monitor for misbehavior. In this paper, we study how information access affects LLM monitor performance. While one might expect that monitors perform better when they have access to more of the monitored agents'reasoning and actions, we find that contemporary systems often perform better with less information, a phenomenon we call the less-is-more effect for automated oversight. We demonstrate this phenomenon, and analyze the conditions under which it occurs, in three evaluation environments where agents must conduct sabotage while evading monitors. Leveraging this effect, we introduce extract-and-evaluate (EaE) monitoring -- a new hierarchical approach where one monitor isolates relevant excerpts from the monitored agent's trajectory and another monitor scores these excerpts. In BigCodeBench-Sabotage with a GPT-4.1-mini monitor, EaE improves sabotage detection rates by 16.8 percentage points over the next-best approach. In other settings, EaE either outperforms or is competitive with baselines. In addition, we find that agents unaware of being monitored can be caught much more easily and that monitors scoring longer reasoning traces gain more from information filtering. Lastly, we conduct a cost-performance analysis and find that Gemini and Claude models with monitoring techniques that involve information filtering occupy much of the Pareto frontier.
Problem

Research questions and friction points this paper is trying to address.

information access
LLM monitoring
sabotage detection
less-is-more effect
misaligned agents
Innovation

Methods, ideas, or system contributions that make the work stand out.

less-is-more effect
extract-and-evaluate (EaE)
LLM monitoring
sabotage detection
information filtering
🔎 Similar Papers
No similar papers found.
R
Rauno Arike
Aether Research, Vector Institute, Trajectory Labs
R
Raja Mehta Moreno
Imperial College London
Rohan Subramani
Rohan Subramani
University of Toronto
AI safetyLLM agents
S
Shubhorup Biswas
Aether Research
Francis Rhys Ward
Francis Rhys Ward
Imperial College London
AI alignmentdeceptionsafety evaluations