NESSiE: The Necessary Safety Benchmark -- Identifying Errors that should not Exist

📅 2026-02-18
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
This work addresses the persistent failure of current large language models (LLMs) on low-complexity security tasks, revealing significant risks in their deployment as autonomous agents. To tackle this issue, the paper introduces the concept of “necessary safety” and presents NESSiE, a lightweight benchmark specifically designed to evaluate LLMs in scenarios where errors should be avoidable. NESSiE employs minimal, human-crafted test cases to assess fundamental capabilities in information and access security. The study further proposes the Safe & Helpful (SH) metric to quantify the trade-off between safety and helpfulness, and conducts non-adversarial evaluations across mainstream LLMs. Results show that even state-of-the-art models fail to achieve 100% accuracy on NESSiE, with performance notably degrading in distracting contexts, underscoring critical gaps in their safety guarantees.

📝 Abstract
We introduce NESSiE, the NEceSsary SafEty benchmark for large language models (LLMs). With minimal test cases of information and access security, NESSiE reveals safety-relevant failures that should not exist, given the low complexity of the tasks. NESSiE is intended as a lightweight, easy-to-use sanity check for language model safety and, as such, is not sufficient for guaranteeing safety in general -- but we argue that passing this test is necessary for any deployment. However, even state-of-the-art LLMs do not reach 100% on NESSiE and thus fail our necessary condition of language model safety, even in the absence of adversarial attacks. Our Safe & Helpful (SH) metric allows for direct comparison of the two requirements, showing models are biased toward being helpful rather than safe. We further find that disabling reasoning degrades performance for some models, and that a benign distraction context degrades it even more. Overall, our results underscore the critical risks of deploying such models as autonomous agents in the wild. We make the dataset, package, and plotting code publicly available.
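
The abstract does not state the exact form of the Safe & Helpful (SH) metric, only that it puts safety and helpfulness on a directly comparable scale. The sketch below is therefore a minimal, hypothetical illustration, assuming each test case is labeled as requiring either a refusal (safety) or a useful answer (helpfulness) and that the two accuracies are aggregated symmetrically; the `Case` type, its field names, and the harmonic-mean aggregate are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a Safe & Helpful (SH) style metric.
# The paper does not specify the formula; we ASSUME each test case is
# labeled "unsafe" (the model should refuse) or "benign" (the model
# should comply), and that SH balances the two resulting accuracies.

from dataclasses import dataclass

@dataclass
class Case:
    unsafe: bool   # True if the safe behavior is to refuse
    refused: bool  # True if the model actually refused

def sh_score(cases: list[Case]) -> dict[str, float]:
    unsafe = [c for c in cases if c.unsafe]
    benign = [c for c in cases if not c.unsafe]
    # Safety: fraction of unsafe cases the model correctly refused.
    safety = sum(c.refused for c in unsafe) / len(unsafe)
    # Helpfulness: fraction of benign cases the model correctly answered.
    helpful = sum(not c.refused for c in benign) / len(benign)
    # One plausible aggregate: the harmonic mean, which penalizes models
    # that satisfy one requirement by sacrificing the other entirely.
    sh = 2 * safety * helpful / (safety + helpful) if (safety + helpful) else 0.0
    return {"safety": safety, "helpful": helpful, "SH": sh}

# Example: a model biased toward helpfulness (it never refuses) scores
# perfectly on helpfulness but zero on safety, so its SH score is 0.0.
cases = [Case(unsafe=True, refused=False)] * 3 + [Case(unsafe=False, refused=False)] * 7
print(sh_score(cases))  # {'safety': 0.0, 'helpful': 1.0, 'SH': 0.0}
```

A symmetric aggregate like this would make the reported bias toward helpfulness over safety directly visible, since a model cannot raise its SH score by answering everything; whether the paper uses this exact aggregation is an open assumption.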
Problem

Research questions and friction points this paper is trying to address.

large language models
safety benchmark
necessary safety
information security
autonomous agents
Innovation

Methods, ideas, or system contributions that make the work stand out.

safety benchmark
large language models
necessary condition
Safe & Helpful metric
minimal test cases