🤖 AI Summary
This work addresses an operational safety deficiency of large language models (LLMs): their inability to reliably detect and refuse off-topic user requests that fall outside their intended task boundaries. It formally introduces the concept of *operational safety* and proposes OffTopicEval, an evaluation suite and benchmark for measuring operational safety in real-world, task-oriented settings. A comprehensive assessment of 20 open-weight LLMs spanning six model families shows that every model falls below 80% operational safety accuracy, indicating substantial risk of responding to off-topic requests. To mitigate this, the work introduces two prompt-based steering methods, query grounding (Q-ground) and system-prompt grounding (P-ground), which substantially strengthen refusal of off-topic queries and yield gains of up to 41% in safety accuracy. Together, the formal definition, the benchmark, and the prompt-based mitigation close a critical gap in safety evaluation for task-constrained LLM deployment.
📝 Abstract
Large Language Model (LLM) safety is one of the most pressing challenges for enabling wide-scale deployment. While most studies and global discussions focus on generic harms, such as models assisting users in harming themselves or others, enterprises face a more fundamental concern: whether LLM-based agents are safe for their intended use case. To address this, we introduce operational safety, defined as an LLM's ability to appropriately accept or refuse user queries when tasked with a specific purpose. We further propose OffTopicEval, an evaluation suite and benchmark for measuring operational safety both in general and within specific agentic use cases. Our evaluations on six model families comprising 20 open-weight LLMs reveal that while performance varies across models, all of them remain highly operationally unsafe. Even the strongest models -- Qwen-3 (235B) with 77.77% and Mistral (24B) with 79.96% -- fall far short of reliable operational safety, while GPT models plateau in the 62--73% range, Phi achieves only mid-level scores (48--70%), and Gemma and Llama-3 collapse to 39.53% and 23.84%, respectively. While operational safety is a core model alignment issue, to suppress these failures, we propose prompt-based steering methods: query grounding (Q-ground) and system-prompt grounding (P-ground), which substantially improve OOD refusal. Q-ground provides consistent gains of up to 23%, while P-ground delivers even larger boosts, raising Llama-3.3 (70B) by 41% and Qwen-3 (30B) by 27%. These results highlight both the urgent need for operational safety interventions and the promise of prompt-based steering as a first step toward more reliable LLM-based agents.
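The abstract does not spell out the exact prompt templates behind Q-ground and P-ground. The sketch below is a minimal, illustrative interpretation only, assuming P-ground amounts to appending an explicit stay-on-task/refusal clause to the agent's system prompt and Q-ground amounts to wrapping the user query with an instruction to first judge whether it falls within the agent's task scope. All function names, the `chat` callable, and the prompt wording are hypothetical, not the authors' actual templates.

```python
from typing import Callable, Dict, List

Message = Dict[str, str]
ChatFn = Callable[[List[Message]], str]  # any chat-completion backend

# Hypothetical P-ground: augment the agent's original system prompt with an
# explicit instruction to refuse requests outside the stated task.
P_GROUND_SUFFIX = (
    "\n\nOnly answer requests that fall within the task described above. "
    "If a request is off-topic, politely refuse and restate your purpose."
)

def p_ground(system_prompt: str) -> str:
    """Return the system prompt with an off-topic refusal clause appended."""
    return system_prompt + P_GROUND_SUFFIX

# Hypothetical Q-ground: wrap the user query so the model first grounds it
# against its assigned task before deciding whether to answer or refuse.
def q_ground(user_query: str) -> str:
    """Return the user query wrapped with an in-scope check instruction."""
    return (
        "Before answering, decide whether the following request is within "
        "your assigned task. If it is not, refuse.\n\nRequest: " + user_query
    )

def answer_with_steering(chat: ChatFn, system_prompt: str, user_query: str) -> str:
    """Run one turn with both steering methods applied (illustrative only)."""
    messages: List[Message] = [
        {"role": "system", "content": p_ground(system_prompt)},
        {"role": "user", "content": q_ground(user_query)},
    ]
    return chat(messages)
```

Any chat-completion backend can be passed in as `chat`; an evaluation harness in the spirit of OffTopicEval would then score whether the returned text appropriately accepts in-scope queries and refuses off-topic ones.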