Dynamic Evaluation for Oversensitivity in LLMs

📅 2025-10-21
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Large language models (LLMs) frequently exhibit “over-sensitivity”—defensively refusing harmless inputs—degrading user experience and obscuring safety boundaries. Static evaluation benchmarks suffer from data contamination and poor generalization, limiting their reliability. Method: We propose OVERBENCH, the first dynamic evaluation paradigm: it leverages model-generated feedback to iteratively synthesize test cases, enabling real-time tracking of sensitivity boundary evolution; integrates behavior-aligned sampling and cross-model diversification to construct a model-specific benchmark covering 25 mainstream LLMs and 450K samples. Contribution/Results: Our experiments provide the first systematic empirical evidence of pervasive over-sensitivity across leading LLMs. The dynamic framework significantly outperforms static baselines in detecting data contamination, behavioral drift, and assessing robustness—demonstrating superior adaptability, fidelity, and diagnostic capability for safety evaluation.

Technology Category

Application Category

📝 Abstract
Oversensitivity occurs when language models defensively reject prompts that are actually benign. This behavior not only disrupts user interactions but also obscures the boundary between harmful and harmless content. Existing benchmarks rely on static datasets that degrade overtime as models evolve, leading to data contamination and diminished evaluative power. To address this, we develop a framework that dynamically generates model-specific challenging datasets, capturing emerging defensive patterns and aligning with each model's unique behavior. Building on this approach, we construct OVERBENCH, a benchmark that aggregates these datasets across diverse LLM families, encompassing 450,000 samples from 25 models. OVERBENCH provides a dynamic and evolving perspective on oversensitivity, allowing for continuous monitoring of defensive triggers as models advance, highlighting vulnerabilities that static datasets overlook.
Problem

Research questions and friction points this paper is trying to address.

Addresses language models defensively rejecting benign prompts
Overcomes static dataset limitations with dynamic evaluation framework
Identifies oversensitivity vulnerabilities missed by existing benchmarks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic framework generates model-specific challenging datasets
OVERBENCH benchmark aggregates datasets across diverse LLM families
Continuous monitoring captures emerging defensive patterns in models