🤖 AI Summary
Large language models (LLMs) suffer from over-refusal—erroneously rejecting benign queries as harmful—severely undermining their reliability and usability. Existing testing methods are limited by insufficient benchmark coverage and weak test-case generation capability. To address this, we propose ORFuzz, the first evolutionary testing framework specifically designed for over-refusal detection. ORFuzz introduces three key innovations: (1) safety-category-aware seed selection, (2) an LLM-driven adaptive mutation mechanism, and (3) OR-Judge, a robust evaluation model calibrated to human judgment. Extensive experiments across 10 mainstream LLMs show that ORFuzz achieves a mean over-refusal triggering rate of 63.56%, substantially outperforming baseline approaches. Furthermore, the generated ORFuzzSet benchmark comprises 1,855 high-transferability test cases, attaining a mean over-refusal trigger rate of 6.98%—more than double that of existing baselines.
📝 Abstract
Large Language Models (LLMs) increasingly exhibit over-refusal - erroneously rejecting benign queries due to overly conservative safety measures - a critical functional flaw that undermines their reliability and usability. Current methods for testing this behavior are demonstrably inadequate, suffering from flawed benchmarks and limited test generation capabilities, as highlighted by our empirical user study. To the best of our knowledge, this paper introduces the first evolutionary testing framework, ORFuzz, for the systematic detection and analysis of LLM over-refusals. ORFuzz uniquely integrates three core components: (1) safety category-aware seed selection for comprehensive test coverage, (2) adaptive mutator optimization using reasoning LLMs to generate effective test cases, and (3) OR-Judge, a human-aligned judge model validated to accurately reflect user perception of toxicity and refusal. Our extensive evaluations demonstrate that ORFuzz generates diverse, validated over-refusal instances at a rate (6.98% average) more than double that of leading baselines, effectively uncovering vulnerabilities. Furthermore, ORFuzz's outputs form the basis of ORFuzzSet, a new benchmark of 1,855 highly transferable test cases that achieves a superior 63.56% average over-refusal rate across 10 diverse LLMs, significantly outperforming existing datasets. ORFuzz and ORFuzzSet provide a robust automated testing framework and a valuable community resource, paving the way for developing more reliable and trustworthy LLM-based software systems.