SafeRBench: A Comprehensive Benchmark for Safety Assessment in Large Reasoning Models

📅 2025-11-19
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
While explicit chain-of-thought (CoT) reasoning in Large Reasoning Models (LRMs) improves answer quality, it introduces novel safety risks: malicious content can be stealthily injected, progressively revealed, or rationalized through misleading reasoning steps. Existing safety evaluations focus primarily on final outputs and thus fail to capture dynamic hazards that emerge during intermediate reasoning stages. Method: We propose the first end-to-end safety benchmark for LRMs, integrating an input-risk gradient design with semantically coherent micro-thought chunking to enable fine-grained tracking of progressively harmful content within reasoning chains. We further construct a ten-dimensional safety annotation schema and employ an LLM-based evaluation framework validated against human annotators to ensure assessment reliability. Results: Experiments across 19 state-of-the-art LRMs reveal stage-specific risk distributions and heterogeneous defense capabilities throughout the reasoning process, establishing a reproducible, scalable evaluation paradigm for LRM safety research.


๐Ÿ“ Abstract
Large Reasoning Models (LRMs) improve answer quality through explicit chain-of-thought, yet this very capability introduces new safety risks: harmful content can be subtly injected, surface gradually, or be justified by misleading rationales within the reasoning trace. Existing safety evaluations, however, primarily focus on output-level judgments and rarely capture these dynamic risks along the reasoning process. In this paper, we present SafeRBench, the first benchmark that assesses LRM safety end-to-end, from inputs and intermediate reasoning to final outputs. (1) Input Characterization: We pioneer the incorporation of risk categories and levels into input design, explicitly accounting for affected groups and severity, and thereby establish a balanced prompt suite reflecting diverse harm gradients. (2) Fine-Grained Output Analysis: We introduce a micro-thought chunking mechanism to segment long reasoning traces into semantically coherent units, enabling fine-grained evaluation across ten safety dimensions. (3) Human Safety Alignment: We validate LLM-based evaluations against human annotations specifically designed to capture safety judgments. Evaluations on 19 LRMs demonstrate that SafeRBench enables detailed, multidimensional safety assessment, offering insights into risks and protective mechanisms from multiple perspectives.
Problem

Research questions and friction points this paper is trying to address.

Assessing safety risks in Large Reasoning Models' reasoning processes
Developing fine-grained evaluation across ten safety dimensions
Validating automated safety assessments against human judgments
Innovation

Methods, ideas, or system contributions that make the work stand out.

Incorporates risk categories and levels into input design
Segments reasoning traces into coherent units for evaluation
Validates LLM-based evaluations against human safety annotations
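The chunk-then-score idea behind the micro-thought mechanism can be sketched in a few lines. This is a hypothetical illustration, not the paper's implementation: the real benchmark presumably uses semantic criteria to place chunk boundaries and an LLM judge to score each chunk on ten safety dimensions, whereas here sentences are grouped by count and scored with a toy keyword heuristic (`chunk_reasoning_trace`, `score_chunk`, and the keyword list are all invented for this sketch).

```python
import re

def chunk_reasoning_trace(trace: str, max_sentences: int = 3) -> list[str]:
    """Split a long reasoning trace into small chunks of consecutive
    sentences. A real system might instead use embedding similarity
    to find semantically coherent boundaries."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", trace) if s.strip()]
    return [
        " ".join(sentences[i:i + max_sentences])
        for i in range(0, len(sentences), max_sentences)
    ]

def score_chunk(chunk: str) -> dict:
    """Toy stand-in for a per-chunk safety judge: a real pipeline would
    call an LLM evaluator and return scores on multiple dimensions."""
    flagged = any(kw in chunk.lower() for kw in ("exploit", "weapon", "bypass"))
    return {"chunk": chunk, "flagged": flagged}

trace = ("First, I consider the user's request. It asks how to bypass a filter. "
         "That would be unsafe. I should refuse and explain why.")
chunks = chunk_reasoning_trace(trace, max_sentences=2)
reports = [score_chunk(c) for c in chunks]
```

Scoring chunk-by-chunk rather than only the final answer is what lets this style of evaluation localize where in the reasoning process harmful content first appears.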