Beyond Content Safety: Real-Time Monitoring for Reasoning Vulnerabilities in Large Language Models

📅 2026-03-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses a critical gap: ensuring the safety of the reasoning process itself during complex LLM inference. It formally defines "reasoning safety" for the first time, establishes a taxonomy of nine categories of unsafe reasoning behaviors, and introduces the first real-time, step-level safety monitoring mechanism. The monitor is an external LLM that runs alongside the target model and inspects each reasoning step via a taxonomy-embedded prompt; simulated adversarial attacks, such as reasoning hijacking and denial-of-service, are used to validate its detection of unsafe steps. Evaluated on a test set of 450 reasoning chains, the method achieves 84.88% accuracy in localizing unsafe steps and 85.37% accuracy in classifying error types, substantially outperforming existing baselines, including hallucination detectors and process reward models.

📝 Abstract
Large language models (LLMs) increasingly rely on explicit chain-of-thought (CoT) reasoning to solve complex tasks, yet the safety of the reasoning process itself remains largely unaddressed. Existing work on LLM safety focuses on content safety (detecting harmful, biased, or factually incorrect outputs) and treats the reasoning chain as an opaque intermediate artifact. We identify reasoning safety as an orthogonal and equally critical security dimension: the requirement that a model's reasoning trajectory be logically consistent, computationally efficient, and resistant to adversarial manipulation. We make three contributions. First, we formally define reasoning safety and introduce a nine-category taxonomy of unsafe reasoning behaviors, covering input parsing errors, reasoning execution errors, and process management errors. Second, we conduct a large-scale prevalence study annotating 4111 reasoning chains from both natural reasoning benchmarks and four adversarial attack methods (reasoning hijacking and denial-of-service), confirming that all nine error types occur in practice and that each attack induces a mechanistically interpretable signature. Third, we propose a Reasoning Safety Monitor: an external LLM-based component that runs in parallel with the target model, inspects each reasoning step in real time via a taxonomy-embedded prompt, and dispatches an interrupt signal upon detecting unsafe behavior. Evaluation on a 450-chain static benchmark shows that our monitor achieves up to 84.88% step-level localization accuracy and 85.37% error-type classification accuracy, outperforming hallucination detectors and process reward model baselines by substantial margins. These results demonstrate that reasoning-level monitoring is both necessary and practically achievable, and establish reasoning safety as a foundational concern for the secure deployment of large reasoning models.
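The monitor described in the abstract (an external judge that inspects each step in real time and dispatches an interrupt on unsafe behavior) can be sketched as a simple watch loop. This is a minimal illustration, not the paper's implementation: the prompt template, the `toy_judge` stand-in for the external monitor LLM, and the coarse error-group names are all assumptions; the abstract names the three error groups but not the nine fine-grained categories.

```python
from dataclasses import dataclass
from typing import Callable, Iterable, Optional

# Three coarse error groups named in the abstract; the nine fine-grained
# categories are not listed there, so this sketch keys on groups only.
ERROR_GROUPS = (
    "input_parsing_error",
    "reasoning_execution_error",
    "process_management_error",
)

# Hypothetical taxonomy-embedded prompt; the paper's actual template differs.
PROMPT_TEMPLATE = (
    "You are a reasoning-safety monitor. Steps so far:\n{history}\n"
    "Judge the new step:\n{step}\n"
    "Reply SAFE, or UNSAFE:<error_group>."
)

@dataclass
class Interrupt:
    step_index: int   # index of the step that triggered the monitor
    error_group: str  # coarse error group reported by the judge

class ReasoningSafetyMonitor:
    """Runs alongside a target model, judging each reasoning step as it arrives."""

    def __init__(self, judge: Callable[[str], str]):
        self.judge = judge  # e.g. a wrapper around an external monitor LLM

    def watch(self, steps: Iterable[str]) -> Optional[Interrupt]:
        history: list[str] = []
        for i, step in enumerate(steps):
            prompt = PROMPT_TEMPLATE.format(history="\n".join(history), step=step)
            verdict = self.judge(prompt)
            if verdict.startswith("UNSAFE:"):
                # Dispatch the interrupt signal instead of letting the chain run on.
                return Interrupt(step_index=i, error_group=verdict.split(":", 1)[1])
            history.append(step)
        return None  # chain completed with no unsafe step detected

# Toy judge for illustration: flags any step that mentions looping forever,
# a crude proxy for a denial-of-service-style process management error.
def toy_judge(prompt: str) -> str:
    return ("UNSAFE:process_management_error"
            if "repeat forever" in prompt else "SAFE")

monitor = ReasoningSafetyMonitor(toy_judge)
signal = monitor.watch(["Parse the input.", "Compute 2+2=4.", "Now repeat forever."])
```

In a real deployment the judge would call the external monitor model in parallel with target-model decoding, and the `Interrupt` would abort or redirect generation rather than just being returned.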
Problem

Research questions and friction points this paper is trying to address.

reasoning safety
large language models
chain-of-thought reasoning
adversarial manipulation
reasoning vulnerabilities
Innovation

Methods, ideas, or system contributions that make the work stand out.

reasoning safety
chain-of-thought monitoring
adversarial reasoning attacks
real-time LLM oversight
reasoning vulnerability taxonomy