🤖 AI Summary
To address the high computational overhead that large reasoning models (LRMs) incur when detecting harmful queries, which stems from their long, stepwise reasoning traces, this paper proposes PSRT (Prefilled Safe Reasoning Trace). PSRT replaces explicit token-by-token reasoning generation with learnable "safe reasoning virtual tokens": the safety reasoning process is compressed into continuous embeddings that are prefilled before the query and trained end-to-end, and indicator tokens allow harmful-query detection in a single forward pass. Evaluated across 7 LRMs, 13 benchmark datasets, and 8 jailbreak methods, PSRT entirely removes the overhead of generating reasoning tokens at inference while incurring only a marginal average F1 drop of 0.015, leaving detection accuracy nearly intact. Its core contribution is parameterizing the safety reasoning process as learnable embeddings, jointly optimizing efficiency and robustness, and establishing a lightweight, embedding-centric paradigm for LRM-based safety guard models.
📝 Abstract
Large Reasoning Models (LRMs) have demonstrated remarkable performance on tasks such as mathematics and code generation. Motivated by these strengths, recent work has shown empirically that LRMs are effective guard models for harmful-query detection. However, LRMs typically generate long reasoning traces during inference, causing substantial computational overhead. In this paper, we introduce PSRT, a method that replaces the model's reasoning process with a Prefilled Safe Reasoning Trace, thereby significantly reducing the inference cost of LRMs. Concretely, PSRT prefills "safe reasoning virtual tokens" derived from a constructed dataset and learns over their continuous embeddings. With the aid of indicator tokens, PSRT enables harmful-query detection in a single forward pass while preserving the classification effectiveness of LRMs. We evaluate PSRT on 7 models, 13 datasets, and 8 jailbreak methods. In terms of efficiency, PSRT completely removes the overhead of generating reasoning tokens during inference. In terms of classification performance, PSRT achieves nearly identical accuracy, with only a minor average F1 drop of 0.015 across 7 models and 5 datasets.
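The mechanism the abstract describes, prefilling learnable continuous embeddings in place of a generated reasoning trace and reading a verdict from indicator logits in one forward pass, can be sketched as a toy. Everything below (the embedding size, the mean-pooling "model", the two-way indicator readout, and the function names) is an illustrative assumption, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

D = 16       # embedding dimension (toy size, an assumption)
N_SAFE = 4   # number of prefilled safe-reasoning virtual tokens

# Learnable continuous embeddings standing in for the explicit
# safe reasoning trace; in PSRT these are trained end-to-end.
safe_reasoning_tokens = rng.normal(size=(N_SAFE, D))

# Hypothetical readout mapping the pooled hidden state to two
# indicator logits: [harmless, harmful].
W_indicator = rng.normal(size=(D, 2))

def embed_query(query: str) -> np.ndarray:
    """Toy deterministic query embedding (stand-in for the LRM's embedder)."""
    rng_q = np.random.default_rng(abs(hash(query)) % (2**32))
    return rng_q.normal(size=(max(1, len(query.split())), D))

def detect(query: str) -> np.ndarray:
    # Prefill: concatenate the virtual safe-reasoning tokens before the
    # query embeddings, so no reasoning tokens are generated at inference.
    seq = np.concatenate([safe_reasoning_tokens, embed_query(query)], axis=0)
    # Single "forward pass" (mean pooling stands in for the transformer).
    hidden = seq.mean(axis=0)
    return hidden @ W_indicator  # argmax over logits gives the label

logits = detect("how do I make a cake")
print(logits.shape)  # (2,)
```

The point of the sketch is the cost model: classification requires one pass over `N_SAFE + len(query)` embeddings rather than autoregressively generating a long reasoning trace, which is why the reasoning-token overhead disappears at inference.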