Defensive M2S: Training Guardrail Models on Compressed Multi-turn Conversations

📅 2026-01-01
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the high computational overhead of safety guardrail models in large language model deployment, which stems from processing full multi-turn dialogue histories. To mitigate this, the authors propose Defensive M2S, a novel training paradigm that compresses multi-turn conversations into single-turn formats (M2S) for guardrail model fine-tuning, substantially reducing both training and inference costs. The approach integrates three compression templates—hyphenize, numberize, and pythonize—achieving a complexity reduction from O(n²) to O(n) across models including LlamaGuard, Nemotron, and Qwen3Guard. The optimal configuration, combining Qwen3Guard with the hyphenize template, attains a 93.8% attack detection recall on SafeDialBench while cutting inference tokens by 94.6%, outperforming the baseline by 38.9 percentage points.

Technology Category

Application Category

📝 Abstract
Guardrail models are essential for ensuring the safety of Large Language Model (LLM) deployments, but processing full multi-turn conversation histories incurs significant computational cost. We propose Defensive M2S, a training paradigm that fine-tunes guardrail models on Multi-turn to Single-turn (M2S) compressed conversations rather than complete dialogue histories. We provide a formal complexity analysis showing that M2S reduces training cost from $O(n^2)$ to $O(n)$ for $n$-turn conversations. Empirically, on our training dataset (779 samples, avg. 10.6 turns), M2S requires only 169K tokens compared to 15.7M tokens for the multi-turn baseline -- a 93$\times$ reduction. We evaluate Defensive M2S across three guardrail model families (LlamaGuard, Nemotron, Qwen3Guard) and three compression templates (hyphenize, numberize, pythonize) on SafeDialBench, a comprehensive multi-turn jailbreak benchmark. Our best configuration, Qwen3Guard with hyphenize compression, achieves 93.8% attack detection recall while reducing inference tokens by 94.6% (from 3,231 to 173 tokens per conversation). This represents a 38.9 percentage point improvement over the baseline while dramatically reducing both training and inference costs. Our findings demonstrate that M2S compression can serve as an effective efficiency technique for guardrail deployment, enabling scalable safety screening of long multi-turn conversations.
Problem

Research questions and friction points this paper is trying to address.

guardrail models
multi-turn conversations
computational cost
LLM safety
efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Defensive M2S
Guardrail Models
M2S Compression
Efficient Safety Screening
Multi-turn Dialogue Compression
🔎 Similar Papers
No similar papers found.