🤖 AI Summary
To address the high service latency and training-inference mismatch caused by full-response moderation in real-time LLM serving, this paper proposes a streaming content monitoring framework that enables token-level harmfulness detection and early termination during generation. Methodologically: (1) we construct FineHarm, a dataset of 29K prompt-response pairs with fine-grained annotations that support token-level supervision for partial-output detection; (2) we design a streaming content monitor (SCM) trained with dual-granularity supervision at both the token and response levels; (3) we show that the SCM can double as a pseudo-harmfulness annotator for improving safety alignment. Experiments show the model achieves a macro F1 score of 0.95+, comparable to full detection, while seeing only the first ~18% of response tokens on average; alignment guided by its pseudo-labels yields a higher harmlessness score than DPO.
📝 Abstract
Though safety alignment has been applied to most large language models (LLMs), LLM service providers generally deploy subsequent moderation as an external safety guardrail in real-world products. Existing moderators mainly practice conventional full detection, which determines harmfulness based on the complete LLM output, causing high service latency. Recent works pay more attention to partial detection, where moderators oversee the generation midway and stop the output early if harmfulness is detected, but they directly apply moderators trained under the full detection paradigm to incomplete outputs, introducing a training-inference gap that lowers performance. In this paper, we explore how to form a data-and-model solution that natively supports partial detection. For the data, we construct FineHarm, a dataset consisting of 29K prompt-response pairs with fine-grained annotations that provide reasonable supervision for token-level training. We then propose the streaming content monitor (SCM), which is trained with dual supervision of response- and token-level labels and follows the output stream of the LLM to make a timely judgment of harmfulness. Experiments show that SCM achieves a macro F1 score of 0.95+, comparable to full detection, while seeing only the first 18% of tokens in responses on average. Moreover, SCM can serve as a pseudo-harmfulness annotator for improving safety alignment, leading to a higher harmlessness score than DPO.
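The partial-detection loop described above can be sketched minimally: a monitor scores the growing token prefix at each step and terminates generation as soon as the score crosses a threshold. The `harm_score` function below is a hypothetical placeholder (a keyword counter), not the paper's trained SCM; in the actual system a learned model trained with token- and response-level supervision would produce this score.

```python
# Minimal sketch of streaming moderation with early termination.
# `harm_score` is a hypothetical stand-in for a trained streaming
# content monitor; the blocklist heuristic is for illustration only.

def harm_score(prefix_tokens):
    """Hypothetical harmfulness score in [0, 1] for the current prefix."""
    blocklist = {"weapon", "exploit"}
    hits = sum(tok in blocklist for tok in prefix_tokens)
    return min(1.0, hits / 2)

def stream_with_monitor(token_stream, threshold=0.5):
    """Yield tokens as they arrive; stop early if the prefix looks harmful.

    Returns the tokens actually emitted and a flag indicating whether
    generation was terminated early as harmful.
    """
    prefix = []
    for tok in token_stream:
        prefix.append(tok)
        if harm_score(prefix) >= threshold:
            return prefix, True   # early termination: harmful prefix
    return prefix, False          # full response passed the monitor

tokens, flagged = stream_with_monitor(iter(["how", "to", "bake", "bread"]))
```

Because the monitor runs on each incremental prefix rather than the finished response, a harmful generation is cut off after only a fraction of its tokens, which is the source of the latency savings the paper reports.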