GuardTrace-VL: Detecting Unsafe Multimodal Reasoning via Iterative Safety Supervision

📅 2025-11-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Multimodal large reasoning models (MLRMs) frequently generate unsafe intermediate reasoning chains in vision-language tasks—exhibiting biases, erroneous visual interpretations, and other hazardous content—yet existing safety mechanisms only guard inputs and final outputs, neglecting the reasoning process itself. Method: We propose GuardTrace, the first safety auditing framework specifically designed for multimodal reasoning chains. It comprises: (1) a novel, diverse, vision-language safety-annotated dataset covering intermediate reasoning steps; (2) a three-stage progressive training paradigm enabling fine-grained, context-aware, and tiered risk identification; and (3) a joint vision-language analysis pipeline integrated with LLM-human collaborative verification. Contribution/Results: Evaluated on cross-domain test sets, GuardTrace achieves an F1 score of 93.1%, surpassing the state-of-the-art by 13.5%. It is the first framework to enable real-time, end-to-end safety monitoring across the entire multimodal reasoning process.

📝 Abstract
Multimodal large reasoning models (MLRMs) are increasingly deployed for vision-language tasks that produce explicit intermediate rationales. However, reasoning traces can contain unsafe content even when the final answer is non-harmful, creating deployment risks. Existing multimodal safety guards primarily evaluate only the input question and the final answer, neglecting the intermediate reasoning process. This oversight allows undetected harm, such as biased inferences or policy-violating use of visual context, to emerge during reasoning. We introduce GuardTrace-VL, a vision-aware safety auditor that monitors the full Question-Thinking-Answer (QTA) pipeline via joint image-text analysis, enabling detection of unsafe content as it emerges in the reasoning stage. To support training and evaluation, we construct the GuardTrace dataset, which is generated through diverse prompting strategies and refined via an MLRM- and human-based voting and verification pipeline. Furthermore, we propose a three-stage progressive training scheme combined with the data refinement process, enabling the model to learn nuanced and context-dependent safety preferences according to different risk levels. On our proposed test set covering both in-domain and out-of-domain scenarios, the GuardTrace-VL model achieves an F1 score of 93.1% on unsafe reasoning detection tasks, a 13.5% improvement over the previous strongest multimodal safety defense methods. The code will be made publicly available.
Problem

Research questions and friction points this paper is trying to address.

Detecting unsafe content in multimodal reasoning traces before final answers
Addressing safety oversight in intermediate reasoning processes of vision-language models
Identifying biased inferences and policy violations during multimodal reasoning stages
Innovation

Methods, ideas, or system contributions that make the work stand out.

Monitors full reasoning pipeline via joint image-text analysis
Uses progressive training with data refinement for safety
Achieves 93.1% F1 on unsafe reasoning detection, a 13.5% improvement over prior multimodal safety defenses
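The monitoring idea above can be sketched as step-level auditing of a Question-Thinking-Answer trace: each intermediate reasoning step is scored for risk, so a trace is flagged even when the final answer looks benign. This is a minimal illustrative sketch, not the paper's method; the `QTATrace` structure, the risk tiers, and the keyword-based `score_step` stand in for GuardTrace-VL's trained joint image-text auditor.

```python
from dataclasses import dataclass
from typing import List, Optional

# Hypothetical risk tiers, loosely mirroring the paper's tiered risk levels.
SAFE, SENSITIVE, UNSAFE = 0, 1, 2

@dataclass
class QTATrace:
    question: str
    thinking: List[str]  # intermediate reasoning steps
    answer: str

def score_step(text: str) -> int:
    """Placeholder step-level risk scorer. The real auditor would run a
    trained vision-language model jointly over the image and the text;
    this keyword check is purely illustrative."""
    lowered = text.lower()
    if "weapon" in lowered:
        return UNSAFE
    if "stereotype" in lowered:
        return SENSITIVE
    return SAFE

def audit_trace(trace: QTATrace) -> dict:
    """Audit the full QTA pipeline: flag the first reasoning step whose
    risk reaches UNSAFE, even if the final answer is harmless."""
    for i, step in enumerate(trace.thinking):
        if score_step(step) == UNSAFE:
            return {"verdict": "unsafe", "flagged_step": i}
    if score_step(trace.answer) == UNSAFE:
        return {"verdict": "unsafe", "flagged_step": None}
    return {"verdict": "safe", "flagged_step": None}

trace = QTATrace(
    question="What is shown in this image?",
    thinking=["The image shows a kitchen.",
              "Instructions for assembling a weapon could be inferred here."],
    answer="The image shows an ordinary kitchen scene.",
)
print(audit_trace(trace))  # {'verdict': 'unsafe', 'flagged_step': 1}
```

The point of the sketch is the control flow: the unsafe content lives only in the reasoning trace, so an input/output-only guard (question plus answer) would pass this example while step-level auditing catches it.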
👥 Authors
Yuxiao Xiang — School of Cyber Science and Technology, University of Science and Technology of China
Junchi Chen — School of Cyber Science and Technology, University of Science and Technology of China
Zhenchao Jin — USTC > HKU (computer vision, machine learning, information security)
Changtao Miao — University of Science and Technology of China
Haojie Yuan — Individual Researcher
Qi Chu — University of Science and Technology of China (computer vision, artificial intelligence security)
Tao Gong — School of Cyber Science and Technology, University of Science and Technology of China
Nenghai Yu — University of Science and Technology of China (computer vision, artificial intelligence, information hiding)