🤖 AI Summary
This work addresses the challenge of diagnosing and mitigating elusive unsafe behavior pathways in large vision-language models. The authors propose a novel approach that integrates causal mediation analysis with a dual-modality safety subspace. Specifically, they first employ causal discovery to identify the critical neurons and layers responsible for unsafe outputs, then construct a cross-modal safety subspace into which activations are dynamically projected during inference, suppressing harmful features while preserving semantic fidelity. This method achieves the first precise diagnosis and general-purpose repair of unsafe channels in vision-language models, significantly outperforming existing activation steering and alignment techniques across multiple safety benchmarks. It demonstrates strong defense generalization and favorable transferability while leaving multimodal performance uncompromised.
📝 Abstract
Large Vision-Language Models (LVLMs) have achieved impressive performance across multimodal understanding and reasoning tasks, yet their internal safety mechanisms remain opaque and poorly controlled. In this work, we present CARE, a comprehensive framework for diagnosing and repairing unsafe channels within LVLMs. We first perform causal mediation analysis to identify neurons and layers that are causally responsible for unsafe behaviors. Based on these findings, we introduce a dual-modal safety subspace projection method that learns generalized safety subspaces for both the visual and textual modalities through generalized eigen-decomposition between benign and malicious activations. During inference, activations are dynamically projected toward these safety subspaces via a hybrid fusion mechanism that adaptively balances visual and textual corrections, effectively suppressing unsafe features while preserving semantic fidelity. Extensive experiments on multiple safety benchmarks demonstrate that our causal-subspace repair framework significantly enhances safety robustness without degrading general multimodal capabilities, outperforming prior activation steering and alignment-based baselines. Additionally, our method exhibits good transferability, defending even against unseen attacks.
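The abstract does not spell out the projection math, but the core idea of deriving an "unsafe" subspace via generalized eigen-decomposition between benign and malicious activations can be sketched. The following is a minimal, illustrative NumPy/SciPy sketch, not the paper's actual implementation: the function names, the number of directions `k`, the soft-projection weight `alpha`, and the use of activation covariances are all assumptions for illustration.

```python
import numpy as np
from scipy.linalg import eigh  # solves the generalized symmetric eigenproblem

def fit_safety_projector(benign_acts, malicious_acts, k=4, eps=1e-4):
    """Sketch: find directions where malicious activations carry excess
    variance relative to benign ones, by solving C_mal v = lambda * C_ben v.
    (Illustrative stand-in for the paper's generalized eigen-decomposition.)"""
    d = benign_acts.shape[1]
    c_ben = np.cov(benign_acts, rowvar=False) + eps * np.eye(d)   # regularized
    c_mal = np.cov(malicious_acts, rowvar=False) + eps * np.eye(d)
    # eigh returns eigenvalues in ascending order; the largest ones mark
    # directions disproportionately expressed in malicious activations.
    _, v = eigh(c_mal, c_ben)
    unsafe_dirs = v[:, -k:]
    # Generalized eigenvectors are not orthonormal in general; orthonormalize.
    q, _ = np.linalg.qr(unsafe_dirs)
    # Projector onto the orthogonal complement of the unsafe subspace.
    return np.eye(d) - q @ q.T

def repair(activations, projector, alpha=1.0):
    """Softly move activations away from the unsafe directions;
    alpha=1.0 removes them entirely, smaller values interpolate."""
    return (1 - alpha) * activations + alpha * (activations @ projector)

# Toy demo: malicious activations share a planted "unsafe" direction.
rng = np.random.default_rng(0)
benign = rng.normal(size=(256, 32))
spike = rng.normal(size=32)
malicious = benign[:128] + rng.normal(size=(128, 1)) * spike
P = fit_safety_projector(benign, malicious, k=1)
repaired = repair(malicious, P)
```

In this toy setup, variance along the planted direction is strongly suppressed in `repaired` while the remaining coordinates are largely untouched, which mirrors the stated goal of removing unsafe features without destroying semantics.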