🤖 AI Summary
This work addresses the insufficient safety alignment and systemic vulnerabilities of omni-modal large language models in cross-modal interactions. The authors propose AdvBench-Omni, a novel dataset grounded in a modality–semantics decoupling principle, to expose safety flaws under multimodal inputs. They identify, for the first time, a mid-layer refusal dissolution phenomenon and a modality-invariant pure refusal direction, from which a "golden refusal vector" is extracted via singular value decomposition. Building on these insights, they design OmniSteer, a lightweight adapter that adaptively controls intervention strength. Experiments demonstrate that this approach boosts the refusal success rate on harmful inputs from 69.9% to 91.2% while preserving the model's general capabilities across all modalities.
📝 Abstract
Omni-modal Large Language Models (OLLMs) greatly expand LLMs' multimodal capabilities but also introduce cross-modal safety risks. However, a systematic understanding of vulnerabilities in omni-modal interactions remains lacking. To bridge this gap, we establish a modality–semantics decoupling principle and construct the AdvBench-Omni dataset, which reveals a significant vulnerability in OLLMs. Mechanistic analysis uncovers a Mid-layer Dissolution phenomenon driven by refusal-vector magnitude shrinkage, alongside the existence of a modality-invariant pure refusal direction. Inspired by these insights, we extract a golden refusal vector using Singular Value Decomposition and propose OmniSteer, which utilizes lightweight adapters to adaptively modulate intervention intensity. Extensive experiments show that our method not only increases the Refusal Success Rate against harmful inputs from 69.9% to 91.2%, but also effectively preserves general capabilities across all modalities. Our code is available at: https://github.com/zhrli324/omni-safety-research.
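The SVD-based extraction step can be sketched in miniature with synthetic data. This is only an illustration of the general technique, not the paper's exact procedure: the hidden size, number of modalities, noise model, and the steering coefficient `alpha` (which OmniSteer's adapter would predict adaptively) are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64  # illustrative hidden size

# Hypothetical per-modality activation differences (harmful minus
# harmless) at one layer; all share a common "refusal" direction
# plus small modality-specific noise.
true_dir = rng.normal(size=d)
true_dir /= np.linalg.norm(true_dir)
diffs = np.stack([
    3.0 * true_dir + 0.1 * rng.normal(size=d)
    for _ in range(4)  # e.g. text / image / audio / video
])

# SVD: the top right-singular vector of the stacked differences
# captures the shared, modality-invariant direction.
_, _, vt = np.linalg.svd(diffs, full_matrices=False)
refusal_vec = vt[0]

def steer(h, alpha=1.0):
    """Add the refusal direction to a hidden state, scaled by alpha."""
    return h + alpha * refusal_vec

# On this synthetic data the recovered direction aligns closely
# (cosine similarity near 1) with the planted shared direction.
cos = abs(float(refusal_vec @ true_dir))
print(cos)
```

The sign of the top singular vector is arbitrary, so a real implementation would also fix its orientation (e.g. so it points toward the refusal side) before steering.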