SGM: Safety Glasses for Multimodal Large Language Models via Neuron-Level Detoxification

📅 2025-12-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
Multimodal large language models (MLLMs) inherit harmful, biased, and NSFW content from pretraining corpora, rendering them vulnerable to adversarial triggers; existing training-free, black-box detoxification methods exhibit limited efficacy. This paper proposes a white-box, neuron-level intervention framework: (1) the novel “Safety Goggles” mechanism, which selectively suppresses cross-modal harmful neuron activations via expert-weighted soft inhibition—requiring no parameter updates; (2) MM-TOXIC-QA, the first multimodal toxicity evaluation benchmark tailored for MLLMs; and (3) support for composable defense integration (SGM*). Evaluated on open-source MLLMs, our approach reduces harmful generation rates from 48.2% to 2.5%, while preserving linguistic fluency and multimodal reasoning capabilities. It significantly enhances adversarial robustness without compromising model functionality.

📝 Abstract
Disclaimer: Samples in this paper may be harmful and cause discomfort. Multimodal large language models (MLLMs) enable multimodal generation but inherit toxic, biased, and NSFW signals from weakly curated pretraining corpora, causing safety risks, especially under adversarial triggers that opaque, training-free detoxification methods struggle to handle. We propose SGM, a white-box neuron-level multimodal intervention that acts like safety glasses for toxic neurons: it selectively recalibrates a small set of toxic expert neurons via expertise-weighted soft suppression, neutralizing harmful cross-modal activations without any parameter updates. We establish MM-TOXIC-QA, a multimodal toxicity evaluation framework, and compare SGM with existing detoxification techniques. Experiments on open-source MLLMs show that SGM mitigates toxicity in standard and adversarial conditions, cutting harmful generation rates from 48.2% to 2.5% while preserving fluency and multimodal reasoning. SGM is extensible: its combined defenses, denoted SGM*, integrate with existing detoxification methods for stronger safety performance, providing an interpretable, low-cost solution for toxicity-controlled multimodal generation.
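The core mechanism described above — expertise-weighted soft suppression of flagged neurons, with no parameter updates — can be illustrated with a minimal sketch. This is not the authors' released code; the function name, the index/score arrays, and the `alpha` strength parameter are all hypothetical, and a real deployment would apply this inside the model's forward pass (e.g. via an activation hook) using toxicity scores estimated from probing data.

```python
import numpy as np

def soft_suppress(activations, toxic_idx, expertise, alpha=0.9):
    """Expertise-weighted soft suppression of flagged neurons (illustrative).

    activations: hidden activations, shape (..., d)
    toxic_idx:   indices of neurons flagged as toxic (hypothetical labels)
    expertise:   per-flagged-neuron toxicity scores in [0, 1]
    alpha:       maximum suppression strength; model weights stay untouched
    """
    out = activations.copy()
    # Soft (not hard-zeroing) suppression: a higher toxicity/expertise score
    # damps the neuron more strongly; a score of 0 leaves it unchanged.
    out[..., toxic_idx] *= 1.0 - alpha * expertise
    return out

# Example: four neurons, two flagged with scores 1.0 and 0.5.
acts = np.ones((1, 4))
scaled = soft_suppress(acts, np.array([1, 3]), np.array([1.0, 0.5]), alpha=0.8)
# Flagged neurons are scaled to 1 - 0.8*score; unflagged neurons pass through.
```

Because the intervention only rescales activations at inference time, it is training-free by construction, which matches the paper's claim of a low-cost, parameter-preserving defense.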
Problem

Research questions and friction points this paper is trying to address.

Removes toxic content inherited by multimodal large language models
Addresses safety risks in adversarial conditions without retraining
Preserves model fluency and reasoning while reducing harmful outputs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Selectively recalibrates toxic expert neurons via soft suppression
Neutralizes harmful cross-modal activations without parameter updates
Integrates with existing methods for stronger safety performance
Hongbo Wang
Graduate School of Information Science and Technology, The University of Tokyo, Japan
MaungMaung AprilPyone
Information and Society Research Division, National Institute of Informatics, Japan
Isao Echizen
National Institute of Informatics / University of Tokyo / SOKENDAI
Multimedia security · Multimedia forensics · AI Security · Biometrics · Privacy