Modality-Aware Neuron Pruning for Unlearning in Multimodal Large Language Models

📅 2025-02-21
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Multimodal large language models (MLLMs) suffer from inter-modal knowledge entanglement, hindering selective forgetting of sensitive information and risking privacy leakage and knowledge degradation. To address this, we propose MANU, a modality-aware neuron pruning framework. MANU introduces the first cross-modal neuron importance quantification method, integrating gradient sensitivity analysis with modality-specific attribution to enable fine-grained, modality-disentangled importance assessment. It then employs a two-stage strategy: (i) identification of cross-modally critical neurons and (ii) modality-customized pruning, followed by lightweight retraining to preserve downstream performance. Evaluated across diverse MLLM architectures, MANU achieves an average 32.7% improvement in forgetting success rate, ensures balanced forgetting across modalities, and retains over 98.4% of original task performance, significantly outperforming existing unimodal forgetting approaches.
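The importance-quantification step summarized above can be illustrated with a minimal NumPy toy. This sketch assumes a first-order (gradient × activation) saliency score and hypothetical per-modality activation and gradient arrays; the paper's exact cross-modal attribution is not reproduced here, and `neuron_importance` is an illustrative helper, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical hidden activations for the SAME 64 neurons under two
# modalities, collected on a forget set of 32 examples.
acts = {"text": rng.normal(size=(32, 64)), "image": rng.normal(size=(32, 64))}
# Hypothetical gradients of the forget-set loss w.r.t. those activations.
grads = {m: rng.normal(size=(32, 64)) for m in acts}

def neuron_importance(acts, grads):
    """Taylor-style saliency: mean |activation * gradient| per neuron."""
    return {m: np.abs(acts[m] * grads[m]).mean(axis=0) for m in acts}

imp = neuron_importance(acts, grads)

# Modality-disentangled view: take each modality's top-k neurons, then
# intersect to find the cross-modally critical ones.
k = 8
top = {m: set(np.argsort(v)[-k:]) for m, v in imp.items()}
cross_modal = top["text"] & top["image"]
```

Neurons in `cross_modal` would be candidates for the cross-modally critical set, while neurons appearing in only one modality's top-k would receive modality-customized treatment.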

📝 Abstract
Training generative models such as Large Language Models (LLMs) and Multimodal Large Language Models (MLLMs) on massive datasets can lead them to memorize and inadvertently reveal sensitive information, raising ethical and privacy concerns. While prior work has explored this issue for LLMs, it poses a unique challenge for MLLMs because knowledge is entangled across modalities, making comprehensive unlearning more difficult. To address this challenge, we propose Modality Aware Neuron Unlearning (MANU), a novel unlearning framework for MLLMs designed to selectively clip neurons based on their relative importance to the targeted forget data, curated for different modalities. Specifically, MANU consists of two stages: important neuron selection and selective pruning. The first stage identifies and collects the neurons most influential on the targeted forget knowledge across modalities, while the second stage prunes those selected neurons. MANU effectively isolates and removes the neurons that contribute most to the forget data within each modality while preserving the integrity of retained knowledge. Our experiments across various MLLM architectures show that MANU achieves more balanced and comprehensive unlearning in each modality without largely affecting overall model utility.
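The abstract's second stage, selective pruning, amounts to zeroing the parameters of the selected neurons before any lightweight retraining. A minimal sketch of that step on a single linear layer, assuming a hypothetical `prune_neurons` helper (output neurons correspond to weight rows); this is an illustration of the pruning idea, not the authors' implementation:

```python
import numpy as np

def prune_neurons(weight, bias, neuron_ids):
    """Zero out the rows (output neurons) selected for unlearning."""
    w, b = weight.copy(), bias.copy()
    w[list(neuron_ids), :] = 0.0
    b[list(neuron_ids)] = 0.0
    return w, b

# Toy 4-neuron layer with 3 inputs; prune neurons 1 and 3.
W = np.ones((4, 3))
b = np.ones(4)
W2, b2 = prune_neurons(W, b, {1, 3})

# Pruned neurons now output exactly zero for any input,
# while the retained neurons are untouched.
x = np.array([0.5, -1.0, 2.0])
out = W2 @ x + b2
```

In a real MLLM the same masking would be applied per modality to the neuron sets identified in stage one, followed by the brief retraining the summary mentions to recover retained-task utility.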
Problem

Research questions and friction points this paper is trying to address.

Multimodal Large Language Models
sensitive information memorization
selective neuron pruning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Modality Aware Neuron Unlearning
Selective neuron pruning
Multimodal data handling