🤖 AI Summary
Machine unlearning for multimodal large language models (MLLMs) faces two key challenges: inconsistent cross-modal forgetting and degradation of general-purpose performance. To address both, this paper proposes a selective unlearning method based on cross-modal influential neuron path editing. We introduce modality-specific attribution scores that model the hierarchical information flow, enabling identification and coordinated editing of critical cross-modal neuron paths. We further design a representation misdirection strategy that removes sensitive knowledge while preserving generalization capability. Experiments demonstrate that our method achieves a forgetting rate of up to 87.75% on multimodal tasks while improving general knowledge retention by up to 54.26%; on text-only tasks, it attains up to 80.65% forgetting with 77.9% performance retention, substantially outperforming existing neuron-editing approaches.
📝 Abstract
Multimodal Large Language Models (MLLMs) extend foundation models to real-world applications by integrating inputs such as text and vision. However, their broad knowledge capacity raises growing concerns about privacy leakage, toxicity mitigation, and intellectual property violations. Machine Unlearning (MU) offers a practical solution by selectively forgetting targeted knowledge while preserving overall model utility. When applied to MLLMs, existing neuron-editing-based MU approaches face two fundamental challenges: (1) forgetting becomes inconsistent across modalities because existing point-wise attribution methods fail to capture the structured, layer-by-layer information flow that connects different modalities; and (2) general knowledge performance declines when sensitive neurons that also support important reasoning paths are pruned, as this disrupts the model's ability to generalize. To alleviate these limitations, we propose a multimodal influential neuron path editor (MIP-Editor) for MU. Our approach introduces modality-specific attribution scores to identify influential neuron paths responsible for encoding forget-set knowledge and applies influential-path-aware neuron editing via representation misdirection. This strategy enables effective, coordinated forgetting across modalities while preserving the model's general capabilities. Experimental results demonstrate that MIP-Editor achieves superior unlearning performance on multimodal tasks, with a maximum forgetting rate of 87.75% and up to 54.26% improvement in general knowledge retention. On textual tasks, MIP-Editor achieves up to 80.65% forgetting and preserves 77.9% of general performance. Code is available at https://github.com/PreckLi/MIP-Editor.
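The two ideas in the abstract (selecting influential neuron paths per modality, then misdirecting only those neurons' representations) can be illustrated with a minimal sketch. This is a conceptual toy, not the authors' MIP-Editor implementation: the attribution scores here are random stand-ins for whatever modality-specific scoring the paper uses, and `influential_path` / `misdirect` are hypothetical helper names.

```python
import numpy as np

rng = np.random.default_rng(0)
n_layers, n_neurons = 4, 8

# Modality-specific attribution scores per neuron, e.g. computed separately
# on visual and textual forget-set inputs (random placeholders here).
scores = {
    "vision": rng.random((n_layers, n_neurons)),
    "text": rng.random((n_layers, n_neurons)),
}

def influential_path(score_mat, k=2):
    """Pick the top-k neurons in each layer, forming a layer-by-layer
    neuron path rather than an unstructured point-wise selection."""
    return [np.argsort(layer)[-k:].tolist() for layer in score_mat]

paths = {m: influential_path(s) for m, s in scores.items()}

def misdirect(hidden, layer, path, rng):
    """Representation misdirection: overwrite the path neurons' activations
    with random values so the forget-set knowledge they encode is destroyed,
    while every other neuron (general knowledge) is left untouched."""
    out = hidden.copy()
    idx = path[layer]
    out[idx] = rng.normal(size=len(idx))
    return out

hidden = rng.random(n_neurons)
edited = misdirect(hidden, layer=0, path=paths["vision"], rng=rng)

# Only the selected path neurons are modified.
untouched = [i for i in range(n_neurons) if i not in paths["vision"][0]]
assert np.allclose(hidden[untouched], edited[untouched])
```

The key property the sketch demonstrates is locality: editing is confined to the attributed path, which is how the method aims to forget targeted knowledge without pruning neurons that general reasoning also relies on.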