Neural Gate: Mitigating Privacy Risks in LVLMs via Neuron-Level Gradient Gating

📅 2026-03-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses a critical vulnerability of current large vision-language models (LVLMs): they lack consistent refusal capabilities when confronted with privacy-related instructions and are therefore susceptible to malicious exploitation for leaking sensitive information. Existing defense mechanisms generalize poorly and often degrade model utility. To overcome these limitations, we propose Neural Gate, a novel method that, for the first time, employs neuron-level gradient gating to precisely identify and edit the model parameters associated with privacy concepts. By leveraging privacy-concept feature vectors for targeted model editing, Neural Gate substantially improves the generalization of refusal behavior to unseen privacy queries across multiple LVLMs, including MiniGPT and LLaVA, mitigating emerging privacy attacks while preserving near-original performance on standard tasks.
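
The summary describes neuron-level gradient gating only at a high level. The sketch below illustrates one plausible way to localize privacy-associated neurons by scoring activations against a learned privacy-concept feature vector; the hook placement, the activation-times-gradient scoring rule, and all function names are assumptions for illustration, not the paper's exact procedure.

```python
import torch

def score_neurons_by_gradient(model, inputs, privacy_concept_vec, mlp_layer):
    """Score each hidden neuron in `mlp_layer` by how strongly its activation
    gradient aligns with a learned privacy-concept feature vector."""
    cached = {}

    def hook(_module, _inp, out):
        out.retain_grad()          # keep gradients of this non-leaf activation
        cached["h"] = out

    handle = mlp_layer.register_forward_hook(hook)
    model.zero_grad()
    _ = model(**inputs)            # forward pass records the activations
    handle.remove()

    h = cached["h"]                                # (batch, seq, hidden)
    # Project activations onto the privacy-concept direction and backpropagate,
    # so h.grad measures each neuron's influence on that concept.
    (h * privacy_concept_vec).sum().backward()

    # Per-neuron saliency: |activation * gradient|, averaged over batch and tokens.
    return (h * h.grad).abs().mean(dim=(0, 1))     # (hidden,)
```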

📝 Abstract
Large Vision-Language Models (LVLMs) have shown remarkable potential across a wide array of vision-language tasks, leading to their adoption in critical domains such as finance and healthcare. However, their growing deployment also introduces significant security and privacy risks. Malicious actors could potentially exploit these models to extract sensitive information, highlighting a critical vulnerability. Recent studies show that LVLMs often fail to consistently refuse instructions designed to compromise user privacy. While existing privacy-protection approaches have made meaningful progress in preventing the leakage of sensitive data, they are constrained by limitations in both generalization and non-destructiveness: they often struggle to robustly handle unseen privacy-related queries and may inadvertently degrade a model's performance on standard tasks. To address these challenges, we introduce Neural Gate, a novel method for mitigating privacy risks through neuron-level model editing. Our method improves a model's privacy safeguards by increasing its rate of refusal for privacy-related questions, crucially extending this protective behavior to novel sensitive queries not encountered during the editing process. Neural Gate operates by learning a feature vector to identify neurons associated with privacy-related concepts within the model's representation of a subject. This localization then precisely guides the update of model parameters. Through comprehensive experiments on MiniGPT and LLaVA, we demonstrate that our method significantly boosts the model's privacy protection while preserving its original utility.
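
To make the abstract's description of localization-guided parameter updates concrete, here is a minimal sketch of a gradient-gated edit: only the top-scoring privacy-associated neurons (here, rows of one MLP projection) receive updates toward a refusal response. The loss, the top-k threshold, the optimizer choice, the HuggingFace-style forward that returns a loss, and all helper names are assumptions for illustration rather than the paper's actual recipe.

```python
import torch

def gated_refusal_update(model, mlp_layer, inputs, refusal_labels,
                         neuron_saliency, top_k=64, lr=1e-4):
    """Edit only the top-k privacy-associated neurons toward a refusal
    response, leaving every other parameter untouched."""
    # Binary gate over hidden neurons, built from the saliency scores.
    gate = torch.zeros_like(neuron_saliency)
    gate[torch.topk(neuron_saliency, k=top_k).indices] = 1.0   # (hidden,)

    # Freeze everything except the selected MLP projection weight.
    for p in model.parameters():
        p.requires_grad_(False)
    weight = mlp_layer.weight                                  # (hidden, in_features)
    weight.requires_grad_(True)

    optimizer = torch.optim.Adam([weight], lr=lr)
    # Assumes a HuggingFace-style forward that returns .loss when labels are given.
    loss = model(**inputs, labels=refusal_labels).loss
    loss.backward()

    # Gradient gating: zero the gradient rows of non-privacy neurons so the
    # edit is confined to the localized parameters.
    weight.grad *= gate.unsqueeze(1)
    optimizer.step()
    optimizer.zero_grad()
```

In a full pipeline, `neuron_saliency` would come from a localization step like the one sketched above, and the edited model would then be checked against held-out privacy queries and standard-task benchmarks to verify generalization and non-destructiveness.
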
Problem

Research questions and friction points this paper is trying to address.

privacy risks
Large Vision-Language Models
privacy leakage
model generalization
non-destructiveness
Innovation

Methods, ideas, or system contributions that make the work stand out.

neuron-level editing
gradient gating
privacy protection
vision-language models
model editing