🤖 AI Summary
Pretrained vision-language models (VLMs) retain redundant domain-specific information, raising computational overhead and privacy-leakage risks. Existing unlearning methods focus on class-level erasure and cannot selectively forget specific visual domains (e.g., illustrations), a common requirement in practice. To address this, we propose Approximate Domain Unlearning (ADU), a novel task that aims to significantly degrade model accuracy on a target domain (e.g., illustrations) while strictly preserving performance on non-target domains (e.g., photographs). We introduce a domain-decoupled fine-tuning framework that leverages an instance-aware domain separation mechanism to disentangle entangled domain distributions, enabling fine-grained and controllable domain-level knowledge removal. Extensive experiments across multiple benchmarks demonstrate that our method substantially outperforms existing baselines, achieving superior forgetting selectivity and strong cross-domain robustness.
📝 Abstract
Pre-trained Vision-Language Models (VLMs) exhibit strong generalization capabilities, enabling them to recognize a wide range of objects across diverse domains without additional training. However, they often retain irrelevant information beyond the requirements of specific downstream tasks, raising concerns about computational efficiency and potential information leakage. This has motivated growing interest in approximate unlearning, which aims to selectively remove unnecessary knowledge while preserving overall model performance. Existing approaches to approximate unlearning have primarily focused on class unlearning, where a VLM is retrained to fail to recognize specified object classes while maintaining accuracy for others. However, merely forgetting object classes is often insufficient in practical applications. For instance, an autonomous driving system should accurately recognize real cars while not misrecognizing illustrated cars on roadside advertisements as real ones; such confusion could be hazardous. In this paper, we introduce Approximate Domain Unlearning (ADU), a novel problem setting that requires reducing recognition accuracy for images from specified domains (e.g., illustration) while preserving accuracy for other domains (e.g., real). ADU presents new technical challenges: because of the strong domain generalization capability of pre-trained VLMs, domain distributions are highly entangled in the feature space, making naive approaches that simply penalize target domains ineffective. To tackle this limitation, we propose a novel approach that explicitly disentangles domain distributions and adaptively captures instance-specific domain information. Extensive experiments show that our approach outperforms baselines built upon VLM tuning techniques, paving the way for practical and fine-grained unlearning in VLMs. Code: https://kodaikawamura.github.io/Domain_Unlearning/.
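To make the ADU objective concrete, the following is a minimal, illustrative sketch (not the paper's actual method) of how the two goals can be combined into one loss: standard cross-entropy on retained domains to preserve accuracy, and a term that pushes target-domain predictions toward the uniform distribution to degrade recognition without teaching a specific wrong answer. The function name `adu_loss`, the uniform-target choice, and the weighting parameter `lam` are our assumptions for illustration only.

```python
import numpy as np

def softmax(z):
    """Row-wise softmax with the usual max-shift for numerical stability."""
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def adu_loss(logits, labels, is_target_domain, lam=1.0):
    """Illustrative ADU-style objective (NOT the paper's exact loss).

    logits:           (N, C) classifier outputs
    labels:           (N,)   ground-truth class indices
    is_target_domain: (N,)   True for samples from the domain to forget
    lam:              weight of the forgetting term (assumed hyperparameter)
    """
    p = softmax(logits)
    n, n_classes = logits.shape
    # Retained domains: ordinary cross-entropy keeps accuracy intact.
    ce = -np.log(p[np.arange(n), labels] + 1e-12)
    # Target domain: KL(uniform || p) up to a constant, i.e.
    # mean(-log p) - log C, which is zero iff p is uniform.
    forget = -np.log(p + 1e-12).mean(axis=1) - np.log(n_classes)
    return np.where(is_target_domain, lam * forget, ce).mean()
```

As the abstract notes, this kind of naive per-domain penalty is exactly what fails when domain distributions are entangled in the VLM's feature space; the paper's contribution is to first disentangle those distributions so that such domain-level pressure can be applied selectively.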