Boosting Domain Incremental Learning: Selecting the Optimal Parameters is All You Need

📅 2025-05-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing parameter-isolation-based domain incremental learning (PIDIL) methods suffer from inaccurate domain-parameter selection under distribution shifts in dynamic, multi-domain, multi-class scenarios, which degrades deep neural network performance. Method: a lightweight, efficient, and scalable PIDIL framework with three core components: (i) a Gaussian Mixture Compressor (GMC) that explicitly models domain-wise feature distributions; (ii) a Domain Feature Resampler (DFR) that mitigates sample bias across domains; and (iii) a Multi-level Domain Feature Fusion Network (MDFN) that enables fine-grained parameter isolation. The framework integrates Gaussian mixture modeling, feature resampling, and parameter-efficient fine-tuning (PEFT). Contribution/Results: significant improvements over state-of-the-art approaches on six benchmarks spanning image classification, object detection, and speech enhancement, demonstrating strong cross-task, multi-domain continual adaptability and superior generalization.
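The core selection idea described above (store a compact per-domain feature distribution, then route a test sample to the parameter set of the domain that best explains it) can be sketched as follows. This is an illustrative simplification, not the paper's implementation: it uses a single diagonal Gaussian per domain rather than a full Gaussian mixture, and all function names (`fit_diag_gaussian`, `select_domain`) are hypothetical.

```python
import math

def fit_diag_gaussian(features):
    """Fit a diagonal Gaussian (per-dimension mean and variance) to a
    list of feature vectors; this is the 'compressed' domain summary."""
    d, n = len(features[0]), len(features)
    mean = [sum(f[i] for f in features) / n for i in range(d)]
    var = [sum((f[i] - mean[i]) ** 2 for f in features) / n + 1e-6
           for i in range(d)]
    return mean, var

def log_likelihood(x, mean, var):
    """Log-density of feature vector x under a diagonal Gaussian."""
    return sum(
        -0.5 * (math.log(2 * math.pi * v) + (xi - m) ** 2 / v)
        for xi, m, v in zip(x, mean, var)
    )

def select_domain(x, domain_models):
    """Route x to the stored domain whose distribution best explains it;
    the chosen domain's isolated parameters would then be used."""
    return max(domain_models,
               key=lambda dom: log_likelihood(x, *domain_models[dom]))
```

For example, after fitting one model per seen domain, `select_domain(test_feature, models)` returns the domain label whose parameters should be activated; a real mixture model would replace the single Gaussian to capture multi-modal domain features.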

📝 Abstract
Deep neural networks (DNNs) often underperform in real-world, dynamic settings where data distributions change over time. Domain Incremental Learning (DIL) offers a solution by enabling continual model adaptation, with Parameter-Isolation DIL (PIDIL) emerging as a promising paradigm to reduce knowledge conflicts. However, existing PIDIL methods struggle with parameter selection accuracy, especially as the number of domains and corresponding classes grows. To address this, we propose SOYO, a lightweight framework that improves domain selection in PIDIL. SOYO introduces a Gaussian Mixture Compressor (GMC) and Domain Feature Resampler (DFR) to store and balance prior domain data efficiently, while a Multi-level Domain Feature Fusion Network (MDFN) enhances domain feature extraction. Our framework supports multiple Parameter-Efficient Fine-Tuning (PEFT) methods and is validated across tasks such as image classification, object detection, and speech enhancement. Experimental results on six benchmarks demonstrate SOYO's consistent superiority over existing baselines, showcasing its robustness and adaptability in complex, evolving environments. The code will be released at https://github.com/qwangcv/SOYO.
Problem

Research questions and friction points this paper is trying to address.

Improving parameter selection accuracy in domain incremental learning
Enhancing domain feature extraction and data balancing
Boosting performance in dynamic, multi-domain environments
Innovation

Methods, ideas, or system contributions that make the work stand out.

Gaussian Mixture Compressor for compact storage of prior-domain feature distributions
Domain Feature Resampler for balancing data across stored domains
Multi-level Domain Feature Fusion Network for fine-grained domain feature extraction
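The first two contributions fit together: instead of keeping raw samples from earlier domains, only a per-domain distribution is stored, and balanced synthetic features are drawn back from it when training the domain selector. A minimal sketch of that resampling step, assuming the diagonal-Gaussian summaries and the hypothetical function name `resample_features` (not the paper's DFR implementation):

```python
import random

def resample_features(domain_models, per_domain, seed=0):
    """Draw an equal number of synthetic feature vectors from each stored
    per-domain Gaussian (mean, variance lists), so earlier domains are not
    under-represented relative to the newest one."""
    rng = random.Random(seed)  # fixed seed for reproducible draws
    batches = {}
    for dom, (mean, var) in domain_models.items():
        batches[dom] = [
            [rng.gauss(m, v ** 0.5) for m, v in zip(mean, var)]
            for _ in range(per_domain)
        ]
    return batches
```

Drawing the same number of vectors per domain is what removes the sample bias toward recently seen domains; a mixture model would simply sample component-wise with the stored mixing weights.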