Residual Feature Integration is Sufficient to Prevent Negative Transfer

๐Ÿ“… 2025-05-17
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
In transfer learning, negative transfer often arises when pre-trained source-domain features exhibit distributional misalignment with the target domain. To address this, we propose Residual Feature Integration (REFINE): a method that freezes the source-domain representation while jointly optimizing a lightweight target-domain encoder and a shallow fusion networkโ€”thereby enabling domain adaptation without compromising source knowledge retention. Theoretically, we establish the first rigorous guarantee that residual integration strictly prevents negative transfer and derive a tight generalization error bound. Methodologically, REFINE is architecture-agnostic, modality-agnostic, and requires no fine-tuning of the source model. Extensive experiments across vision, natural language, and tabular domains demonstrate that REFINE consistently outperforms state-of-the-art baselines in cross-domain transfer tasks, achieving superior robustness and practical deployability.

Technology Category

Application Category

๐Ÿ“ Abstract
Transfer learning typically leverages representations learned from a source domain to improve performance on a target task. A common approach is to extract features from a pre-trained model and directly apply them for target prediction. However, this strategy is prone to negative transfer where the source representation fails to align with the target distribution. In this article, we propose Residual Feature Integration (REFINE), a simple yet effective method designed to mitigate negative transfer. Our approach combines a fixed source-side representation with a trainable target-side encoder and fits a shallow neural network on the resulting joint representation, which adapts to the target domain while preserving transferable knowledge from the source domain. Theoretically, we prove that REFINE is sufficient to prevent negative transfer under mild conditions, and derive the generalization bound demonstrating its theoretical benefit. Empirically, we show that REFINE consistently enhances performance across diverse application and data modalities including vision, text, and tabular data, and outperforms numerous alternative solutions. Our method is lightweight, architecture-agnostic, and robust, making it a valuable addition to the existing transfer learning toolbox.
Problem

Research questions and friction points this paper is trying to address.

Prevents negative transfer in transfer learning
Combines source and target representations effectively
Improves performance across diverse data modalities
Innovation

Methods, ideas, or system contributions that make the work stand out.

Residual Feature Integration prevents negative transfer
Combines fixed source and trainable target encoders
Lightweight, architecture-agnostic, and robust solution
๐Ÿ”Ž Similar Papers
No similar papers found.