Model Recycling Framework for Multi-Source Data-Free Supervised Transfer Learning

📅 2025-08-04
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenging multi-source, data-free supervised transfer learning setting where source data is inaccessible and only partial pre-trained models—rather than full model weights—are available. Method: We propose a parameter-efficient, source-data-free model reuse framework that operates under both white-box and black-box settings. Leveraging parameter selection and model decomposition, it automatically identifies and reuses relevant substructures across multiple source models to enable cross-domain knowledge transfer. Contribution/Results: To our knowledge, this is the first method supporting data-free, partial-model-access transfer in multi-source scenarios. It substantially reduces reliance on source data and complete model weights, and natively supports the Model-as-a-Service (MaaS) paradigm, facilitating scalable model library construction. Extensive experiments across diverse downstream tasks demonstrate performance on par with or superior to source-data-dependent baselines, validating its effectiveness and practicality.

📝 Abstract
Increasing concerns about data privacy and other difficulties associated with retrieving source data for model training have created the need for source-free transfer learning, in which one only has access to pre-trained models instead of data from the original source domains. This setting introduces many challenges, as existing transfer learning methods typically rely on access to source data, which limits their direct applicability to scenarios where source data is unavailable. Further, practical concerns compound the difficulty, for instance efficiently selecting models for transfer without information about the source data, and transferring without full access to the source models. So motivated, we propose a model recycling framework for parameter-efficient training of models that identifies subsets of related source models to reuse in both white-box and black-box settings. Consequently, our framework makes it possible for Model as a Service (MaaS) providers to build libraries of efficient pre-trained models, thus creating an opportunity for multi-source data-free supervised transfer learning.
Problem

Research questions and friction points this paper is trying to address.

Enables transfer learning without source data access
Selects related source models efficiently for reuse
Supports white-box and black-box model transfer settings
Innovation

Methods, ideas, or system contributions that make the work stand out.

Model recycling framework for transfer learning
Efficient source model selection without data
Works in white-box and black-box settings
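The core idea of data-free source model selection can be illustrated with a minimal sketch. The snippet below is an assumption-laden illustration, not the paper's actual algorithm: it scores each black-box candidate model by how well its outputs separate labels on a small labeled target set (here with a simple nearest-centroid probe), then keeps the top-k models for reuse. The function names and the probing strategy are hypothetical.

```python
import numpy as np

def score_source_model(model, X, y):
    """Score a black-box source model by how well its outputs separate
    the target labels under a nearest-centroid probe. `model` is any
    callable mapping inputs to feature vectors; no source data or model
    weights are needed (hypothetical scoring rule, not from the paper)."""
    feats = model(X)  # black-box access: only the model's outputs are used
    classes = np.unique(y)
    centroids = np.stack([feats[y == c].mean(axis=0) for c in classes])
    # assign each target sample to the nearest class centroid
    dists = ((feats[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
    preds = classes[dists.argmin(axis=1)]
    return float((preds == y).mean())

def select_source_models(models, X, y, k=2):
    """Rank candidate source models on a small labeled target set and
    keep the top-k for downstream reuse (e.g., ensembling or tuning)."""
    scores = [score_source_model(m, X, y) for m in models]
    order = np.argsort(scores)[::-1][:k]
    return [models[i] for i in order], [scores[i] for i in order]
```

In practice the paper's framework additionally exploits parameter selection and model decomposition in the white-box case; the probe above only captures the black-box selection step at a very coarse level.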
Sijia Wang
Department of Electrical and Computer Engineering, Duke University
Ricardo Henao
Duke University
Machine Learning