A Unified Framework for Knowledge Transfer in Bidirectional Model Scaling

📅 2026-03-08
📈 Citations: 1
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the prevailing limitation in knowledge transfer between models of different sizes, where scaling up (S2L) and scaling down (L2S) are typically treated as incompatible tasks lacking a unified framework. To bridge this gap, we propose BoT, the first size-agnostic bidirectional scaling framework that treats model weights as continuous signals and leverages the discrete wavelet transform (DWT) and its inverse (IDWT) to enable parameter-free, computationally efficient knowledge transfer in both directions. Within this framework, S2L and L2S are naturally modeled as signal upsampling and downsampling, with the wavelet decomposition level serving as a dynamic scaling factor. Evaluated on DeiT, BERT, and GPT architectures, BoT achieves state-of-the-art performance on benchmarks such as GLUE and SQuAD while significantly reducing pretraining FLOPs—by up to 67.1% for S2L and 52.8% for L2S.

📝 Abstract
Transferring pre-trained knowledge from a source model to a target model of a different architectural size is a key challenge for flexible and efficient model scaling. However, current parameter-space methods treat Small-to-Large (S2L) and Large-to-Small (L2S) scaling as separate, incompatible problems, focusing on parameter synthesis and selection, respectively. This fragmented perspective has resulted in specialized tools, hindering a unified, bidirectional framework. In this paper, we propose BoT (Bidirectional knowledge Transfer), the first size-agnostic framework to unify S2L and L2S scaling. Our core insight is to treat model weights as continuous signals, where models of different sizes represent distinct discretizations of the transferable knowledge. This multi-resolution perspective directly casts S2L and L2S scaling as the signal processing operations of upsampling and downsampling, naturally leading to the adoption of the Discrete Wavelet Transform (DWT) and its Inverse (IDWT). BoT leverages the recursive nature of wavelets, using the decomposition level as a dynamic scaling factor to bridge disparate model sizes in a parameter-free and computationally efficient manner. Extensive experiments on DeiT, BERT, and GPT demonstrate significant pre-training FLOPs savings (up to 67.1% for S2L, 52.8% for L2S) and state-of-the-art performance on benchmarks like GLUE and SQuAD.
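The abstract's core idea can be sketched concretely: treating a weight tensor as a discrete signal, L2S scaling keeps only the approximation coefficients of a wavelet decomposition, while S2L scaling applies the inverse transform with zero detail coefficients. The sketch below is a minimal illustration of that multi-resolution view using a one-level Haar wavelet; the function names (`shrink`, `grow`) and the restriction to 1-D vectors are illustrative assumptions, not the paper's actual implementation.

```python
import math

SQRT2 = math.sqrt(2.0)

def haar_dwt(x):
    # One-level Haar DWT: pairwise averages (approximation) and
    # pairwise differences (detail), each half the input length.
    a = [(x[2 * i] + x[2 * i + 1]) / SQRT2 for i in range(len(x) // 2)]
    d = [(x[2 * i] - x[2 * i + 1]) / SQRT2 for i in range(len(x) // 2)]
    return a, d

def haar_idwt(a, d):
    # Inverse Haar DWT: reconstruct a signal twice the coefficient length.
    x = []
    for ai, di in zip(a, d):
        x.append((ai + di) / SQRT2)
        x.append((ai - di) / SQRT2)
    return x

def shrink(weights, level=1):
    # L2S as downsampling: apply the DWT `level` times, keeping only
    # the approximation coefficients (the decomposition level acts as
    # the scaling factor).
    for _ in range(level):
        weights, _ = haar_dwt(weights)
    return weights

def grow(weights, level=1):
    # S2L as upsampling: apply the IDWT `level` times with zero detail
    # coefficients, doubling the size at each step.
    for _ in range(level):
        weights = haar_idwt(weights, [0.0] * len(weights))
    return weights
```

For a signal that is constant over each pair, the detail coefficients are zero, so `grow(shrink(w))` reconstructs `w` exactly; in general the round trip is a smoothed approximation. Note this is parameter-free: no learned mapping is involved, only fixed transform coefficients.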
Problem

Research questions and friction points this paper is trying to address.

knowledge transfer
model scaling
bidirectional scaling
parameter-space methods
pre-trained models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Bidirectional knowledge transfer
Model scaling
Discrete Wavelet Transform
Parameter-free transfer
Multi-resolution modeling
Jianlu Shen
School of Computer Science and Engineering, Southeast University, Nanjing, China; Key Laboratory of New Generation Artificial Intelligence Technology and Its Interdisciplinary Applications (Southeast University), Ministry of Education, China
Fu Feng
School of Computer Science and Engineering, Southeast University, Nanjing, China; Key Laboratory of New Generation Artificial Intelligence Technology and Its Interdisciplinary Applications (Southeast University), Ministry of Education, China
Jiaze Xu
School of Computer Science and Engineering, Southeast University, Nanjing, China; Key Laboratory of New Generation Artificial Intelligence Technology and Its Interdisciplinary Applications (Southeast University), Ministry of Education, China
Yucheng Xie
School of Computer Science and Engineering, Southeast University, Nanjing, China; Key Laboratory of New Generation Artificial Intelligence Technology and Its Interdisciplinary Applications (Southeast University), Ministry of Education, China
Jiaqi Lv
Southeast University
Machine Learning
Xin Geng
School of Computer Science and Engineering, Southeast University
Artificial Intelligence; Pattern Recognition; Machine Learning