🤖 AI Summary
This work addresses a key limitation in knowledge transfer between models of different sizes: scaling up (S2L) and scaling down (L2S) are typically treated as incompatible tasks that lack a unified framework. To bridge this gap, we propose BoT, the first size-agnostic bidirectional scaling framework. BoT treats model weights as continuous signals and leverages the discrete wavelet transform (DWT) and its inverse (IDWT) to enable parameter-free, computationally efficient knowledge transfer in both directions. Within this framework, S2L and L2S are naturally modeled as signal upsampling and downsampling, with the wavelet decomposition level serving as a dynamic scaling factor. Evaluated on DeiT, BERT, and GPT architectures, BoT achieves state-of-the-art performance on benchmarks such as GLUE and SQuAD while significantly reducing pre-training FLOPs, by up to 67.1% for S2L and 52.8% for L2S.
📝 Abstract
Transferring pre-trained knowledge from a source model to a target model of a different architectural size is a key challenge for flexible and efficient model scaling. However, current parameter-space methods treat Small-to-Large (S2L) and Large-to-Small (L2S) scaling as separate, incompatible problems, focusing on parameter synthesis and selection, respectively. This fragmented perspective has resulted in specialized tools, hindering a unified, bidirectional framework. In this paper, we propose BoT (Bidirectional knowledge Transfer), the first size-agnostic framework to unify S2L and L2S scaling. Our core insight is to treat model weights as continuous signals, where models of different sizes represent distinct discretizations of the transferable knowledge. This multi-resolution perspective directly casts S2L and L2S scaling as the signal processing operations of upsampling and downsampling, naturally leading to the adoption of the Discrete Wavelet Transform (DWT) and its Inverse (IDWT). BoT leverages the recursive nature of wavelets, using the decomposition level as a dynamic scaling factor to bridge disparate model sizes in a parameter-free and computationally efficient manner. Extensive experiments on DeiT, BERT, and GPT demonstrate significant pre-training FLOPs savings (up to 67.1% for S2L, 52.8% for L2S) and state-of-the-art performance on benchmarks like GLUE and SQuAD.
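The multi-resolution view above can be sketched in a few lines of code. The following is a minimal illustration, not the paper's implementation: it uses Haar wavelet filters for concreteness and operates on a 1-D weight vector, whereas BoT's actual filter choice and handling of full 2-D weight matrices follow the paper. L2S keeps only the low-pass (approximation) band of each DWT level; S2L inverts the transform with zeroed detail bands; the number of levels acts as the dynamic scaling factor of 2 per level.

```python
import math

SQRT2 = math.sqrt(2.0)

def haar_down(x):
    # One Haar DWT level: keep only the approximation (low-pass) band,
    # halving the signal length -- the L2S / downsampling direction.
    return [(x[2 * i] + x[2 * i + 1]) / SQRT2 for i in range(len(x) // 2)]

def haar_up(a):
    # One inverse Haar DWT level with a zeroed detail band, doubling the
    # signal length -- the S2L / upsampling direction.
    out = []
    for v in a:
        out += [v / SQRT2, v / SQRT2]
    return out

def rescale(weights, level, direction):
    # Apply `level` DWT (or IDWT) steps; the decomposition level acts as
    # a dynamic scaling factor of 2**level between source and target sizes.
    step = haar_down if direction == "L2S" else haar_up
    for _ in range(level):
        weights = step(weights)
    return weights

w_large = [float(i) for i in range(8)]     # a "large" weight row
w_small = rescale(w_large, 2, "L2S")       # length 8 -> length 2
w_back  = rescale(w_small, 2, "S2L")       # length 2 -> length 8
```

Both directions are parameter-free: no weights are learned for the transfer itself, and each level is a linear pass over the signal, which is what makes the scheme computationally cheap relative to training-based transfer.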