AI Summary
This work proposes InherNet, an asymmetric low-rank inheritance network architecture that departs from conventional knowledge distillation paradigms by directly inheriting backbone knowledge from the teacher network. Traditional student networks often struggle to fully capture the teacher's performance due to limited capacity; InherNet addresses this limitation by leveraging singular value decomposition (SVD) for weight initialization and employing asymmetric low-rank decomposition to reconstruct the network. This approach preserves model compactness while balancing depth, width, and compression efficiency. Experimental results demonstrate that InherNet significantly outperforms existing student architectures under comparable parameter budgets across both single-modal and multimodal tasks.
Abstract
Knowledge Distillation (KD) has emerged as a powerful technique for model compression, enabling lightweight student networks to benefit from the performance of redundant teacher networks. However, the inherent capacity gap often limits the performance of student networks. Inspired by the expressiveness of pretrained teacher networks, a compelling research question arises: is there a type of network that can not only inherit the teacher's structure but also maximize the inheritance of its knowledge? Furthermore, how does the performance of such an inheriting network compare to that of student networks when both benefit from the same teacher network? To explore these questions, we propose InherNet, a neural network inheritance method that performs asymmetric low-rank decomposition on the teacher's weights and reconstructs a lightweight yet expressive network without significant architectural disruption. By leveraging Singular Value Decomposition (SVD) for initialization to ensure the inheritance of principal knowledge, InherNet effectively balances depth, width, and compression efficiency. Experimental results across unimodal and multimodal tasks demonstrate that InherNet achieves higher performance than student networks of similar parameter sizes. Our findings reveal a promising direction for future research in efficient model compression beyond traditional distillation.
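The core mechanism the abstract describes, initializing an asymmetric low-rank factorization from a truncated SVD of a teacher weight matrix, can be illustrated with a minimal NumPy sketch. This is an assumption-laden illustration, not the paper's implementation: the function name `svd_inherit`, the layer sizes, and the choice to absorb the singular values into the left factor (one common "asymmetric" convention) are all hypothetical.

```python
import numpy as np

def svd_inherit(W: np.ndarray, rank: int):
    """Factor a teacher weight matrix W (m x n) into two smaller
    matrices A (m x rank) and B (rank x n) via truncated SVD,
    so that A @ B is the best rank-`rank` approximation of W."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    # Asymmetric split (illustrative convention): singular values
    # are absorbed entirely into the left factor A.
    A = U[:, :rank] * S[:rank]
    B = Vt[:rank, :]
    return A, B

# Example: replace a hypothetical 512x256 teacher layer with a
# rank-32 pair, cutting parameters from 131072 to (512+256)*32 = 24576.
W = np.random.randn(512, 256)
A, B = svd_inherit(W, rank=32)
err = np.linalg.norm(W - A @ B)  # Frobenius error of the inherited layer
```

By the Eckart–Young theorem, this initialization is the closest rank-32 approximation to the teacher's weights in Frobenius norm, which is one way to read "inheritance of principal knowledge" via SVD; the decomposed pair would then be fine-tuned as part of the reconstructed network.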