HKT: A Biologically Inspired Framework for Modular Hereditary Knowledge Transfer in Neural Networks

📅 2025-08-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the performance limitations of compact neural networks without increasing depth or parameter count, this paper proposes HKT, a biologically inspired structured knowledge transfer framework. HKT employs a three-stage “extract–transfer–mix” pipeline to modularly and selectively migrate task-relevant features from large teacher models to lightweight student models. It introduces Genetic Attention (GA), a novel mechanism that aligns and adaptively fuses source-domain knowledge with the student’s native representations. Operating at the neural module level, HKT preserves model compactness and inference efficiency while significantly improving accuracy. Experiments across diverse vision tasks—including optical flow estimation (Sintel, KITTI), image classification (CIFAR-10), and semantic segmentation (LiTS)—demonstrate consistent superiority over conventional knowledge distillation methods. The framework is particularly effective in resource-constrained deployment scenarios, offering a scalable and biologically grounded approach to efficient model adaptation.
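The page does not reproduce the paper's implementation, but the "extract–transfer–mix" pipeline it describes can be sketched at a high level. The following is a minimal illustrative sketch, not the authors' method: the function names, the top-k channel selection used for Extraction, the linear projection used for Transfer, and the mean-activation softmax gate used for the Genetic Attention Mixture are all assumptions made here for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract(teacher_feat, k):
    # Extraction (assumed criterion): keep the k teacher channels
    # with the highest mean absolute activation.
    idx = np.argsort(-np.abs(teacher_feat).mean(axis=0))[:k]
    return teacher_feat[:, idx]

def transfer(extracted, proj):
    # Transfer (assumed form): linearly project the inherited
    # features into the student's feature space.
    return extracted @ proj

def genetic_attention_mix(inherited, native):
    # Mixture via a "Genetic Attention"-style gate (assumed form):
    # a per-channel softmax weighs inherited vs. native features,
    # so the output is a convex combination of the two.
    logits = np.stack([inherited.mean(axis=0), native.mean(axis=0)])
    gate = np.exp(logits) / np.exp(logits).sum(axis=0, keepdims=True)
    return gate[0] * inherited + gate[1] * native

# Toy run: a 64-channel "teacher" block feeding a 16-channel "student".
teacher_feat = rng.normal(size=(4, 64))   # batch of 4
student_feat = rng.normal(size=(4, 16))
proj = rng.normal(size=(8, 16)) * 0.1     # learned in the real framework
inherited = transfer(extract(teacher_feat, 8), proj)
mixed = genetic_attention_mix(inherited, student_feat)
```

Because the gate weights sum to one per channel, each mixed value lies between the corresponding inherited and native activations, which matches the summary's claim that GA "aligns and adaptively fuses" source knowledge with the student's own representations rather than overwriting them.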

📝 Abstract
A prevailing trend in neural network research suggests that model performance improves with increasing depth and capacity - often at the cost of integrability and efficiency. In this paper, we propose a strategy to optimize small, deployable models by enhancing their capabilities through structured knowledge inheritance. We introduce Hereditary Knowledge Transfer (HKT), a biologically inspired framework for modular and selective transfer of task-relevant features from a larger, pretrained parent network to a smaller child model. Unlike standard knowledge distillation, which enforces uniform imitation of teacher outputs, HKT draws inspiration from biological inheritance mechanisms - such as memory RNA transfer in planarians - to guide a multi-stage process of feature transfer. Neural network blocks are treated as functional carriers, and knowledge is transmitted through three biologically motivated components: Extraction, Transfer, and Mixture (ETM). A novel Genetic Attention (GA) mechanism governs the integration of inherited and native representations, ensuring both alignment and selectivity. We evaluate HKT across diverse vision tasks, including optical flow (Sintel, KITTI), image classification (CIFAR-10), and semantic segmentation (LiTS), demonstrating that it significantly improves child model performance while preserving its compactness. The results show that HKT consistently outperforms conventional distillation approaches, offering a general-purpose, interpretable, and scalable solution for deploying high-performance neural networks in resource-constrained environments.
Problem

Research questions and friction points this paper is trying to address.

Optimizing small models via structured knowledge inheritance
Transferring task-relevant features from large to small networks
Improving performance while preserving model compactness
Innovation

Methods, ideas, or system contributions that make the work stand out.

Biologically inspired modular knowledge transfer
Genetic Attention for selective feature integration
Multi-stage Extraction, Transfer, and Mixture (ETM) process
Yanick Chistian Tchenko
University Paris Saclay, 9 Rue Joliot Curie, 91190 Gif-sur-Yvette, France
Felix Mohr
Universidad de La Sabana, Colombia
Automated Machine Learning · Learning Curves · Causality
Hicham Hadj Abdelkader
University Paris Saclay, 9 Rue Joliot Curie, 91190 Gif-sur-Yvette, France
Hedi Tabia
Professor of Computer Science, Université d'Evry Val d'Essonne, Université Paris Saclay
Computer Vision · Computer Graphics · Machine Learning