🤖 AI Summary
To address the limitations of deep convolutional neural networks—including restricted feature composition capability, slow convergence, and poor interpretability—this paper proposes brain-inspired Hierarchical Residual Networks (HiResNets). HiResNets are the first to incorporate cross-hierarchical direct connectivity mechanisms observed between cortical and subcortical regions in the mammalian brain into the residual learning framework. Specifically, they introduce multi-scale, long-range跨-layer residual connections within deep CNNs, integrated with hierarchical feature compression and reconstruction modules, thereby overcoming the representational bottlenecks inherent in conventional same-layer residual connections. Experimental results demonstrate that HiResNets significantly improve image classification accuracy and accelerate model convergence across multiple ResNet variants. Interpretability analysis further confirms their capacity for hierarchical semantic composition. Collectively, HiResNets establish a novel paradigm for designing biologically plausible deep learning architectures with enhanced functional and mechanistic fidelity to neurobiological principles.
📝 Abstract
We present Hierarchical Residual Networks (HiResNets), deep convolutional neural networks with long-range residual connections between layers at different hierarchical levels. HiResNets draw inspiration on the organization of the mammalian brain by replicating the direct connections from subcortical areas to the entire cortical hierarchy. We show that the inclusion of hierarchical residuals in several architectures, including ResNets, results in a boost in accuracy and faster learning. A detailed analysis of our models reveals that they perform hierarchical compositionality by learning feature maps relative to the compressed representations provided by the skip connections.