🤖 AI Summary
This work investigates the impact of topographic constraints on representation learning in neural networks, comparing weight similarity (WS) and activation similarity (AS) as two spatial regularization strategies within topographic convolutional neural networks. We conduct end-to-end training and systematic ablation experiments to evaluate their effects on classification accuracy, robustness to adversarial perturbations and input degradation, and functional spatial organization, including orientation tuning, receptive field localization, activation variance, and neuronal clustering. Results demonstrate that WS regularization substantially outperforms AS: it maintains high classification accuracy while markedly improving robustness to both weight perturbations and input noise, and it further enhances the spatial clustering of functional units and the specificity of neural responses. To our knowledge, this is the first study to establish WS as the superior topographic constraint, showing that biologically plausible representational structure and strong generalization can emerge jointly.
📝 Abstract
Topographic neural networks are computational models that can simulate the spatial and functional organization of the brain. Topographic constraints can be implemented in multiple ways, with potentially different impacts on the representations a network learns, yet the effects of these different implementations have not been systematically examined. To this end, we compare topographic convolutional neural networks trained with two spatial constraints: Weight Similarity (WS), which pushes neighboring units to develop similar incoming weights, and Activation Similarity (AS), which enforces similarity in unit activations. We evaluate the resulting models on classification accuracy, robustness to weight perturbations and input degradation, and the spatial organization of learned representations. Compared to both AS and standard CNNs, WS provided three main advantages: i) improved robustness to noise, including higher accuracy under weight corruption; ii) greater input sensitivity, reflected in higher activation variance; and iii) stronger functional localization, with units showing similar activations located closer together. In addition, WS produced differences in orientation tuning, symmetry sensitivity, and eccentricity profiles of units, indicating an influence of this spatial constraint on the representational geometry of the network. Our findings suggest that during end-to-end training, WS constraints produce more robust representations than AS or non-topographic CNNs, and that weight-based spatial constraints can shape feature learning and functional organization in biophysically inspired models.
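To make the two constraints concrete, the sketch below shows one plausible NumPy formulation of WS and AS penalties for a layer whose units are arranged on a 2D topographic sheet. The function names, the 4-neighbor adjacency, and the squared-distance penalty are illustrative assumptions, not the paper's exact loss definitions.

```python
import numpy as np

def neighbor_pairs(grid_h, grid_w):
    """Index pairs of horizontally/vertically adjacent units on a
    grid_h x grid_w topographic sheet (units indexed row-major).
    The 4-neighbor topology is an assumption for illustration."""
    pairs = []
    for i in range(grid_h):
        for j in range(grid_w):
            u = i * grid_w + j
            if j + 1 < grid_w:           # right neighbor
                pairs.append((u, u + 1))
            if i + 1 < grid_h:           # bottom neighbor
                pairs.append((u, u + grid_w))
    return pairs

def ws_penalty(W, grid_h, grid_w):
    """Weight Similarity (WS): mean squared distance between the
    incoming weight vectors of neighboring units.
    W has shape (n_units, n_inputs), n_units == grid_h * grid_w."""
    pairs = neighbor_pairs(grid_h, grid_w)
    return float(np.mean([np.sum((W[a] - W[b]) ** 2) for a, b in pairs]))

def as_penalty(A, grid_h, grid_w):
    """Activation Similarity (AS): same neighborhood penalty, but applied
    to activation profiles over a batch. A has shape (n_units, batch)."""
    pairs = neighbor_pairs(grid_h, grid_w)
    return float(np.mean([np.sum((A[a] - A[b]) ** 2) for a, b in pairs]))
```

In training, either penalty would be added to the task loss with a weighting coefficient, e.g. `loss = task_loss + lam * ws_penalty(W, h, w)`; WS regularizes the parameters directly, while AS depends on the input batch through the activations.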