Sensory robustness through top-down feedback and neural stochasticity in recurrent vision models

📅 2025-08-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates the synergistic interaction between top-down feedback and neural stochasticity in recurrent vision models and their joint contribution to perceptual robustness. For image classification, we propose a convolutional recurrent neural network (ConvRNN) that explicitly incorporates both top-down feedback pathways and stochastic neural dynamics, the latter implemented via dropout. Theoretically and empirically, we demonstrate that top-down feedback constrains internal representations to low-dimensional manifolds, while stochasticity mitigates unit co-adaptation—jointly forming a dual regularization mechanism that substantially enhances generalization and robustness. Our model achieves superior performance under diverse perturbations, including additive noise, adversarial attacks, and out-of-distribution inputs. Moreover, it improves the speed–accuracy trade-off during inference. These findings uncover a critical computational role for biologically inspired top-down architecture in artificial vision systems, highlighting its functional significance beyond mere biological plausibility.
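To make the two ingredients concrete—top-down feedback from a higher layer into a lower one, and stochastic unit silencing via dropout—here is a minimal toy sketch of a two-layer recurrent network. All names, dimensions, and weight scales are illustrative assumptions; this is not the paper's actual ConvRNN architecture (which uses convolutional layers), only the feedback-plus-dropout pattern the summary describes.

```python
import numpy as np


def relu(x):
    return np.maximum(x, 0.0)


def dropout(x, p, training, rng):
    # Randomly silence units with probability p (inverted dropout),
    # simulating stochastic neural variability during training.
    if not training or p == 0.0:
        return x
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)


class FeedbackRNN:
    """Hypothetical two-layer recurrent net: layer 2 sends a top-down
    projection (F1) back into layer 1 on the next timestep."""

    def __init__(self, d_in, d1, d2, p_drop=0.2, seed=0):
        self.rng = np.random.default_rng(seed)
        s = 0.1  # illustrative weight scale
        self.W1 = self.rng.normal(0, s, (d1, d_in))  # bottom-up into layer 1
        self.U1 = self.rng.normal(0, s, (d1, d1))    # layer-1 recurrence
        self.F1 = self.rng.normal(0, s, (d1, d2))    # top-down feedback 2 -> 1
        self.W2 = self.rng.normal(0, s, (d2, d1))    # bottom-up into layer 2
        self.U2 = self.rng.normal(0, s, (d2, d2))    # layer-2 recurrence
        self.p = p_drop

    def run(self, x, steps, training=True):
        h1 = np.zeros(self.W1.shape[0])
        h2 = np.zeros(self.W2.shape[0])
        for _ in range(steps):
            # Layer 1 integrates bottom-up input, its own recurrence,
            # and top-down feedback from layer 2's previous state.
            h1 = relu(self.W1 @ x + self.U1 @ h1 + self.F1 @ h2)
            h1 = dropout(h1, self.p, training, self.rng)
            h2 = relu(self.W2 @ h1 + self.U2 @ h2)
            h2 = dropout(h2, self.p, training, self.rng)
        return h2


net = FeedbackRNN(d_in=8, d1=16, d2=10, p_drop=0.2)
out = net.run(np.ones(8), steps=5, training=False)
```

During training, dropout injects noise into both layers while the `F1` pathway carries high-level state back down; at inference (`training=False`) the dynamics are deterministic. The paper's claim, on this reading, is that the combination of these two pathways—not either alone—yields the robustness benefits.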

📝 Abstract
Biological systems leverage top-down feedback for visual processing, yet most artificial vision models succeed in image classification using purely feedforward or recurrent architectures, calling into question the functional significance of descending cortical pathways. Here, we trained convolutional recurrent neural networks (ConvRNNs) on image classification in the presence or absence of top-down feedback projections to elucidate the specific computational contributions of those feedback pathways. We found that ConvRNNs with top-down feedback exhibited a remarkable speed–accuracy trade-off and robustness to noise perturbations and adversarial attacks, but only when they were trained with stochastic neural variability, simulated by randomly silencing single units via dropout. By performing detailed analyses to identify the reasons for such benefits, we observed that feedback information substantially shaped the representational geometry of the post-integration layer, which combines the bottom-up and top-down streams, and this effect was amplified by dropout. Moreover, feedback signals coupled with dropout optimally constrained network activity onto a low-dimensional manifold and encoded object information more efficiently in out-of-distribution regimes, with top-down information stabilizing the representational dynamics at the population level. Together, these findings uncover a dual mechanism for resilient sensory coding. On the one hand, neural stochasticity prevents unit-level co-adaptation, albeit at the cost of more chaotic dynamics. On the other hand, top-down feedback harnesses high-level information to stabilize network activity on compact low-dimensional manifolds.
Problem

Research questions and friction points this paper is trying to address.

Investigates the role of top-down feedback in vision models
Explores the impact of neural stochasticity on model robustness
Analyzes feedback's effect on representational geometry and dynamics
Innovation

Methods, ideas, or system contributions that make the work stand out.

ConvRNNs with top-down feedback enhance robustness
Stochastic neural variability via dropout improves performance
Feedback stabilizes activity on low-dimensional manifolds