CDRL: A Reinforcement Learning Framework Inspired by Cerebellar Circuits and Dendritic Computational Strategies

📅 2026-02-17
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Reinforcement learning often struggles with low sample efficiency, poor robustness, and weak generalization in high-dimensional, partially observable, and noisy environments. This work proposes a novel reinforcement learning architecture inspired by cerebellar circuitry, systematically incorporating cerebellar structural priors and dendritic computation mechanisms as inductive biases for the first time. By leveraging large-scale expansion, sparse connectivity, sparse activation, and nonlinear dendritic modulation, the resulting agent achieves both biological plausibility and high learning efficiency. Empirical evaluations demonstrate that the proposed method significantly outperforms existing approaches across multiple high-dimensional, noisy benchmark tasks, achieving consistent improvements in sample efficiency, robustness, and generalization—while maintaining superior performance even under strict parameter constraints.

Technology Category

Application Category

📝 Abstract
Reinforcement learning (RL) has achieved notable performance in high-dimensional sequential decision-making tasks, yet remains limited by low sample efficiency, sensitivity to noise, and weak generalization under partial observability. Most existing approaches address these issues primarily through optimization strategies, while the role of architectural priors in shaping representation learning and decision dynamics is less explored. Inspired by structural principles of the cerebellum, we propose a biologically grounded RL architecture that incorporate large expansion, sparse connectivity, sparse activation, and dendritic-level modulation. Experiments on noisy, high-dimensional RL benchmarks show that both the cerebellar architecture and dendritic modulation consistently improve sample efficiency, robustness, and generalization compared to conventional designs. Sensitivity analysis of architectural parameters suggests that cerebellum-inspired structures can offer optimized performance for RL with constrained model parameters. Overall, our work underscores the value of cerebellar structural priors as effective inductive biases for RL.
Problem

Research questions and friction points this paper is trying to address.

sample efficiency
noise sensitivity
generalization
partial observability
reinforcement learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

cerebellar-inspired architecture
dendritic modulation
sparse connectivity
sample efficiency
inductive bias
🔎 Similar Papers
No similar papers found.