🤖 AI Summary
Traditional artificial neural networks employ static weights, limiting their adaptability to dynamic inputs, whereas biological neurons exhibit input-dependent plasticity. To bridge this gap, we propose a biologically inspired adaptive neuron mechanism: each synaptic weight is modeled as an input-dependent, differentiable, learnable function, parameterized efficiently and stably via a Chebyshev polynomial expansion that keeps the network trainable end to end. This approach preserves the standard MLP architecture while constituting a rigorous mathematical generalization of it, ensuring both architectural compatibility and neurobiological plausibility. Evaluated across 145 benchmark datasets, it outperforms baseline MLPs on 121 tasks and matches their performance on the remaining 24; the gains are most pronounced on nonlinear and dynamic tasks, where it substantially improves generalization and robustness. Our core contribution is the first explicit parameterization of neuronal weights as input-driven, differentiable functions, leveraging orthogonal polynomials to achieve high representational capacity while mitigating overfitting.
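Neither the summary nor the abstract spells out the expansion itself. One minimal formulation, assuming each weight $w_{ij}$ is expanded in the rescaled input $\hat{x}_j$ it multiplies (the per-input indexing and the rescaling are our assumptions, not the paper's stated form), is:

$$
w_{ij}(x_j) \;=\; \sum_{k=0}^{K} c_{ijk}\, T_k(\hat{x}_j), \qquad
T_0(t) = 1,\quad T_1(t) = t,\quad T_{k+1}(t) = 2t\, T_k(t) - T_{k-1}(t),
$$

where $\hat{x}_j \in [-1, 1]$ is the rescaled $j$-th input, $T_k$ are Chebyshev polynomials of the first kind, and the coefficients $c_{ijk}$ are learned by backpropagation. Setting $c_{ijk} = 0$ for all $k \ge 1$ leaves the constant weight $w_{ij} = c_{ij0}$, i.e., a standard fixed-weight MLP, which is the precise sense in which the model generalizes the MLP.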
📝 Abstract
Traditional neural networks employ fixed weights during inference, limiting their ability to adapt to changing input conditions, unlike biological neurons, which adjust signal strength dynamically in response to stimuli. This discrepancy between artificial and biological neurons constrains the flexibility and adaptability of neural networks. To bridge this gap, we propose a novel framework for adaptive neural networks in which neuron weights are modeled as functions of the input signal, allowing the network to adjust dynamically in real time. Importantly, we achieve this within the traditional architecture of an artificial neural network, maintaining structural familiarity while introducing dynamic adaptability. In our research, we apply Chebyshev polynomials, one of many possible decomposition methods, to realize this adaptive weighting mechanism, with the polynomial coefficients learned during training. Of the 145 datasets tested, our adaptive Chebyshev neural network demonstrated a marked improvement over an equivalent MLP in approximately 83% of cases, performing strictly better on 121 datasets. On the remaining 24 datasets, its performance matched that of the MLP, highlighting its ability to reproduce standard neural network behavior while offering enhanced adaptability. As a generalized form of the MLP, the model retains MLP performance where that suffices while extending its capabilities to achieve superior accuracy on a wide range of complex tasks. These results underscore the potential of adaptive neurons to enhance generalization, flexibility, and robustness in neural networks, particularly in applications with dynamic or non-linear data dependencies.
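To make the mechanism concrete, below is a minimal PyTorch sketch of such an input-adaptive layer. It is an illustration under the assumptions above, not the authors' implementation: the class name ChebyshevAdaptiveLinear, the tanh rescaling into [-1, 1], and the degree hyperparameter are all hypothetical choices.

```python
import torch
import torch.nn as nn

class ChebyshevAdaptiveLinear(nn.Module):
    """Hypothetical linear layer whose weight matrix is an input-dependent
    Chebyshev expansion; a sketch, not the paper's reference code."""

    def __init__(self, in_features: int, out_features: int, degree: int = 4):
        super().__init__()
        self.degree = degree
        # coeffs[k] holds the order-k coefficient matrix; order 0 alone
        # reduces the layer to an ordinary fixed-weight linear map.
        self.coeffs = nn.Parameter(torch.zeros(degree + 1, out_features, in_features))
        nn.init.xavier_uniform_(self.coeffs[0])  # start out as a plain MLP layer
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Squash each input into [-1, 1], the interval on which Chebyshev
        # polynomials are orthogonal and numerically well behaved.
        t = torch.tanh(x)                # (batch, in)
        T_prev = torch.ones_like(t)      # T_0(t) = 1
        T_curr = t                       # T_1(t) = t
        # Accumulate the per-example weight tensor W(x): (batch, out, in).
        W = self.coeffs[0] * T_prev.unsqueeze(1)
        if self.degree >= 1:
            W = W + self.coeffs[1] * T_curr.unsqueeze(1)
        for k in range(2, self.degree + 1):
            # Recurrence: T_{k+1}(t) = 2 t T_k(t) - T_{k-1}(t).
            T_prev, T_curr = T_curr, 2 * t * T_curr - T_prev
            W = W + self.coeffs[k] * T_curr.unsqueeze(1)
        # Input-dependent affine map: y_i = sum_j W_ij(x) * x_j + b_i.
        return torch.einsum("boi,bi->bo", W, x) + self.bias
```

With degree=0 the layer behaves (up to initialization) like an ordinary nn.Linear, matching the abstract's claim that the model subsumes the MLP; higher degrees add learned, input-dependent corrections to every weight, e.g. `ChebyshevAdaptiveLinear(8, 4, degree=3)(torch.randn(32, 8))` yields a (32, 4) output.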