What should a neuron aim for? Designing local objective functions based on information theory

📅 2024-12-03
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
📄 PDF
🤖 AI Summary
Artificial neurons lack biologically plausible local learning objectives, hindering autonomous, interpretable, and robust information processing. Method: This paper introduces a neuron-level local objective function grounded in Partial Information Decomposition (PID), enabling each neuron to autonomously quantify the unique, redundant, and synergistic contributions of its feedforward, feedback, and lateral inputs to its output, thereby supporting task-driven selection of information-integration strategies. Contribution/Results: As the first work to incorporate PID into the design of local learning objectives, the approach enables interpretable, gradient-free local optimization while preserving high performance and mechanistic transparency. Experiments demonstrate that the method maintains model accuracy while significantly enhancing neuron-level interpretability and self-organization. By decoupling learning from global error signals, it establishes a new paradigm for brain-inspired adaptive learning.

📝 Abstract
In modern deep neural networks, the learning dynamics of individual neurons are often obscure, as the networks are trained via global optimization. Conversely, biological systems build on self-organized, local learning, achieving robustness and efficiency with limited global information. Here we show how self-organization between individual artificial neurons can be achieved by designing abstract, bio-inspired local learning goals. These goals are parameterized using a recent extension of information theory, Partial Information Decomposition (PID), which decomposes the information that a set of sources holds about an outcome into unique, redundant, and synergistic contributions. Our framework enables neurons to locally shape how they integrate information from the different input classes, i.e. feedforward, feedback, and lateral, by selecting which of the three inputs should contribute uniquely, redundantly, or synergistically to the output. This selection is expressed as a weighted sum of PID terms which, for a given problem, can be derived directly from intuitive reasoning or via numerical optimization, offering a window into task-relevant local information processing. By achieving neuron-level interpretability while enabling strong performance through local learning, our work advances a principled information-theoretic foundation for local learning strategies.
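The "weighted sum of PID terms" idea can be sketched numerically. The paper does not specify a particular PID estimator here, so the sketch below uses the simple minimal-mutual-information (MMI) redundancy measure for two discrete sources; the source names, the XOR toy data, and the goal-function weights are all illustrative assumptions, not the paper's actual setup.

```python
import math
from collections import Counter

def mutual_info(pairs):
    """I(X;Y) in bits, estimated from a list of (x, y) samples."""
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def pid_mmi(x1, x2, y):
    """PID atoms for two sources under the simple MMI redundancy measure."""
    i1 = mutual_info(list(zip(x1, y)))
    i2 = mutual_info(list(zip(x2, y)))
    i12 = mutual_info(list(zip(zip(x1, x2), y)))  # joint source (x1, x2)
    red = min(i1, i2)                             # MMI redundancy
    return {"unique1": i1 - red, "unique2": i2 - red,
            "redundant": red, "synergistic": i12 - i1 - i2 + red}

# XOR toy target: neither input alone is informative, but together
# they fully determine the output, so all information is synergistic.
x1 = [0, 0, 1, 1]
x2 = [0, 1, 0, 1]
y = [a ^ b for a, b in zip(x1, x2)]
atoms = pid_mmi(x1, x2, y)

# A local goal as a weighted sum of PID atoms (illustrative weights):
# reward synergistic integration, penalize redundant copying.
goal = 1.0 * atoms["synergistic"] - 0.5 * atoms["redundant"]
```

For the XOR data the unique and redundant atoms vanish and the synergy is 1 bit, so a synergy-rewarding goal is maximal; in the paper's framework such weights would instead be chosen per task, over feedforward, feedback, and lateral inputs.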
Problem

Research questions and friction points this paper is trying to address.

Artificial Neurons
Local Learning Objectives
Biological Intelligence
Innovation

Methods, ideas, or system contributions that make the work stand out.

Partial Information Decomposition
Biologically-inspired Learning Objectives
Autonomous Neural Development