🤖 AI Summary
This work investigates how the brain achieves rapid task switching while retaining previously acquired knowledge in dynamic environments. We propose a linear gated network model whose gating units have fast timescales, non-negativity, and bounded activity; these units co-evolve with the synaptic weights under joint gradient-based optimization, driving both weight modularization and task-specific gating representations. We identify a “protective specialization” mechanism: the gating layer selectively activates weight submodules that capture task-relevant computations, rather than globally rewriting parameters. Using a block-structured curriculum and a reduction of the learning dynamics to a low-dimensional effective space, we show that the model supports compositional generalization across tasks and that task switching accelerates with curriculum block size and task training. The mechanism remains robust in a nonlinear network switching between two tasks. Our findings establish a computationally grounded framework for understanding the modular adaptability underlying biological intelligence.
📝 Abstract
Animals survive in dynamic environments changing at arbitrary timescales, but such data distribution shifts are a challenge to neural networks. To adapt to change, neural systems may update a large number of parameters, a slow process that entails forgetting past information. In contrast, animals leverage distribution changes to segment their stream of experience into tasks and associate them with internal task abstractions. Animals can then respond flexibly by selecting the appropriate task abstraction. However, how such flexible task abstractions may arise in neural systems remains unknown. Here, we analyze a linear gated network where the weights and gates are jointly optimized via gradient descent, but with neuron-like constraints on the gates including a faster timescale, nonnegativity, and bounded activity. We observe that the weights self-organize into modules specialized for the tasks or sub-tasks encountered, while the gating layer forms unique representations that switch in the appropriate weight modules (task abstractions). We analytically reduce the learning dynamics to an effective eigenspace, revealing a virtuous cycle: fast-adapting gates drive weight specialization by protecting previous knowledge, while weight specialization in turn increases the update rate of the gating layer. Task switching in the gating layer accelerates as a function of curriculum block size and task training, mirroring key findings in cognitive neuroscience. We show that the discovered task abstractions support generalization through both task and subtask composition, and we extend our findings to a non-linear network switching between two tasks. Overall, our work offers a theory of cognitive flexibility in animals as arising from joint gradient descent on synaptic weights and neural gates in a neural network architecture.
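To make the setup concrete, here is a minimal sketch (not the authors' released code) of the core idea: a linear gated network y = Σₖ gₖ·(Wₖx) in which the gates g and weights W are trained jointly by gradient descent on the same loss, with the gates updated on a faster timescale and constrained to be non-negative and bounded (clipped to [0, 1] here). The module count, dimensions, learning rates, and the toy two-task curriculum are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, n_modules = 5, 5, 2                            # illustrative dimensions
W = rng.normal(scale=0.1, size=(n_modules, d_out, d_in))    # slow synaptic weights
g = np.full(n_modules, 0.5)                                 # fast, bounded gating units
lr_w, lr_g = 1e-2, 1e-1                                     # gates adapt on a faster timescale

# Toy curriculum: alternate blocks of two linear "teacher" tasks.
teachers = [rng.normal(size=(d_out, d_in)) for _ in range(2)]

for block in range(20):
    task = block % 2
    for _ in range(500):
        x = rng.normal(size=d_in)
        y_target = teachers[task] @ x
        y = sum(g[k] * W[k] @ x for k in range(n_modules))  # gated linear readout
        err = y - y_target                                  # dL/dy for L = 0.5 * ||y - y_target||^2
        for k in range(n_modules):
            grad_g = err @ (W[k] @ x)                       # dL/dg_k = err · (W_k x)
            grad_W = g[k] * np.outer(err, x)                # dL/dW_k = g_k * err x^T
            g[k] -= lr_g * grad_g
            W[k] -= lr_w * grad_W
        g = np.clip(g, 0.0, 1.0)                            # neuron-like constraints: g_k in [0, 1]
    print(f"block {block} (task {task}): gates = {np.round(g, 2)}")
```

In the regime the abstract describes, the faster timescale and the [0, 1] bound let the gates absorb most of the change at a task switch, shutting off modules that are not needed and thereby protecting the specialized weights from being overwritten; the exact parameterization and curriculum in the paper may differ from this sketch.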