AI Summary
Problem: It remains unclear whether cognitive evolution depends on qualitative shifts in information flow induced by major topological reconfigurations of neural architectures.
Method: We systematically compared feedforward, recurrent, and hierarchical (laminated) artificial neural networks, matched for scale and computational resources, on an artificial grammar learning task spanning varying syntactic complexities.
Contribution/Results: Only recurrent networks exhibited a qualitative performance transition, an abrupt emergence of stable generalization, from failure to success under high-complexity grammars. This transition coincided with sharply increased training difficulty and stochastic, irreversible convergence dynamics, paralleling evolutionary "critical transitions" or fitness valleys. Hierarchical connectivity yielded no analogous effect. Our findings constitute the first controlled computational demonstration that topology-specific information-flow reconfiguration can independently drive qualitative expansions in cognitive capacity. This provides a testable mechanistic account of cognitive evolutionary leaps, linking network architecture to emergent computational capabilities without requiring changes in scale or learning algorithm.
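Since the comparison rests on matching topologies for scale, the following minimal PyTorch sketch shows one way a parameter-matched feed-forward vs. recurrent comparison could be set up for binary grammaticality judgements. It is illustrative only: the paper's actual architectures, sizes, and training details are not reproduced here, and every class name and hyperparameter below is an assumption.

```python
# Illustrative sketch, not the paper's implementation: model names and
# hidden sizes are assumptions, chosen only to equalise parameter counts.
import torch
import torch.nn as nn

class FeedForwardClassifier(nn.Module):
    """Judges fixed-length symbol strings with no recurrence."""
    def __init__(self, vocab_size, seq_len, hidden, n_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.net = nn.Sequential(
            nn.Flatten(),                        # concatenate all positions
            nn.Linear(seq_len * hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, x):                        # x: (batch, seq_len) int64
        return self.net(self.embed(x))

class RecurrentClassifier(nn.Module):
    """Judges the same strings through a recurrent state bottleneck."""
    def __init__(self, vocab_size, hidden, n_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.rnn = nn.RNN(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):
        _, h = self.rnn(self.embed(x))           # h: final hidden state
        return self.head(h.squeeze(0))

def n_params(model):
    return sum(p.numel() for p in model.parameters())

ff = FeedForwardClassifier(vocab_size=5, seq_len=12, hidden=32)
rec = RecurrentClassifier(vocab_size=5, hidden=64)
print(n_params(ff), n_params(rec))  # tune `hidden` until the counts roughly match
```

The design point is that only the connectivity differs: both models see identical inputs and emit identical outputs, so any qualitative performance gap can be attributed to topology rather than to scale or learning algorithm.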
Abstract
Transitional accounts of evolution emphasise a few changes that shape what is evolvable, with dramatic consequences for derived lineages. More recently it has been proposed that cognition might also have evolved via a series of major transitions that manipulate the structure of biological neural networks, fundamentally changing the flow of information. We used idealised models of information flow, artificial neural networks (ANNs), to evaluate whether changes in information flow in a network can yield a transitional change in cognitive performance. We compared networks with feed-forward, recurrent and laminated topologies, and tested their performance at learning artificial grammars of differing complexity, controlling for network size and resources. We documented a qualitative expansion in the types of input that recurrent networks can process compared to feed-forward networks, and a related qualitative increase in performance when learning the most complex grammars. We also noted how the difficulty of training recurrent networks poses a form of transition barrier and contingent irreversibility, two other key features of evolutionary transitions. Not all changes in network topology confer a performance advantage in this task set: laminated networks did not outperform non-laminated networks in grammar learning. Overall, our findings show how some changes in information flow can yield transitions in cognitive performance.
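As an illustration of what "grammars that differed in complexity" can mean, a standard contrast in this literature is between a regular language and a context-free language such as a^n b^n, whose acceptance requires matching unbounded counts, something naturally carried by recurrent state. The generators below are hypothetical stand-ins for such stimuli, not the paper's actual grammar set.

```python
# Hypothetical stimulus generators; the paper's grammar set is not shown here.
import random

def sample_regular(max_len=12):
    """Strings from the regular language (ab)+ -- no unbounded counting needed."""
    n = random.randint(1, max_len // 2)
    return "ab" * n

def sample_context_free(max_len=12):
    """Strings from a^n b^n -- acceptance requires matching the two counts."""
    n = random.randint(1, max_len // 2)
    return "a" * n + "b" * n

def sample_foil(max_len=12):
    """Near-miss negatives with deliberately mismatched counts."""
    n = random.randint(1, max_len // 2 - 1)
    return "a" * n + "b" * (n + 1)

for gen in (sample_regular, sample_context_free, sample_foil):
    print(gen.__name__, [gen() for _ in range(3)])
```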