Exploring Major Transitions in the Evolution of Biological Cognition With Artificial Neural Networks

📅 2025-09-17
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
It remains unclear whether cognitive evolution depends on qualitative shifts in information flow induced by major topological reconfigurations of neural architectures. Method: We systematically compared feed-forward, recurrent, and laminated artificial neural networks, matched for scale and computational resources, on an artificial grammar learning task spanning a range of syntactic complexities. Contribution/Results: Only recurrent networks exhibited a qualitative performance transition, an abrupt shift from failure to stable generalization on high-complexity grammars. This transition coincided with sharply increased training difficulty and stochastic, irreversible convergence dynamics, paralleling evolutionary "critical transitions" or fitness-valley crossings. Laminated connectivity yielded no analogous effect. These findings constitute the first controlled computational demonstration that topology-specific reconfiguration of information flow can independently drive qualitative expansions in cognitive capacity. This provides a testable mechanistic account of cognitive evolutionary leaps, linking network architecture to emergent computational capabilities without requiring changes in scale or learning algorithm.

📝 Abstract
Transitional accounts of evolution emphasise a few changes that shape what is evolvable, with dramatic consequences for derived lineages. More recently it has been proposed that cognition might also have evolved via a series of major transitions that manipulate the structure of biological neural networks, fundamentally changing the flow of information. We used idealised models of information flow, artificial neural networks (ANNs), to evaluate whether changes in information flow in a network can yield a transitional change in cognitive performance. We compared networks with feed-forward, recurrent and laminated topologies, and tested their performance in learning artificial grammars that differed in complexity, controlling for network size and resources. We documented a qualitative expansion in the types of input that recurrent networks can process compared to feed-forward networks, and a related qualitative increase in performance when learning the most complex grammars. We also noted how the difficulty of training recurrent networks poses a form of transition barrier and contingent irreversibility, two other key features of evolutionary transitions. Not all changes in network topology confer a performance advantage in this task set: laminated networks did not outperform non-laminated networks in grammar learning. Overall, our findings show how some changes in information flow can yield transitions in cognitive performance.
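The qualitative gap the abstract describes between feed-forward and recurrent information flow can be illustrated with a toy example (a hypothetical sketch, not the paper's actual models or training setup). A feed-forward network sees only a bounded window of context, so it can check local symbol order but cannot count; a recurrent network carries a hidden state across the whole string, so it can verify membership in a context-free grammar such as aⁿbⁿ at any length. The `feedforward_check` and `recurrent_check` functions below are hand-written stand-ins for what each architecture could at best learn:

```python
def gen_anbn(n):
    """Generate a string of the context-free grammar a^n b^n."""
    return "a" * n + "b" * n

def feedforward_check(s, window=3):
    """Stand-in for a feed-forward net with a fixed input window.
    It can verify local order (no 'a' after a 'b') but cannot count,
    so it wrongly accepts unbalanced strings longer than its window."""
    if not s:
        return True
    # Local order check: 'a' must never follow 'b'.
    for i in range(len(s) - 1):
        if s[i] == "b" and s[i + 1] == "a":
            return False
    # Counting is only possible for strings that fit in the window.
    if len(s) <= 2 * window:
        return s.count("a") == s.count("b")
    return True  # beyond the window, symbol counts are invisible

def recurrent_check(s):
    """Stand-in for a recurrent net: one hidden state (a counter)
    carried across the whole string, so balance is checked at any length."""
    state, seen_b = 0, False
    for ch in s:
        if ch == "a":
            if seen_b:          # 'a' after 'b' is ungrammatical
                return False
            state += 1
        else:
            seen_b = True
            state -= 1
            if state < 0:       # more b's than a's so far
                return False
    return state == 0

# Short strings: both checkers agree.
print(feedforward_check(gen_anbn(2)), recurrent_check(gen_anbn(2)))
# Long unbalanced string: the fixed-window checker is fooled,
# the recurrent one is not.
bad = "a" * 10 + "b" * 7
print(feedforward_check(bad), recurrent_check(bad))
```

This is only a capacity argument, not a training result: the paper's point is that recurrent networks can in principle represent such counters, while actually learning them is hard, which is what creates the transition barrier.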
Problem

Research questions and friction points this paper is trying to address.

Evaluating how changes in network topology affect cognitive performance
Comparing feed-forward, recurrent, and laminated neural network architectures
Testing artificial grammar learning across these architectures
Innovation

Methods, ideas, or system contributions that make the work stand out.

Used artificial neural networks as idealised models of cognitive evolution
Compared feed-forward, recurrent, and laminated network topologies under matched size and resources
Showed that recurrent topology enables a qualitative jump in grammar-learning performance
Konstantinos Voudouris
Postdoctoral Research Scientist, Helmholtz Munich
AI Evaluation · Cognitive Science · Philosophy of Science · Linguistics

Andrew Barron
School of Natural Sciences, Macquarie University, Sydney, Australia

Marta Halina
Department of History and Philosophy of Science, University of Cambridge, Cambridge, UK

Colin Klein
Australian National University
Philosophy

Matishalin Patel
University of Hull
Evolutionary theory · Artificial life · AI · Inclusive fitness