🤖 AI Summary
This work investigates the mechanisms underlying abrupt learning in neural networks, focusing on how task, network architecture, and learning dynamics jointly induce training instability. We propose a one-dimensional analytical dynamical model that identifies "ghost points": bifurcation-free slow regions that destabilize learning dynamics and trigger abrupt performance transitions. Two loss-landscape features are characterized: no-learning zones (regions of vanishing gradient) and oscillatory minima, and a critical learning rate above which learning fails is identified. RNN experiments confirm that ghost points emerge before the onset of abrupt learning and accompany its destabilization. Two complementary remedies are demonstrated: lowering output confidence keeps the network out of no-learning zones, and adding redundant ("sloppy") trainable parameters yields more stable learning trajectories.
📝 Abstract
*Abrupt learning* is commonly observed in neural networks, where long plateaus in network performance are followed by rapid convergence to a desirable solution. Yet, despite its common occurrence, the complex interplay of task, network architecture, and learning rule has made it difficult to understand the underlying mechanisms. Here, we introduce a minimal dynamical system trained on a delayed-activation task and demonstrate analytically how even a one-dimensional system can exhibit abrupt learning through ghost points rather than bifurcations. Through our toy model, we show that the emergence of a ghost point destabilizes the learning dynamics. We identify a critical learning rate beyond which learning fails, mediated by two distinct loss-landscape features: a no-learning zone and an oscillatory minimum. Testing these predictions in recurrent neural networks (RNNs), we confirm that ghost points precede abrupt learning and accompany the destabilization of learning. We demonstrate two complementary remedies: lowering the model's output confidence prevents the network from getting stuck in no-learning zones, while increasing trainable ranks beyond task requirements (*i.e.*, adding sloppy parameters) provides more stable learning trajectories. Our model reveals a bifurcation-free mechanism for abrupt learning and illustrates the importance of both deliberate uncertainty and redundancy in stabilizing learning dynamics.
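The ghost-point mechanism the abstract describes can be illustrated with the standard saddle-node normal form from dynamical systems theory (this sketch is not the paper's delayed-activation model; the choice of `mu`, step size, and thresholds are illustrative assumptions). Just past a saddle-node bifurcation, `dx/dt = mu + x**2` with small `mu > 0` has no fixed point, yet trajectories crawl through the region near `x = 0` where the vanished fixed point used to be. This "ghost" produces exactly the signature of abrupt learning: a long plateau followed by rapid escape.

```python
import numpy as np

def simulate(mu=1e-3, x0=-1.0, dt=1e-2, max_steps=200_000):
    """Euler-integrate dx/dt = mu + x**2, the saddle-node normal form.

    For small mu > 0 there is no fixed point, but the 'ghost' of the
    fixed point at x = 0 makes the dynamics extremely slow there.
    """
    x, traj = x0, []
    for _ in range(max_steps):
        x += dt * (mu + x**2)  # forward-Euler step of the normal form
        traj.append(x)
        if x > 1.0:            # escaped the slow region: abrupt transition
            break
    return np.array(traj)

traj = simulate()
# Fraction of the trajectory spent near the ghost (|x| < 0.1):
# the slow passage dominates total time, mirroring the long plateau
# in network performance before abrupt convergence.
near_ghost = float(np.mean(np.abs(traj) < 0.1))
```

The passage time near the ghost scales as `1/sqrt(mu)`, so the plateau can be made arbitrarily long while the eventual escape remains fast, with no bifurcation ever occurring.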