KPFlow: An Operator Perspective on Dynamic Collapse Under Gradient Descent Training of Recurrent Networks

📅 2025-07-08
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the lack of theoretical understanding of gradient-descent-induced neural collapse and the emergence of low-dimensional latent dynamics in recurrent dynamical systems (e.g., RNNs, Neural ODEs, GRUs). The authors propose an operator decomposition framework that factorizes the gradient flow into the product of a Parameter Operator K and a Linearized Flow Propagator P. This decomposition unifies the analysis of dynamic collapse and multi-task objective alignment, yielding an analytically tractable account of representation learning in nonlinear recurrent models. Integrating insights from the neural tangent kernel, Lyapunov stability theory, and optimal control, the authors develop KPFlow, a publicly available PyTorch toolkit for quantitative operator analysis of general recurrent architectures. Theoretical analysis and empirical experiments together confirm the decisive roles of K and P in single-task collapse and multi-task coordination, making the framework a generalizable, computationally feasible approach for analyzing learned dynamic representations in recurrent networks.
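To make the K·P factorization concrete, below is a minimal sketch, assuming a toy vanilla tanh RNN, of how the two objects can be formed: P as the product of state-to-state Jacobians along a trajectory (the linearized propagator familiar from Lyapunov analysis), and K as an NTK-style Gram matrix built from parameter sensitivities. This is not the KPFlow API; every function and variable name here is an illustrative assumption.

```python
# Illustrative sketch only (not the KPFlow API): forming a linearized flow
# propagator P and an NTK-style parameter Gram matrix K for a toy tanh RNN.
import torch
from torch.func import jacrev

torch.manual_seed(0)
n, T = 8, 5                          # hidden size, sequence length
W = torch.randn(n, n) / n**0.5       # recurrent weights (the parameters here)
x = torch.randn(T, n)                # fixed input sequence

def rnn_step(W, h, x_t):
    """One step of a vanilla RNN: h_{t+1} = tanh(W h_t + x_t)."""
    return torch.tanh(h @ W.T + x_t)

# Forward pass, storing the hidden-state trajectory.
hs = [torch.zeros(n)]
for t in range(T):
    hs.append(rnn_step(W, hs[-1], x[t]))

# P: product of state-to-state Jacobians dh_{t+1}/dh_t along the trajectory,
# i.e. the linearized propagator that also appears in Lyapunov stability theory.
P = torch.eye(n)
for t in range(T):
    J_t = jacrev(rnn_step, argnums=1)(W, hs[t], x[t])   # (n, n)
    P = J_t @ P

# K: Gram matrix of the final state's sensitivity to the parameters,
# mirroring the Neural Tangent Kernel of feed-forward networks.
def final_state(W):
    h = torch.zeros(n)
    for t in range(T):
        h = rnn_step(W, h, x[t])
    return h

J_theta = jacrev(final_state)(W).reshape(n, -1)   # dh_T/dW, shape (n, n*n)
K = J_theta @ J_theta.T                           # (n, n) parameter operator

print("top singular values of P:", torch.linalg.svdvals(P)[:3])
print("top eigenvalues of K:   ", torch.linalg.eigvalsh(K)[-3:])
```

Tracking the spectra of P and K over training is one way to observe the emergence of the low-dimensional latent dynamics described above.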

📝 Abstract
Gradient Descent (GD) and its variants are the primary tool for enabling efficient training of recurrent dynamical systems such as Recurrent Neural Networks (RNNs), Neural ODEs and Gated Recurrent Units (GRUs). The dynamics that are formed in these models exhibit features such as neural collapse and emergence of latent representations that may support the remarkable generalization properties of networks. In neuroscience, qualitative features of these representations are used to compare learning in biological and artificial systems. Despite recent progress, there remains a need for theoretical tools to rigorously understand the mechanisms shaping learned representations, especially in finite, non-linear models. Here, we show that the gradient flow, which describes how the model's dynamics evolve over GD, can be decomposed into a product that involves two operators: a Parameter Operator, K, and a Linearized Flow Propagator, P. K mirrors the Neural Tangent Kernel in feed-forward neural networks, while P appears in Lyapunov stability and optimal control theory. We demonstrate two applications of our decomposition. First, we show how their interplay gives rise to low-dimensional latent dynamics under GD, and, specifically, how the collapse is a result of the network structure, over and above the nature of the underlying task. Second, for multi-task training, we show that the operators can be used to measure how objectives relevant to individual sub-tasks align. We experimentally and theoretically validate these findings, providing an efficient PyTorch package, KPFlow, that implements robust analysis tools for general recurrent architectures. Taken together, our work moves towards building a next stage of understanding of GD learning in non-linear recurrent models.
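As a rough illustration of the second application (multi-task objective alignment), one hedged way to compare sub-task objectives is to pull each sub-task's output-space gradient back to parameter space through the same sensitivity operator and compare directions. The toy losses and the cosine-similarity metric below are illustrative assumptions, not necessarily the paper's exact alignment measure.

```python
# Hedged sketch: comparing how two sub-task objectives align when pulled back
# to parameter space through the same sensitivity operator. The toy losses and
# the cosine metric are illustrative assumptions, not the paper's definition.
import torch
from torch.func import jacrev

torch.manual_seed(1)
n, T = 8, 5
W = torch.randn(n, n) / n**0.5
x = torch.randn(T, n)

def final_state(W):
    h = torch.zeros(n)
    for t in range(T):
        h = torch.tanh(h @ W.T + x[t])
    return h

h_T = final_state(W)
J_theta = jacrev(final_state)(W).reshape(n, -1)     # dh_T/dW, shape (n, n*n)

# Two toy sub-task losses defined on the same final hidden state (MSE-style).
target_a, target_b = torch.randn(n), torch.randn(n)
g_a = 2 * (h_T - target_a)                          # dL_a/dh_T
g_b = 2 * (h_T - target_b)                          # dL_b/dh_T

# Pull each output-space gradient back to parameter space and compare directions.
flow_a = J_theta.T @ g_a
flow_b = J_theta.T @ g_b
alignment = torch.nn.functional.cosine_similarity(flow_a, flow_b, dim=0)
print(f"sub-task gradient-flow alignment: {alignment.item():+.3f}")
```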
Problem

Research questions and friction points this paper is trying to address.

Understanding gradient descent dynamics in recurrent networks
Analyzing latent representation formation under training
Measuring multi-task objective alignment in learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decomposes gradient flow into K and P operators
Explains low-dimensional latent dynamics under GD
Measures multi-task objective alignment via operators
James Hazelden
Department of Applied Mathematics, University of Washington, Seattle, WA 98195; Allen Institute, Seattle, WA 98109
Laura Driscoll
Department of Neurobiology & Biophysics, University of Washington, Seattle, WA 98195; Allen Institute, Seattle, WA 98109
Eli Shlizerman
Associate Professor of AMATH & ECE, WRF Professor, University of Washington
Neural Networks, AI Systems, Systems Neuroscience, Data-driven Modeling, Dynamical Systems
Eric Shea-Brown
Department of Applied Mathematics, University of Washington, Seattle, WA 98195; Allen Institute, Seattle, WA 98109