Information Shapes Koopman Representation

📅 2025-10-14
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Koopman operator modeling is complicated by the infinite-dimensional nature of dynamical systems, which makes learning finite-dimensional representations a fundamental challenge: latent variables must balance expressivity against compactness, an instance of the information bottleneck (IB) trade-off. To address this, the authors propose IB-Koopman, a principled framework that optimizes representation compactness by treating mutual information as a Lagrangian constraint, while enforcing modal diversity through von Neumann entropy regularization to prevent latent-space collapse. The method integrates variational mutual information estimation with a deep Koopman network, enabling end-to-end differentiable learning. Evaluation across diverse dynamical systems demonstrates significant improvements in prediction accuracy, representation stability, and interpretability; theoretical analysis corroborates the empirical findings, and visualizations confirm the emergence of physically meaningful, disentangled modes. Code is publicly available.
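The objective described above can be illustrated with a minimal numpy sketch. This is not the paper's implementation: the function names, the linear one-step Koopman prediction loss, and the use of the empirical latent covariance (normalized to unit trace) as the density matrix for the von Neumann entropy are all assumptions made here for illustration. The mutual-information term is taken as a precomputed scalar estimate.

```python
import numpy as np

def von_neumann_entropy(z):
    """Von Neumann entropy of the latent batch z (shape: samples x dim).

    Assumption: the empirical covariance, normalized to unit trace, plays
    the role of a density matrix; its eigenvalue entropy measures how many
    latent modes are actively used (near 0 => collapse onto one mode).
    """
    cov = z.T @ z / len(z)
    rho = cov / np.trace(cov)          # unit-trace "density matrix"
    eig = np.linalg.eigvalsh(rho)
    eig = eig[eig > 1e-12]             # drop numerically zero modes
    return float(-np.sum(eig * np.log(eig)))

def ib_koopman_loss(z_t, z_next, K, beta, gamma, mi_estimate):
    """Hypothetical IB-style Lagrangian for a deep Koopman model.

    Minimizes: prediction error + beta * MI (compactness)
               - gamma * von Neumann entropy (mode diversity).
    """
    pred = np.mean((z_t @ K.T - z_next) ** 2)  # linear latent dynamics
    return pred + beta * mi_estimate - gamma * von_neumann_entropy(z_t)
```

In a trained model `z_t`, `z_next` would come from the encoder and `K` would be the learned Koopman matrix; here the pieces are shown only to make the trade-off between the three terms concrete.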

📝 Abstract
The Koopman operator provides a powerful framework for modeling dynamical systems and has attracted growing interest from the machine learning community. However, its infinite-dimensional nature makes identifying suitable finite-dimensional subspaces challenging, especially for deep architectures. We argue that these difficulties stem from suboptimal representation learning, where latent variables fail to balance expressivity and simplicity. This tension is closely related to the information bottleneck (IB) dilemma: constructing compressed representations that are both compact and predictive. Rethinking Koopman learning through this lens, we demonstrate that latent mutual information promotes simplicity, yet an overemphasis on simplicity may cause the latent space to collapse onto a few dominant modes. In contrast, expressiveness is sustained by the von Neumann entropy, which prevents such collapse and encourages mode diversity. This insight leads us to propose an information-theoretic Lagrangian formulation that explicitly balances this trade-off. Furthermore, we propose a new algorithm based on the Lagrangian formulation that encourages both simplicity and expressiveness, leading to a stable and interpretable Koopman representation. Beyond quantitative evaluations, we further visualize the learned manifolds under our representations, observing empirical results consistent with our theoretical predictions. Finally, we validate our approach across a diverse range of dynamical systems, demonstrating improved performance over existing Koopman learning methods. The implementation is publicly available at https://github.com/Wenxuan52/InformationKoopman.
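The abstract mentions variational mutual information estimation as the compactness term. A standard way to realize such a term (a sketch, not necessarily the paper's estimator) is the closed-form KL divergence from a diagonal-Gaussian encoder posterior to a standard normal prior, which upper-bounds the latent mutual information I(X; Z), as in the variational information bottleneck:

```python
import numpy as np

def gaussian_kl_to_standard_normal(mu, logvar):
    """Closed-form KL( N(mu, diag(exp(logvar))) || N(0, I) ) per sample.

    Assumption: a diagonal-Gaussian encoder; averaging this over a batch
    gives a variational upper bound on I(X; Z), usable as the IB penalty.
    """
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar, axis=-1)
```

Minimizing this bound compresses the latent code toward the prior, which is exactly the "overemphasis on simplicity" that the abstract warns can collapse the latent space onto a few dominant modes, motivating the counteracting von Neumann entropy term.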
Problem

Research questions and friction points this paper is trying to address.

Balancing expressivity and simplicity in Koopman representation learning
Preventing latent space collapse while maintaining mode diversity
Developing an information-theoretic framework for stable Koopman models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Balancing latent simplicity and expressiveness via information theory
Using von Neumann entropy to prevent mode collapse
Proposing Lagrangian formulation for stable Koopman representation