🤖 AI Summary
This paper addresses the challenging problem of identifying latent dynamical systems from nonlinear observations. We propose Dynamics Contrastive Learning (DCL), the first framework to theoretically establish that self-supervised contrastive learning enables identifiable recovery of latent dynamics. Methodologically, DCL operates in a fully unsupervised manner—requiring neither labels nor prior assumptions about dynamical structure—and disentangles linear, switched-linear, and nonlinear (including chaotic) latent dynamics by constructing dynamics-consistent positive and negative sample pairs directly from nonlinear observational data. Key contributions include: (1) establishing a rigorous theoretical connection between self-supervised learning and causal generative factor disentanglement; (2) providing the first identifiability guarantee for self-supervised learning–driven system identification; and (3) demonstrating high-fidelity reconstruction across diverse dynamical regimes on both synthetic and benchmark dynamical datasets.
📝 Abstract
Self-supervised learning (SSL) approaches have brought tremendous success across many tasks and domains. It has been argued that these successes can be attributed to a link between SSL and identifiable representation learning: temporal structure and auxiliary variables ensure that latent representations are related to the true underlying generative factors of the data. Here, we deepen this connection and show that SSL can perform system identification in latent space. We propose Dynamics Contrastive Learning (DCL), a framework to uncover linear, switching-linear, and non-linear latent dynamics under a non-linear observation model; we provide theoretical guarantees and validate them empirically.
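The core mechanism described above, contrasting temporally adjacent observations (positives) against other time steps (negatives), can be sketched with a toy InfoNCE-style loss. Everything below is a hypothetical illustration, not the paper's implementation: the linear latent dynamics `A`, the `tanh` observation map, and the identity "encoder" are stand-ins for the learned components in DCL.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy latent linear dynamics z_{t+1} = A z_t + noise (hypothetical setup)
d, T = 2, 500
A = np.array([[0.99, -0.10],
              [0.10,  0.99]])  # rotation-like, stable
z = np.zeros((T, d))
for t in range(1, T):
    z[t] = A @ z[t - 1] + 0.05 * rng.standard_normal(d)

# Non-linear observation model x_t = g(z_t); here a fixed random tanh map
W = rng.standard_normal((d, d))
x = np.tanh(z @ W.T)

def infonce_loss(h, tau=0.1):
    """InfoNCE over time: the positive for anchor t is the embedding at
    t+1; all other time steps serve as negatives."""
    sim = h @ h.T / tau                  # (T, T) similarity matrix
    logits = sim[:-1]                    # anchors t = 0 .. T-2
    pos = np.arange(1, len(h))           # positive index of anchor t is t+1
    log_z = np.log(np.exp(logits).sum(axis=1))
    return float(np.mean(log_z - logits[np.arange(len(pos)), pos]))

# In DCL an encoder would be trained to minimize this loss; here we just
# evaluate it on the raw observations as a placeholder.
loss = infonce_loss(x)
print(f"contrastive loss: {loss:.3f}")
```

Minimizing such a loss with a trained encoder is what, per the theory in the paper, recovers the latent dynamics up to the stated identifiability class.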