🤖 AI Summary
To extract dynamical state variables from high-dimensional videos corrupted by background motion, occlusions, and texture variations, this paper proposes a two-stage latent-space learning framework. First, a TimeSformer-based spatiotemporal autoencoder uses global attention to extract robust representations. Second, Lyapunov stability regularization jointly enforces dynamic contractivity, disturbance robustness, and physical interpretability, balancing all three in the latent space for the first time. Physical variables are disentangled via linear correlation analysis, and rollout error accumulation is suppressed. Evaluated on five synthetic and four real-world dynamical systems, the method significantly outperforms CNN-based and pure-Transformer baselines in mutual information, intrinsic-dimension estimation, and long-horizon prediction accuracy, while remaining invariant to background disturbances.
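The summary does not give the exact form of the Lyapunov regularizer, but the stated goal (contraction of latent transitions to suppress rollout error) can be sketched with a hinge-style penalty. The following is a minimal NumPy illustration, assuming a quadratic Lyapunov function V(z) = ||z||² and a linear stand-in for the learned transition; the names `lyapunov_reg` and `gamma` are hypothetical, not from the paper:

```python
import numpy as np

def lyapunov_reg(z, z_next, gamma=0.99):
    """Hinge penalty on transitions that violate the contraction
    condition V(z_next) <= gamma * V(z), with V(z) = ||z||^2."""
    v = np.sum(z**2, axis=1)
    v_next = np.sum(z_next**2, axis=1)
    # only violations contribute to the loss
    return np.mean(np.maximum(0.0, v_next - gamma * v))

rng = np.random.default_rng(0)
z = rng.normal(size=(256, 8))                 # batch of latent states
A_contract = 0.9 * np.eye(8)                  # contractive toy dynamics
A_expand = 1.1 * np.eye(8)                    # expanding toy dynamics

loss_c = lyapunov_reg(z, z @ A_contract.T)    # 0.0: contraction holds
loss_e = lyapunov_reg(z, z @ A_expand.T)      # > 0: expansion penalized
```

During training such a term would be added to the reconstruction/prediction loss so that latent rollouts shrink perturbations instead of amplifying them.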
📝 Abstract
Extracting the true dynamical variables of a system from high-dimensional video is challenging due to distracting visual factors such as background motion, occlusions, and texture changes. We propose LyTimeT, a two-phase framework for interpretable variable extraction that learns robust and stable latent representations of dynamical systems. In Phase 1, LyTimeT employs a spatio-temporal TimeSformer-based autoencoder that uses global attention to focus on dynamically relevant regions while suppressing nuisance variation, enabling distraction-robust latent state learning and accurate long-horizon video prediction. In Phase 2, we probe the learned latent space, select the most physically meaningful dimensions using linear correlation analysis, and refine the transition dynamics with a Lyapunov-based stability regularizer to enforce contraction and reduce error accumulation during roll-outs. Experiments on five synthetic benchmarks and four real-world dynamical systems, including chaotic phenomena, show that LyTimeT achieves mutual information and intrinsic dimension estimates closest to ground truth, remains invariant under background perturbations, and delivers lower analytical mean squared error than CNN-based (TIDE) and transformer-only baselines. Our results demonstrate that combining spatio-temporal attention with stability constraints yields predictive models that are not only accurate but also physically interpretable.
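The Phase 2 step of selecting physically meaningful latent dimensions via linear correlation analysis can be sketched as follows: compute the Pearson correlation of each latent dimension against known physical variables and keep the best-correlated ones. This is a NumPy illustration under that assumption; `select_physical_dims` and the synthetic data are hypothetical, not the paper's code:

```python
import numpy as np

def select_physical_dims(latents, phys_vars, top_k=2):
    """Rank latent dimensions by |Pearson r| against each physical
    variable; return indices of the top_k best-correlated dims."""
    L = (latents - latents.mean(0)) / latents.std(0)
    P = (phys_vars - phys_vars.mean(0)) / phys_vars.std(0)
    r = L.T @ P / len(latents)        # (latent_dim, n_phys) correlations
    score = np.abs(r).max(axis=1)     # best match per latent dimension
    return np.argsort(score)[::-1][:top_k]

rng = np.random.default_rng(1)
theta = rng.normal(size=500)                          # a physical variable
latents = rng.normal(size=(500, 6))                   # 6-dim latent space
latents[:, 3] = theta + 0.05 * rng.normal(size=500)   # dim 3 encodes theta
picked = select_physical_dims(latents, theta[:, None], top_k=1)
```

Here `picked` recovers dimension 3, since it is the only latent coordinate strongly correlated with the underlying variable; the remaining dimensions would be treated as nuisance directions.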