🤖 AI Summary
This work addresses the unclear nature of how neural networks internally represent the underlying geometric structure of complex dynamical systems. To resolve rotational and scale ambiguities in latent spaces, the authors propose an anchor-based, geometry-agnostic relative embedding method, establishing a reproducible framework for relative geometric analysis. Through systematic experiments across seven canonical dynamical systems using MLPs, RNNs, Transformers, and echo state networks, they find that MLPs and RNNs exhibit highly aligned internal representations, whereas Transformers and echo state networks achieve high predictive accuracy despite weaker representational alignment. High prediction accuracy can therefore coexist with low representational alignment, pointing to a more nuanced relationship between alignment and predictive performance than a simple correlation.
📝 Abstract
Neural networks can accurately forecast complex dynamical systems, yet how they internally represent the underlying latent geometry remains poorly understood. We study neural forecasters through the lens of representational alignment, introducing anchor-based, geometry-agnostic relative embeddings that remove rotational and scaling ambiguities in latent spaces. Applying this framework across seven canonical dynamical systems - ranging from periodic to chaotic - we reveal reproducible family-level structure: multilayer perceptrons align with other MLPs, recurrent networks with other RNNs, while Transformers and echo state networks achieve strong forecasts despite weaker alignment. Alignment generally correlates with forecasting accuracy, yet high accuracy can coexist with low alignment. Relative geometry thus provides a simple, reproducible foundation for comparing how model families internalize and represent dynamical structure.
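The core idea of anchor-based relative embeddings can be sketched in a few lines: describe each latent state not by its raw coordinates but by its similarities to a small set of anchor states, which makes the description invariant to rotations and uniform rescalings of the latent space. The sketch below is a minimal NumPy illustration under assumed choices (cosine similarity, anchors drawn from the trajectory itself); it is not the paper's exact recipe.

```python
import numpy as np

def relative_embedding(latents, anchors):
    """Represent each latent vector by its cosine similarities to the anchors.

    Because cosine similarity ignores vector norms and is preserved under
    orthogonal transformations, the result is invariant to rotation and
    uniform scaling of the latent space.
    """
    def unit(x):
        return x / np.linalg.norm(x, axis=-1, keepdims=True)
    return unit(latents) @ unit(anchors).T

rng = np.random.default_rng(0)
Z = rng.normal(size=(100, 8))   # hypothetical latent states of model 1
A = Z[:5]                       # anchors picked from the same trajectory (an assumption)

# Simulate a second model whose latents differ by a rotation and a scale factor.
Q, _ = np.linalg.qr(rng.normal(size=(8, 8)))  # random orthogonal matrix
Z2, A2 = 3.0 * Z @ Q, 3.0 * A @ Q

R1 = relative_embedding(Z, A)
R2 = relative_embedding(Z2, A2)
print(np.allclose(R1, R2))  # True: the ambiguity is removed
```

Once both models are expressed in this shared anchor-relative coordinate system, their representations can be compared directly, e.g. by correlating `R1` and `R2` row-wise to obtain an alignment score.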