🤖 AI Summary
This paper addresses the limited interpretability and parameter redundancy of machine learning (ML) models by proposing a novel paradigm: modeling neural networks as discrete dynamical systems in relaxation form. Methodologically, it establishes a mapping between network weights and physical information-propagation processes, identifying the model function of the forward step with the local attractor of the corresponding discrete dynamics. By integrating discrete dynamical systems theory, neural differential equations, and attractor analysis, it systematically uncovers deep structural correspondences between ML models and physical dynamical systems. The contributions are threefold: (1) it endows model weights with explicit physical semantics, significantly enhancing interpretability; (2) it provides a theoretical foundation and modeling framework for designing low-parameter, highly interpretable neural architectures; and (3) it advances the development of physics-inspired compact algorithms.
📝 Abstract
We highlight a formal and substantial analogy between Machine Learning (ML) algorithms and discrete dynamical systems (DDS) in relaxation form. The analogy offers a transparent interpretation of the weights in terms of physical information-propagation processes and identifies the model function of the forward ML step with the local attractor of the corresponding discrete dynamics. Besides improving the explainability of current ML applications, this analogy may also facilitate the development of a new class of ML algorithms with a reduced number of weights.
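To make the correspondence concrete, here is a minimal NumPy sketch, not the paper's construction: a toy dense layer plays the role of the model function, and a discrete relaxation iteration y_{k+1} = y_k + λ(f(x) − y_k) is run until it settles on its fixed point, which coincides with the layer's forward output. The layer shapes, the relaxation rate `lam`, and the iteration count are illustrative assumptions.

```python
import numpy as np

# Hypothetical toy model function: a single dense layer y = tanh(W @ x + b).
rng = np.random.default_rng(0)
W = rng.standard_normal((3, 5))
b = rng.standard_normal(3)
x = rng.standard_normal(5)

def forward(x):
    """Standard ML forward step: the 'model function'."""
    return np.tanh(W @ x + b)

def relaxation_dynamics(x, lam=0.3, n_steps=200):
    """Discrete dynamical system in relaxation form:
        y_{k+1} = y_k + lam * (forward(x) - y_k)
    For 0 < lam < 2 each step shrinks the error by a factor |1 - lam|,
    so the iteration contracts toward its unique fixed point
    y* = forward(x): the forward map is the local attractor."""
    y = np.zeros_like(b)  # arbitrary initial state
    for _ in range(n_steps):
        y = y + lam * (forward(x) - y)
    return y

y_dyn = relaxation_dynamics(x)
y_fwd = forward(x)
print(np.allclose(y_dyn, y_fwd))  # True: the attractor equals the forward output
```

In this sketch the dynamics are deliberately simple (the drift target does not depend on y), so convergence to the forward output is guaranteed; the paper's analogy concerns richer dynamics whose attractors likewise realize the forward computation.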