🤖 AI Summary
This work addresses the reliance of generative models on external stochastic noise by proposing a novel generative paradigm grounded in the intrinsic dynamics of high-dimensional chaotic systems. Methodologically, it replaces conventional random noise with deterministic yet unpredictable state evolution from such systems, integrating standard neural network architectures with a dynamic readout mechanism to synthesize new samples whose statistical properties match those of the training data. The core contribution is the first demonstration of high-dimensional chaotic activity as an endogenous generative source—eliminating dependence on external noise entirely. Experiments across multiple benchmark tasks show that the generated samples achieve quality comparable to state-of-the-art generative models, thereby validating the effectiveness, feasibility, and distinctive advantages of chaotic dynamics as a generative mechanism.
📝 Abstract
Generative modeling aims to produce new datapoints whose statistical properties resemble those of a training dataset. In recent years, there has been a burst of machine learning techniques and settings that achieve this goal with remarkable performance. In most of these settings, the training dataset is used in conjunction with noise, which is added as a source of statistical variability and is essential to the generative task. Here, we explore the idea of using the internal dynamics of high-dimensional chaotic systems as a way to generate new datapoints from a training dataset. We show that simple learning rules can achieve this goal within a set of vanilla architectures, and we characterize the quality of the generated datapoints through standard accuracy measures.
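The abstract describes replacing external noise with the deterministic state evolution of a high-dimensional chaotic system, read out through a trained mapping. The paper's exact architecture and learning rule are not given here, so the following is only a minimal illustrative sketch under assumptions: a random recurrent `tanh` network driven into a chaotic regime (gain above 1) serves as the endogenous variability source, and a linear readout fit by least squares plays the role of the "simple learning rule".

```python
# Illustrative sketch (NOT the paper's method): a chaotic random recurrent
# network as a noise-free sample generator with a trained linear readout.
# Network size, gain, and the least-squares readout are all assumptions.
import numpy as np

rng = np.random.default_rng(0)
N = 200          # dimensionality of the chaotic system
g = 1.8          # gain > 1 pushes the random tanh network toward chaos
W = rng.normal(0.0, g / np.sqrt(N), size=(N, N))  # random coupling matrix

def run_chaos(x0, steps):
    """Iterate the deterministic map x <- tanh(W x); no external noise."""
    x = x0.copy()
    traj = np.empty((steps, N))
    for t in range(steps):
        x = np.tanh(W @ x)
        traj[t] = x
    return traj

# Toy "training data": points from a correlated 2-D Gaussian.
train = rng.normal(size=(500, 2)) @ np.array([[1.0, 0.5], [0.0, 0.8]])

# Run the chaotic system and fit a linear readout by least squares so that
# readout(state_t) approximates a training datapoint at each step.
states = run_chaos(rng.normal(size=N), steps=500)
readout, *_ = np.linalg.lstsq(states, train, rcond=None)

# Generation: iterate the same deterministic dynamics from a fresh initial
# condition; the chaotic trajectory supplies the statistical variability,
# and the fixed readout shapes it into new samples.
new_states = run_chaos(rng.normal(size=N), steps=1000)
samples = new_states @ readout
print(samples.shape)  # one generated 2-D datapoint per time step
```

The design point the sketch tries to convey: no random numbers enter during generation, only the initial condition differs, and the sensitivity of chaotic dynamics to initial conditions is what takes the place of injected noise.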