Scale-Consistent State-Space Dynamics via Fractal of Stationary Transformations

πŸ“… 2026-01-27
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
This work addresses the lack of structural guarantees on the validity of intermediate representations in deep neural networks, which hinders principled early stopping and adaptive computation. The authors propose FROST, a framework that introduces a fractal inductive bias into state-space models. By leveraging fractal-stationary transformations and a geometric contraction analysis, FROST constructs a scale-consistent, self-similar latent dynamical manifold, so that intermediate states at different depths naturally correspond to multi-resolution views of the same underlying representation. This approach not only preserves the alignment of the latent geometric structure throughout iterative refinement but also enables an early-stopping mechanism grounded in intrinsic feature quality. Experiments on ImageNet-100 validate the model's scale consistency and demonstrate that its adaptive efficiency stems from the aligned latent-space geometry.
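
The summary above is the full extent of method detail given on this page, so the following is only a minimal sketch of the stationary-transformation idea it describes: one weight-tied step applied at every depth, with its linear part kept contractive so that iterates converge toward a fixed point and intermediate states read as coarser or finer views of one representation. Everything in the snippet (the `StationaryRefiner` name, the 0.9 contraction factor, the feature width) is a hypothetical illustration, not FROST's actual architecture.

```python
import torch
import torch.nn as nn

class StationaryRefiner(nn.Module):
    """One weight-tied refinement step; a hypothetical stand-in for a
    fractal-stationary transformation (all names and sizes illustrative)."""

    def __init__(self, dim: int = 256, contraction: float = 0.9):
        super().__init__()
        # Spectral normalization keeps the weight's largest singular value
        # near 1; scaling the 1-Lipschitz tanh branch by `contraction` < 1
        # then makes the whole update a contraction in h.
        self.linear = nn.utils.parametrizations.spectral_norm(
            nn.Linear(dim, dim, bias=False)
        )
        self.contraction = contraction

    def forward(self, h: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        # Same transformation at every depth: h_{t+1} = c * tanh(W h_t) + x.
        return self.contraction * torch.tanh(self.linear(h)) + x


# Iterating the *same* module drives h toward a unique fixed point (Banach
# fixed-point argument), so states at different depths behave like
# multi-resolution views of that shared representation.
refiner = StationaryRefiner()
x = torch.randn(4, 256)   # input-conditioned injection, fixed across steps
h = torch.zeros(4, 256)   # initial latent state
for _ in range(12):
    h = refiner(h, x)
```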

πŸ“ Abstract
Recent deep learning models increasingly rely on depth without structural guarantees on the validity of intermediate representations, rendering early stopping and adaptive computation ill-posed. We address this limitation by formulating a structural requirement for state-space models, namely scale-consistent latent dynamics across iterative refinement, and derive Fractal of Stationary Transformations (FROST), which enforces a self-similar representation manifold through a fractal inductive bias. Under this geometry, intermediate states correspond to different resolutions of a shared representation, and we provide a geometric analysis establishing contraction and stable convergence across iterations. As a consequence of this scale-consistent structure, halting naturally admits a ranking-based formulation driven by intrinsic feature quality rather than extrinsic objectives. Controlled experiments on ImageNet-100 empirically verify the predicted scale-consistent behavior, showing that adaptive efficiency emerges from the aligned latent geometry.
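
The abstract's halting claim also lends itself to a short sketch: once the dynamics are contractive, an intrinsic signal such as the relative change between successive states can rank iterates by how settled they are and stop refinement without consulting any task loss. The stopping rule below (the relative-change score, the `tol` threshold, the `refine_with_halting` name) is an assumed illustration rather than the paper's actual ranking criterion; it reuses the `StationaryRefiner` sketch above.

```python
import torch

@torch.no_grad()
def refine_with_halting(refiner, x, h, max_steps: int = 32, tol: float = 1e-3):
    """Adaptive-depth inference: halt once the intrinsic signal (relative
    change between successive states) drops below `tol`. Hypothetical rule."""
    for step in range(1, max_steps + 1):
        h_next = refiner(h, x)
        # Intrinsic quality proxy: how much the state still moves per step.
        delta = (h_next - h).norm() / (h.norm() + 1e-8)
        h = h_next
        if delta < tol:   # state has (approximately) reached its fixed point
            return h, step
    return h, max_steps


# Reusing `refiner` and `x` from the sketch above: inputs whose iterates
# settle quickly halt at a shallower depth, giving adaptive computation.
h_star, depth_used = refine_with_halting(refiner, x, torch.zeros(4, 256))
```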
Problem

Research questions and friction points this paper is trying to address.

scale-consistent dynamics
state-space models
adaptive computation
intermediate representations
structural guarantees
Innovation

Methods, ideas, or system contributions that make the work stand out.

scale-consistent dynamics
fractal inductive bias
state-space models
self-similar representation
adaptive computation
Geunhyeok Yu
Department of Software Convergence, Kyung Hee University
Hyoseok Hwang
Kyung Hee University
computer vision Β· machine learning Β· robotics