🤖 AI Summary
Neural networks such as Transformers lack theoretically grounded measures of model complexity, hindering principled model selection and compression.
Method: Grounded in the Minimum Description Length (MDL) principle and Kolmogorov complexity, we establish the first asymptotically optimal description length objective for Transformers and construct the first MDL framework with computational universality guarantees. We propose a differentiable, optimization-friendly variational objective using an adaptive Gaussian mixture prior to approximate MDL.
Contribution/Results: This work introduces the first theoretically sound, Transformer-specific MDL-based complexity measure. Empirical evaluation confirms that the proposed objective favors low-complexity models with strong generalization performance. However, it also exposes a critical practical limitation: standard optimizers fail to find these low-complexity solutions from a random initialization. Overall, the framework provides a novel information-theoretic foundation for model selection and compression in deep learning, bridging theoretical guarantees with practical neural architecture design.
📝 Abstract
The Minimum Description Length (MDL) principle offers a formal framework for applying Occam's razor in machine learning. However, its application to neural networks such as Transformers is challenging due to the lack of a principled, universal measure for model complexity. This paper introduces the theoretical notion of asymptotically optimal description length objectives, grounded in the theory of Kolmogorov complexity. We establish that a minimizer of such an objective achieves optimal compression, for any dataset, up to an additive constant, in the limit as model resource bounds increase. We prove that asymptotically optimal objectives exist for Transformers, building on a new demonstration of their computational universality. We further show that such objectives can be tractable and differentiable by constructing and analyzing a variational objective based on an adaptive Gaussian mixture prior. Our empirical analysis shows that this variational objective selects for a low-complexity solution with strong generalization on an algorithmic task, but standard optimizers fail to find such solutions from a random initialization, highlighting key optimization challenges. More broadly, by providing a theoretical framework for identifying description length objectives with strong asymptotic guarantees, we outline a potential path towards training neural networks that achieve greater compression and generalization.
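To make the description-length idea concrete, the following is a minimal sketch of the kind of objective the abstract describes: a data-fit term plus a weight-code term, where the weights are scored under an adaptive Gaussian mixture prior whose parameters (means, scales, mixing weights) are optimized jointly with the model. This is an illustrative simplification, not the paper's actual variational objective; all function and parameter names here are hypothetical, and the full method would also involve a variational posterior over weights.

```python
import numpy as np

def gmm_log_prob(w, means, log_stds, logits):
    """Log density of each weight under a K-component Gaussian mixture.

    w: (n,) flattened model weights; means, log_stds, logits: (K,) mixture
    parameters (hypothetical names, optimized jointly with w in this sketch).
    """
    log_pi = logits - np.logaddexp.reduce(logits)  # normalized log mixing weights
    stds = np.exp(log_stds)
    # (n, K) per-component Gaussian log densities
    comp = (-0.5 * ((w[:, None] - means[None, :]) / stds[None, :]) ** 2
            - log_stds[None, :] - 0.5 * np.log(2 * np.pi))
    # log-sum-exp over components gives the mixture log density per weight
    return np.logaddexp.reduce(comp + log_pi[None, :], axis=1)

def description_length(data_nll, w, means, log_stds, logits):
    """Two-part MDL-style objective (in nats): data fit plus weight code length.

    data_nll is the negative log-likelihood of the dataset under the model;
    the second term is the cost of encoding the weights under the adaptive
    mixture prior. Minimizing this trades accuracy against model complexity.
    """
    weight_code = -gmm_log_prob(w, means, log_stds, logits).sum()
    return data_nll + weight_code
```

Because every term is differentiable in both the weights and the mixture parameters, an objective of this shape can be minimized with standard gradient-based optimizers, which is the tractability property the abstract emphasizes (and where, empirically, the paper reports that optimization from random initialization fails).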