Bridging Kolmogorov Complexity and Deep Learning: Asymptotically Optimal Description Length Objectives for Transformers

📅 2025-09-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Neural networks such as Transformers lack theoretically grounded measures of model complexity, hindering principled model selection and compression. Method: Grounded in the Minimum Description Length (MDL) principle and Kolmogorov complexity, we establish the first asymptotically optimal description length objective for Transformers, within an MDL framework backed by computational universality guarantees. We propose a differentiable, optimization-friendly variational objective that approximates MDL using an adaptive Gaussian mixture prior. Contribution/Results: The result is a theoretically sound, Transformer-specific MDL-based complexity measure. Empirical evaluation confirms that the proposed objective favors low-complexity models with strong generalization performance, but it also exposes a critical practical limitation: standard optimizers struggle to converge to such solutions from random initialization. Overall, the framework provides a novel information-theoretic foundation for model selection and compression in deep learning, bridging theoretical guarantees with practical neural architecture design.

📝 Abstract
The Minimum Description Length (MDL) principle offers a formal framework for applying Occam's razor in machine learning. However, its application to neural networks such as Transformers is challenging due to the lack of a principled, universal measure for model complexity. This paper introduces the theoretical notion of asymptotically optimal description length objectives, grounded in the theory of Kolmogorov complexity. We establish that a minimizer of such an objective achieves optimal compression, for any dataset, up to an additive constant, in the limit as model resource bounds increase. We prove that asymptotically optimal objectives exist for Transformers, building on a new demonstration of their computational universality. We further show that such objectives can be tractable and differentiable by constructing and analyzing a variational objective based on an adaptive Gaussian mixture prior. Our empirical analysis shows that this variational objective selects for a low-complexity solution with strong generalization on an algorithmic task, but standard optimizers fail to find such solutions from a random initialization, highlighting key optimization challenges. More broadly, by providing a theoretical framework for identifying description length objectives with strong asymptotic guarantees, we outline a potential path towards training neural networks that achieve greater compression and generalization.
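The abstract's two-part description length idea can be illustrated with a minimal sketch. This is not the paper's implementation; the function names (`mixture_nll`, `description_length`) and the specific parameterization are hypothetical. The sketch assumes the classic two-part code, data cost plus a model cost given by the negative log-probability of the weights under a K-component Gaussian mixture whose parameters are adapted alongside the weights:

```python
import numpy as np

def mixture_nll(weights, pis, mus, sigmas):
    """Negative log-likelihood of flattened weights under a Gaussian
    mixture prior. pis/mus/sigmas (each of shape (K,)) are the mixture
    parameters, which an adaptive prior would learn jointly with the
    weights during training."""
    w = weights[:, None]  # shape (n_weights, 1), broadcasts against (K,)
    log_comp = (
        np.log(pis)
        - 0.5 * np.log(2 * np.pi * sigmas**2)
        - (w - mus) ** 2 / (2 * sigmas**2)
    )  # per-weight, per-component log densities, shape (n_weights, K)
    # log-sum-exp over components for numerical stability
    m = log_comp.max(axis=1, keepdims=True)
    log_p = m[:, 0] + np.log(np.exp(log_comp - m).sum(axis=1))
    return -log_p.sum()

def description_length(data_nll, weights, pis, mus, sigmas):
    """Two-part code length: L(data | model) + L(model), where the
    model cost is charged by the mixture prior over parameters."""
    return data_nll + mixture_nll(weights, pis, mus, sigmas)
```

Because every term is differentiable in the weights and in the mixture parameters, such an objective can be minimized with standard gradient-based optimizers, which is the tractability property the paper emphasizes.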
Problem

Research questions and friction points this paper is trying to address.

Bridging Kolmogorov complexity with deep learning theory
Developing optimal description length objectives for Transformers
Addressing optimization challenges in neural network compression
Innovation

Methods, ideas, or system contributions that make the work stand out.

Asymptotically optimal description length objectives for Transformers
Tractable variational objective with adaptive Gaussian mixture prior
Theoretical framework linking Kolmogorov complexity to deep learning