🤖 AI Summary
The Minimum Description Length (MDL) principle—central to statistical learning and information theory—lacks a rigorous formalization and a quantifiable evaluation framework within variational autoencoders (VAEs).
Method: We propose Spectrum VAE, the first theoretically grounded framework enabling analytic MDL computation in VAEs. It embeds MDL explicitly into both the architectural design and the training objective by introducing the concept of a “spectral mode” and rigorously characterizing its role in information compression. Joint spectral analysis and variational inference then yield an explicit, dimension-wise quantification of each latent subspace’s contribution to the MDL.
Contributions: (1) First rigorous definition and closed-form computation of MDL for VAEs; (2) Proof that MDL minimization is equivalent to optimal understanding of the underlying data distribution; (3) Establishment of “understanding as efficient information compression” as a foundational principle, providing a theoretical basis for information-driven deep generative modeling.
📝 Abstract
Deep neural networks (DNNs) trained through end-to-end learning have achieved remarkable success across diverse machine learning tasks, yet they are not explicitly designed to adhere to the Minimum Description Length (MDL) principle, which posits that the best model provides the shortest description of the data. In this paper, we argue that MDL is essential to deep learning and propose a further generalized principle: understanding is the use of a small amount of information to represent a large amount of information. To this end, we introduce a novel theoretical framework for designing and evaluating deep Variational Autoencoders (VAEs) based on MDL. Within this framework, we design the Spectrum VAE, a specific VAE architecture whose MDL can be rigorously evaluated under given conditions. Additionally, we introduce the concept of the latent dimension combination, or spectrum pattern, and provide the first theoretical analysis of its role in achieving MDL. We claim that a Spectrum VAE understands the data distribution in the most appropriate way when the MDL is achieved. This work is entirely theoretical and lays the foundation for future research on designing deep learning systems that explicitly adhere to information-theoretic principles.
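The dimension-wise view of description length can be made concrete with a standard VAE quantity. A common proxy for the code length contributed by each latent dimension is the per-dimension KL divergence between a diagonal-Gaussian posterior and a standard-normal prior; the sketch below is illustrative only and does not reproduce the paper's spectral-mode construction, whose exact MDL definition is not given here.

```python
import math

def dimwise_description_length(mu, sigma):
    """Per-dimension KL(q(z_i | x) || N(0, 1)) in nats.

    For a diagonal-Gaussian posterior N(mu_i, sigma_i^2) against a
    standard-normal prior, each term is
        0.5 * (sigma_i^2 + mu_i^2 - 1 - log sigma_i^2).
    This is a common proxy for each latent dimension's contribution
    to the code length; it is NOT the Spectrum VAE's MDL definition.
    """
    return [0.5 * (s**2 + m**2 - 1.0 - math.log(s**2))
            for m, s in zip(mu, sigma)]

# An "active" dimension (posterior far from the prior) carries a large
# description length; a "collapsed" dimension (posterior equal to the
# prior) carries none.
dl = dimwise_description_length(mu=[1.5, 0.0], sigma=[0.3, 1.0])
```

Here the first dimension contributes roughly 1.87 nats while the second contributes exactly zero, showing how such a per-dimension decomposition lets one read off which latent subspaces actually spend description length on the data.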