Algorithmic Simplification of Neural Networks with Mosaic-of-Motifs

📅 2026-02-16
📈 Citations: 0
Influential citations: 0
📄 PDF
🤖 AI Summary
This study investigates why deep neural networks are highly compressible, arguing from algorithmic information theory that their trained parameters exhibit low Kolmogorov complexity. To explicitly link Kolmogorov complexity with model compression, the authors propose a novel “Mosaic-of-Motifs” parameterization framework. This approach constrains the network parameterization through block-wise parameter modeling, construction of a reusable motif dictionary, and a mosaic reuse pattern that enforces structural regularity. Experimental results demonstrate that the method significantly enhances compressibility while preserving model performance, thereby empirically validating the intrinsic structural regularity and low algorithmic complexity of trained network parameters.
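
The summary mentions measuring algorithmic complexity with approximations to Kolmogorov complexity. One common proxy (an assumption here, not necessarily the estimator used by the authors) is the size of the serialized, quantized weights under a general-purpose compressor; a minimal sketch:

```python
import zlib
import numpy as np

def compressed_size_proxy(weights: np.ndarray) -> int:
    """Crude upper-bound proxy for K(w): bytes used by a byte-level
    compressor after 8-bit quantization of the flattened weights.
    Illustrative only; not the paper's approximation."""
    w = weights.ravel()
    lo, hi = w.min(), w.max()
    # Quantize to 256 levels so repeated structure becomes visible to zlib.
    q = np.round((w - lo) / (hi - lo + 1e-12) * 255).astype(np.uint8)
    return len(zlib.compress(q.tobytes(), level=9))

# Toy comparison: i.i.d. noise vs. a vector built by reusing one 64-value "motif".
rng = np.random.default_rng(0)
random_w = rng.normal(size=4096).astype(np.float32)
structured_w = np.tile(rng.normal(size=64).astype(np.float32), 64)
print(compressed_size_proxy(random_w), compressed_size_proxy(structured_w))
```

In this toy comparison, the repetitive (motif-like) vector compresses to far fewer bytes than the i.i.d. one, illustrating the intended link between parameter structure and description length.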

📝 Abstract
Large-scale deep learning models are well-suited for compression. Methods like pruning, quantization, and knowledge distillation have been used to achieve massive reductions in the number of model parameters, with marginal performance drops across a variety of architectures and tasks. This raises the central question: \emph{Why are deep neural networks suited for compression?} In this work, we take up the perspective of algorithmic complexity to explain this behavior. We hypothesize that the parameters of trained models have more structure and, hence, exhibit lower algorithmic complexity than the weights at (random) initialization, and that model compression methods harness this reduced algorithmic complexity to compress models. Although an unconstrained parameterization of model weights, $\mathbf{w} \in \mathbb{R}^n$, can represent arbitrary weight assignments, the solutions found during training exhibit repeatability and structure, making them algorithmically simpler than a generic program. To this end, we denote the Kolmogorov complexity of $\mathbf{w}$ by $\mathcal{K}(\mathbf{w})$. We introduce a constrained parameterization $\widehat{\mathbf{w}}$ that partitions parameters into blocks of size $s$ and restricts each block to be selected from a set of $k$ reusable motifs, specified by a reuse pattern (or mosaic). The resulting method, $\textit{Mosaic-of-Motifs}$ (MoMos), yields model parameterizations that are algorithmically simpler than those of unconstrained models. Empirical evidence from multiple experiments shows that the algorithmic complexity of neural networks, measured using approximations to Kolmogorov complexity, can be reduced during training. This results in models that perform comparably to unconstrained models while being algorithmically simpler.
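
A minimal sketch of the reconstruction implied by this constrained parameterization, assuming a flat parameter vector, row-major block order, and illustrative names (`reconstruct_weights`, `motifs`, `mosaic` are not taken from the paper's code):

```python
import numpy as np

def reconstruct_weights(motifs: np.ndarray, mosaic: np.ndarray, n: int) -> np.ndarray:
    """Rebuild a flat parameter vector w_hat of length n from k reusable
    motifs (shape [k, s]) and a mosaic of block indices (shape [n // s]),
    following the block/motif/mosaic description in the abstract."""
    w_hat = motifs[mosaic].reshape(-1)  # look up each block's motif, then concatenate
    return w_hat[:n]

# Toy example: n = 12 parameters, block size s = 3, k = 2 motifs.
motifs = np.array([[0.1, -0.2, 0.3],
                   [0.5,  0.0, -0.5]])   # shared motif dictionary (k x s)
mosaic = np.array([0, 1, 1, 0])          # reuse pattern over n // s = 4 blocks
print(reconstruct_weights(motifs, mosaic, n=12))
```

Describing $\widehat{\mathbf{w}}$ then requires only the $k \cdot s$ motif values plus $n/s$ motif indices rather than $n$ free parameters, which is the sense in which the constrained parameterization is algorithmically simpler.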
Problem

Research questions and friction points this paper is trying to address.

algorithmic complexity
Kolmogorov complexity
model compression
neural networks
parameter structure
Innovation

Methods, ideas, or system contributions that make the work stand out.

algorithmic complexity
Kolmogorov complexity
model compression
Mosaic-of-Motifs
structured parameterization
Pedram Bakhtiarifard
PhD Student
Resource Efficient Machine Learning, Neural Architecture Search, Multi-Objective Optimization
Tong Chen
Department of Computer Science, University of Copenhagen, Denmark
Jonathan Wenshøj
Department of Computer Science, University of Copenhagen, Denmark
Erik B Dam
Department of Computer Science, University of Copenhagen, Denmark
Raghavendra Selvan
Assistant Professor (TT), University of Copenhagen
Sustainable AI, Efficient Machine Learning, Medical Image Analysis, AI for Sciences