Auto-Compressing Networks

📅 2025-06-11
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Deep neural networks often accumulate computational redundancy as depth increases, without corresponding gains in representation quality. To address this, the paper proposes Auto-Compressing Networks (ACNs), which replace short residual connections with long additive feedforward connections from each layer to the output, yielding a gradient-driven, adaptive information-compression mechanism. During training, this mechanism dynamically strengthens shallow-layer representations while exposing and compressing redundant features in deeper layers. The paper also gives the first theoretical characterization of the layer-wise dynamic training patterns behind this behavior. ACNs improve noise robustness, few-shot generalization, and cross-task transfer while mitigating catastrophic forgetting. Combined with pruning, ACNs achieve 30–80% architectural compression on ViT, MLP-Mixer, and BERT while maintaining accuracy, reduce catastrophic forgetting by up to 18%, and consistently outperform baselines across the sparsity–performance trade-off curve.

📝 Abstract
Deep neural networks with short residual connections have demonstrated remarkable success across domains, but increasing depth often introduces computational redundancy without corresponding improvements in representation quality. In this work, we introduce Auto-Compressing Networks (ACNs), an architectural variant where additive long feedforward connections from each layer to the output replace traditional short residual connections. ACNs showcase a unique property we coin as "auto-compression", the ability of a network to organically compress information during training with gradient descent, through architectural design alone. Through auto-compression, information is dynamically "pushed" into early layers during training, enhancing their representational quality and revealing potential redundancy in deeper ones. We theoretically show that this property emerges from layer-wise training patterns present in ACNs, where layers are dynamically utilized during training based on task requirements. We also find that ACNs exhibit enhanced noise robustness compared to residual networks, superior performance in low-data settings, improved transfer learning capabilities, and mitigated catastrophic forgetting, suggesting that they learn representations that generalize better despite using fewer parameters. Our results demonstrate up to 18% reduction in catastrophic forgetting and 30–80% architectural compression while maintaining accuracy across vision transformers, MLP-mixers, and BERT architectures. Furthermore, we demonstrate that coupling ACNs with traditional pruning techniques enables significantly better sparsity–performance trade-offs compared to conventional architectures. These findings establish ACNs as a practical approach to developing efficient neural architectures that automatically adapt their computational footprint to task complexity, while learning robust representations.
Problem

Research questions and friction points this paper is trying to address.

Reducing computational redundancy in deep neural networks without losing representation quality.
Enhancing noise robustness and performance in low-data settings for neural networks.
Mitigating catastrophic forgetting and improving transfer learning capabilities in deep learning models.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Auto-Compressing Networks replace short residual connections with long additive feedforward connections from each layer to the output.
ACNs dynamically push information into early layers during training, revealing redundancy in deeper ones.
ACNs enable architectural compression while improving noise robustness, low-data performance, and transfer.
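To make the architectural difference concrete, here is a minimal sketch of the two forward passes as described in the abstract: a standard residual stack, where each layer adds its output back into a running stream, versus the ACN wiring, where layers are purely feedforward and every layer's output is summed directly into the network output via a long additive connection. The function names and the use of plain NumPy callables are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def residual_forward(x, layers):
    # Residual network: each layer's output is added back into the
    # stream via a short skip connection (h <- h + f(h)).
    h = x
    for f in layers:
        h = h + f(h)
    return h

def acn_forward(x, layers):
    # ACN (sketch): layers are chained without residual skips, and each
    # layer contributes its output to the final result through a long
    # additive connection to the output (out <- out + h).
    h = x
    out = np.zeros_like(x)
    for f in layers:
        h = f(h)
        out = out + h
    return out
```

Under this reading, gradients reach every layer directly through its long connection to the output, which is what lets training redistribute representational work toward early layers and leave deeper layers (and their contributions to the sum) prunable when the task does not need them.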