Alternating Gradient Flow Utility: A Unified Metric for Structural Pruning and Dynamic Routing in Deep Networks

📅 2026-03-12
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the limitations of existing structured pruning and dynamic routing methods, which are often biased by weight or activation magnitudes and struggle to preserve critical functional pathways, leading to performance degradation or inefficiency. The authors propose a decoupled dynamics paradigm based on Alternating Gradient Flow (AGF), leveraging absolute Taylor expansion in feature space to precisely quantify structural "kinetic utility" and jointly guide pruning and routing decisions. A novel gradient-magnitude decoupling analysis reveals topological phase transitions under extreme sparsity and identifies sparsity bottlenecks in Vision Transformers (ViTs). Furthermore, they introduce a hybrid dynamic routing framework that integrates offline AGF-based search with online zero-cost physical priors. Experiments demonstrate that the method avoids structural collapse at 75% compression on ImageNet-1K, significantly outperforming conventional metrics, and reduces heavy-expert usage by approximately 50% on ImageNet-100 with 0.92× computational cost and no accuracy loss.
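The feature-space Taylor criterion described above can be illustrated as a simple channel-scoring rule: a channel's utility is the absolute first-order Taylor term, i.e. the summed |activation × gradient| over its feature-map entries, and low-utility channels are pruned. A minimal sketch under that assumption — the function names and the top-k pruning step are illustrative, not the paper's implementation:

```python
def channel_importance(acts, grads):
    """First-order Taylor importance per channel: sum of |a * g| over the
    channel's feature-map entries (absolute Taylor term in feature space)."""
    return [sum(abs(a * g) for a, g in zip(a_ch, g_ch))
            for a_ch, g_ch in zip(acts, grads)]

def prune_mask(scores, keep_ratio):
    """Keep the top keep_ratio fraction of channels by importance score."""
    k = max(1, int(len(scores) * keep_ratio))
    keep = set(sorted(range(len(scores)), key=scores.__getitem__, reverse=True)[:k])
    return [i in keep for i in range(len(scores))]

# toy example: three channels, two spatial positions each
acts  = [[1.0, -1.0], [0.1, 0.1], [2.0, 0.0]]
grads = [[0.5,  0.5], [1.0, -1.0], [0.25, 0.9]]
scores = channel_importance(acts, grads)   # [1.0, 0.2, 0.5]
mask = prune_mask(scores, 2 / 3)           # [True, False, True]
```

Note that the absolute value sits inside the sum, so positive and negative contributions along a pathway cannot cancel each other out — which is what lets the score surface functionally important channels whose raw magnitudes are small.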

๐Ÿ“ Abstract
Efficient deep learning traditionally relies on static heuristics such as weight magnitude or activation awareness (e.g., Wanda, RIA). While successful in unstructured settings, we observe a critical limitation when applying these metrics to the structural pruning of deep vision networks: they suffer from a magnitude bias and fail to preserve critical functional pathways. To overcome this, we propose a decoupled kinetic paradigm inspired by Alternating Gradient Flow (AGF), utilizing an absolute feature-space Taylor expansion to accurately capture the network's structural "kinetic utility". First, we uncover a topological phase transition at extreme sparsity, where AGF preserves baseline functionality and exhibits topological implicit regularization, avoiding the collapse seen in models trained from scratch. Second, moving to architectures without strict structural priors, we reveal a Sparsity Bottleneck phenomenon in Vision Transformers (ViTs). Through a gradient-magnitude decoupling analysis, we discover that dynamic signals suffer from signal compression in converged models, rendering them suboptimal for real-time routing. Finally, driven by these empirical constraints, we design a hybrid routing framework that decouples AGF-guided offline structural search from online execution via zero-cost physical priors. We validate our paradigm on large-scale benchmarks: under a 75% compression stress test on ImageNet-1K, AGF avoids the structural collapse in which traditional metrics fall below even random sampling. Furthermore, when deployed for dynamic inference on ImageNet-100, our hybrid approach achieves Pareto-optimal efficiency, reducing usage of the heavy expert by approximately 50% (an estimated overall cost of 0.92×) without sacrificing full-model accuracy.
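The hybrid routing framework in the abstract — an offline-searched structure executed online via a zero-cost physical prior — can be sketched as a threshold gate. Here the prior is taken to be the mean absolute input feature value and the experts are placeholder callables; both are assumptions for illustration, not the paper's actual prior or architecture:

```python
def zero_cost_prior(x):
    """A cheap, gradient-free input statistic (here: mean |feature value|),
    standing in for the paper's zero-cost physical prior (assumed form)."""
    return sum(abs(v) for v in x) / len(x)

def route(x, threshold, light_expert, heavy_expert):
    """Send easy inputs (low prior) to the light expert and hard inputs to
    the heavy one; the threshold would come from the offline AGF-guided search."""
    return light_expert(x) if zero_cost_prior(x) < threshold else heavy_expert(x)

# toy usage with placeholder experts
light = lambda x: "light"   # cheap sub-network
heavy = lambda x: "heavy"   # full-capacity sub-network
route([0.1, 0.2, 0.1], 0.5, light, heavy)   # easy input -> light expert
route([2.0, 1.5, 3.0], 0.5, light, heavy)   # hard input -> heavy expert
```

Because the prior is computed from the input alone, the routing decision adds essentially no overhead at inference time; the roughly 50% reduction in heavy-expert usage reported in the abstract would correspond to tuning the threshold offline so that about half the inputs fall below it.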
Problem

Research questions and friction points this paper is trying to address.

structural pruning
dynamic routing
magnitude bias
sparsity bottleneck
vision transformers
Innovation

Methods, ideas, or system contributions that make the work stand out.

Alternating Gradient Flow
Structural Pruning
Dynamic Routing
Sparsity Bottleneck
Kinetic Utility
Tianhao Qian
School of Mathematics, Southeast University, Nanjing 210096, China
Zhuoxuan Li
School of Mathematics, Southeast University, Nanjing 210096, China; Systems Research Institute of the Polish Academy of Sciences, Warsaw 01-447, Poland
Jinde Cao
Academician, RAS/AE/LAS/PAS/AAS/EASA & FIEEE; Southeast University
Complex networks; neural networks; multi-agent systems; engineering stability; dynamics
Xinli Shi
ARC DECRA Fellow
Distributed learning; multi-agent reinforcement learning; MPC
Hanjie Liu
School of Mathematics, Southeast University, Nanjing 210096, China
Leszek Rutkowski
AGH University and Systems Research Institute of the Polish Academy of Sciences
Artificial intelligence; data mining; neural networks; agent systems