Equivariant-Aware Structured Pruning for Efficient Edge Deployment: A Comprehensive Framework with Adaptive Fine-Tuning

📅 2025-11-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the tension between resource constraints on edge devices and the need for geometric robustness, this paper proposes an efficient deployment framework for group-equivariant neural networks. Methodologically, it tightly integrates C4 group-equivariant convolution with equivariance-aware structured pruning, introduces a neuron-level pruning strategy, and applies adaptive fine-tuning to preserve transformation equivariance. Knowledge distillation, dynamic INT8 quantization, and learning-rate scheduling further optimize deployment efficiency. Experiments on EuroSAT, CIFAR-10, and Rotated MNIST demonstrate that the framework achieves a 29.3% parameter reduction while recovering most of the lost accuracy. The resulting models are lightweight (<1M parameters), retain rotational equivariance, and exhibit enhanced geometric robustness. This work establishes a verifiable, equivariance-preserving compression paradigm tailored to edge vision tasks.
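The C4 equivariance the summary refers to can be illustrated with a minimal NumPy sketch (not the paper's e2cnn implementation): a lifting convolution correlates the input with the four rotated copies of one filter, and rotating the input by 90° rotates each response map while cyclically permuting the orientation channels.

```python
import numpy as np

def corr2d(img, ker):
    # plain valid-mode 2D cross-correlation
    H, W = img.shape
    k = ker.shape[0]
    out = np.zeros((H - k + 1, W - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + k, j:j + k] * ker)
    return out

def c4_lifting_conv(img, ker):
    # one response map per rotated copy of the filter -> 4 orientation channels
    return np.stack([corr2d(img, np.rot90(ker, k)) for k in range(4)])

rng = np.random.default_rng(0)
img = rng.standard_normal((8, 8))
ker = rng.standard_normal((3, 3))

out = c4_lifting_conv(img, ker)
out_rot = c4_lifting_conv(np.rot90(img), ker)

# Equivariance check: rotating the input rotates each response map
# and cyclically shifts the orientation channels by one step.
expected = np.stack([np.rot90(out[(k - 1) % 4]) for k in range(4)])
assert np.allclose(out_rot, expected)
```

This channel-permutation structure is exactly what equivariance-aware pruning must not break: channels belonging to one orientation group have to be treated as a unit.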

📝 Abstract
This paper presents a novel framework combining group equivariant convolutional neural networks (G-CNNs) with equivariance-aware structured pruning to produce compact, transformation-invariant models for resource-constrained environments. Equivariance to rotations is achieved through the C4 cyclic group via the e2cnn library, enabling consistent performance under geometric transformations while reducing computational overhead. Our approach introduces structured pruning that preserves equivariant properties by analyzing e2cnn layer structure and applying neuron-level pruning to fully connected components. To mitigate accuracy degradation, we implement adaptive fine-tuning that triggers automatically when the accuracy drop exceeds 2%, using early stopping and learning-rate scheduling for efficient recovery. The framework includes dynamic INT8 quantization and a comprehensive pipeline encompassing training, knowledge distillation, structured pruning, fine-tuning, and quantization. We evaluate our method on satellite imagery (EuroSAT) and standard benchmarks (CIFAR-10, Rotated MNIST), demonstrating effectiveness across diverse domains. Experimental results show a 29.3% parameter reduction with significant accuracy recovery, demonstrating that structured pruning of equivariant networks achieves substantial compression while maintaining geometric robustness. Our pipeline provides a reproducible framework for optimizing equivariant models, bridging the gap between group-theoretic network design and practical deployment constraints, with particular relevance to satellite imagery analysis and geometric vision tasks.
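The neuron-level pruning of fully connected components has to respect the group structure of equivariant features: under the C4 regular representation, hidden units come in fields of four (one per rotation), so whole fields must be kept or dropped together. A minimal NumPy sketch under that assumption (the L2 scoring rule and the consecutive-field memory layout are illustrative, not the paper's exact criterion):

```python
import numpy as np

def prune_c4_fields(W1, b1, W2, ratio=0.3, field_size=4):
    """Prune whole C4 fields (groups of `field_size` consecutive neurons)
    from a hidden FC layer, scoring each field by its joint L2 norm.
    W1: (hidden, in), b1: (hidden,), W2: (out, hidden)."""
    n_fields = W1.shape[0] // field_size
    scores = np.linalg.norm(W1.reshape(n_fields, field_size, -1), axis=(1, 2))
    n_keep = max(1, int(round(n_fields * (1 - ratio))))
    keep_f = np.sort(np.argsort(scores)[-n_keep:])      # surviving fields
    keep = (keep_f[:, None] * field_size + np.arange(field_size)).ravel()
    # drop rows of W1/b1 and the matching input columns of the next layer
    return W1[keep], b1[keep], W2[:, keep]

rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((64, 128)), rng.standard_normal(64)
W2 = rng.standard_normal((10, 64))
W1p, b1p, W2p = prune_c4_fields(W1, b1, W2, ratio=0.3)
# 16 fields -> 11 fields, i.e. 64 -> 44 hidden units
```

Pruning individual neurons inside a field would break the channel permutation that rotations induce, which is why the scoring and removal operate field-wise.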
Problem

Research questions and friction points this paper is trying to address.

Develops compact transformation-invariant models for resource-constrained edge deployment
Preserves geometric equivariance during structured pruning to maintain transformation robustness
Mitigates accuracy degradation through adaptive fine-tuning and quantization techniques
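The adaptive fine-tuning described above (trigger only when the post-pruning accuracy drop exceeds 2%, recover with early stopping and LR scheduling) can be sketched as a small control loop; the function signatures and the patience value here are illustrative assumptions, not the paper's code:

```python
def adaptive_finetune(model, baseline_acc, evaluate, train_one_epoch,
                      drop_threshold=0.02, patience=3, max_epochs=20):
    """Fine-tune only if pruning cost more than `drop_threshold` accuracy,
    stopping early once validation accuracy stops improving."""
    acc = evaluate(model)
    if baseline_acc - acc <= drop_threshold:
        return acc                      # drop is tolerable: skip fine-tuning
    best, stale = acc, 0
    for _ in range(max_epochs):
        train_one_epoch(model)          # LR scheduling would step here too
        acc = evaluate(model)
        if acc > best:
            best, stale = acc, 0
        else:
            stale += 1
            if stale >= patience:       # early stopping
                break
    return best

# toy stand-ins: accuracy recovers two points per epoch, capped at 0.96
state = {"acc": 0.90}
recovered = adaptive_finetune(
    None, baseline_acc=0.95,
    evaluate=lambda m: state["acc"],
    train_one_epoch=lambda m: state.update(acc=min(0.96, state["acc"] + 0.02)),
)
```

With the toy stand-ins the 5-point drop exceeds the 2% threshold, so fine-tuning runs until accuracy plateaus at 0.96 and patience triggers the early stop.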
Innovation

Methods, ideas, or system contributions that make the work stand out.

Equivariant-aware structured pruning preserves transformation properties
Adaptive fine-tuning triggers automatically for accuracy recovery
Dynamic INT8 quantization and comprehensive pipeline enhance efficiency
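Dynamic INT8 quantization stores weights as 8-bit integers plus a floating-point scale and dequantizes on the fly at inference. A minimal symmetric per-tensor sketch in NumPy (in a PyTorch pipeline this step is typically done with `torch.quantization.quantize_dynamic`; the details below are illustrative):

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor INT8 quantization: float32 weights become
    int8 values plus one scale (4 bytes/weight -> 1 byte/weight)."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((32, 32)).astype(np.float32)
q, scale = quantize_int8(w)

# round-to-nearest keeps each weight's error within half a quantization step
max_err = np.abs(dequantize(q, scale) - w).max()
```

Because the mapping is applied per tensor at load time rather than baked into the graph, it composes cleanly with the pruning step: quantization happens after the surviving weights are fixed.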