🤖 AI Summary
Modern vision models, particularly CNNs and Transformers, exhibit a bias toward simple features and suffer from channel-wise information redundancy, limiting both representational capacity and computational efficiency. To address these issues, we propose SpaRTAN, a lightweight network that jointly strengthens spatial modeling and suppresses channel redundancy. SpaRTAN captures discriminative multi-order spatial features using kernels with varying receptive fields, controlled by kernel size and dilation factor, and applies a wave-based channel aggregation module that modulates and reinforces pixel interactions to mitigate redundant channel information. Evaluated on ImageNet-1K, SpaRTAN achieves 77.7% top-1 accuracy with only 3.8M parameters; on COCO object detection, it attains 50.0% AP with just 21.5M parameters, outperforming existing lightweight architectures. These results show that efficient visual recognition can be achieved by jointly enhancing spatial expressiveness and channel efficiency within a compact design.
📝 Abstract
The resurgence of convolutional neural networks (CNNs) in visual recognition tasks, exemplified by ConvNeXt, has demonstrated their capability to rival transformer-based architectures through advanced training methodologies and ViT-inspired design principles. However, both CNNs and transformers exhibit a simplicity bias, favoring straightforward features over complex structural representations. Furthermore, modern CNNs often integrate MLP-like blocks akin to those in transformers, but these blocks suffer from significant information redundancy, necessitating high expansion ratios to sustain competitive performance. To address these limitations, we propose SpaRTAN, a lightweight architectural design that enhances spatial and channel-wise information processing. SpaRTAN employs kernels with varying receptive fields, controlled by kernel size and dilation factor, to effectively capture discriminative multi-order spatial features. A wave-based channel aggregation module further modulates and reinforces pixel interactions, mitigating channel-wise redundancy. Combining the two modules, the proposed network efficiently gathers and dynamically contextualizes discriminative features. Experiments on ImageNet and COCO demonstrate that SpaRTAN achieves remarkable parameter efficiency while maintaining competitive performance. In particular, on the ImageNet-1K benchmark, SpaRTAN achieves 77.7% top-1 accuracy with only 3.8M parameters and approximately 1.0 GFLOPs, demonstrating its ability to deliver strong performance through an efficient design. On the COCO benchmark, it achieves 50.0% AP, surpassing the previous benchmark by 1.2% with only 21.5M parameters. The code is publicly available at https://github.com/henry-pay/SpaRTAN.
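The abstract states that SpaRTAN's receptive fields are controlled by kernel size and dilation factor. As a rough illustration of why dilation enlarges the receptive field cheaply (the specific branch configuration below is hypothetical, not taken from the paper), a `k`-tap dilated convolution with dilation `d` spans the same extent as a dense kernel of size `k + (k - 1) * (d - 1)`:

```python
def effective_receptive_field(kernel_size: int, dilation: int) -> int:
    """Span covered by one dilated convolution layer.

    A kernel of size k with dilation d places its k taps
    (d - 1) pixels apart, covering k + (k - 1) * (d - 1) pixels
    while keeping only k learnable weights per dimension.
    """
    return kernel_size + (kernel_size - 1) * (dilation - 1)

# Hypothetical multi-order branches: the same 3-tap kernel with
# growing dilation yields progressively larger receptive fields
# at constant parameter cost.
branches = [(3, 1), (3, 2), (3, 3)]
fields = [effective_receptive_field(k, d) for k, d in branches]
print(fields)  # [3, 5, 7]
```

This is the standard trade-off such designs exploit: stacking or mixing branches like these captures both fine local detail and broader context without the parameter cost of large dense kernels.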