🤖 AI Summary
Hybrid CNN-ViT architectures for tinyML image classification suffer from excessive parameter counts and computational overhead, hindering deployment on resource-constrained edge devices.
Method: We propose the first NAS search space tailored for tinyML that jointly models convolutional local feature extraction, Transformer-based global contextual modeling, and a searchable pooling module—enabling efficient feature map compression and structural self-adaptation. A lightweight NAS strategy automatically discovers optimal subnetworks under strict parameter constraints (<100K).
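The search-space idea above can be illustrated with a toy sketch: per-stage candidate blocks (convolutional, pooling, Transformer) are sampled and filtered against the <100K parameter budget. All block names and parameter estimates here are hypothetical placeholders, not values from the paper, and the random-search loop stands in for whatever lightweight NAS strategy the authors actually use.

```python
import random

# Hypothetical candidate operations per searchable stage; the block names and
# parameter estimates are illustrative only, not the paper's actual values.
SEARCH_SPACE = {
    "stem":   [("conv3x3_16", 500), ("conv3x3_24", 1_100)],
    "local":  [("mbconv_e2", 4_000), ("mbconv_e4", 7_500), ("conv_block", 5_000)],
    "pool":   [("max_pool", 0), ("avg_pool", 0), ("learned_pool", 1_200)],
    "global": [("vit_d64_h2", 35_000), ("vit_d96_h3", 70_000), ("vit_d128_h4", 120_000)],
    "head":   [("linear_10", 700)],
}

PARAM_BUDGET = 100_000  # tinyML constraint: fewer than 100K parameters


def sample_architecture(rng):
    """Pick one candidate per stage, yielding a full hybrid CNN-ViT network spec."""
    return {stage: rng.choice(cands) for stage, cands in SEARCH_SPACE.items()}


def param_count(arch):
    """Total estimated parameters of a sampled architecture."""
    return sum(params for _name, params in arch.values())


def random_search(n_trials=100, seed=0):
    """Toy NAS loop: sample architectures, keep only those under the budget."""
    rng = random.Random(seed)
    return [
        arch
        for arch in (sample_architecture(rng) for _ in range(n_trials))
        if param_count(arch) < PARAM_BUDGET
    ]


candidates = random_search()
```

Here the oversized `vit_d128_h4` option always pushes an architecture past the budget, so the constraint actively prunes the space, mirroring how the size limit shapes which subnetworks the NAS can discover.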
Contribution/Results: On CIFAR-10, the discovered model achieves 1.2% higher accuracy than ResNet-18, 2.3× faster inference, and 47% fewer parameters. It strikes a strong balance among accuracy, latency, and memory footprint, establishing a new paradigm for efficient vision understanding at the edge.
📝 Abstract
Hybrid Convolutional Neural Network (CNN) and Vision Transformer (ViT) architectures have outperformed pure CNN or ViT architectures. However, because these architectures have large parameter counts and incur high computational costs, they are unsuitable for tinyML deployment. This paper introduces a new hybrid CNN-ViT search space for Neural Architecture Search (NAS) to find efficient hybrid architectures for image classification. The search space covers hybrid CNN and ViT blocks that learn local and global information, as well as a novel Pooling block of searchable pooling layers for efficient feature map reduction. Experimental results on the CIFAR-10 dataset show that our proposed search space can produce hybrid CNN-ViT architectures with accuracy and inference speed superior to those of ResNet-based tinyML models under tight model size constraints.