Hybrid Convolution and Vision Transformer NAS Search Space for TinyML Image Classification

📅 2025-11-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
Hybrid CNN-ViT architectures for tinyML image classification suffer from excessive parameter counts and computational overhead, hindering deployment on resource-constrained edge devices. Method: We propose the first NAS search space tailored for tinyML that jointly models convolutional local feature extraction, Transformer-based global contextual modeling, and a searchable pooling module, enabling efficient feature map compression and structural self-adaptation. A lightweight NAS strategy automatically discovers optimal subnetworks under a strict parameter constraint (<100K). Contribution/Results: On CIFAR-10, the discovered model achieves 1.2% higher accuracy than ResNet-18, 2.3× faster inference, and 47% fewer parameters, striking a favorable balance among accuracy, latency, and memory footprint for efficient vision understanding at the edge.

📝 Abstract
Hybrids of Convolutional Neural Networks (CNNs) and Vision Transformers (ViTs) have outperformed pure CNN or ViT architectures. However, because these hybrids require large numbers of parameters and incur high computational costs, they are unsuitable for tinyML deployment. This paper introduces a new hybrid CNN-ViT search space for Neural Architecture Search (NAS) to find efficient hybrid architectures for image classification. The search space combines hybrid CNN and ViT blocks, which learn local and global information respectively, with a novel Pooling block of searchable pooling layers for efficient feature map reduction. Experimental results on the CIFAR-10 dataset show that our proposed search space can produce hybrid CNN-ViT architectures with superior accuracy and inference speed compared to ResNet-based tinyML models under tight model size constraints.
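As a rough illustration of the kind of search space the abstract describes (CNN blocks for local features, ViT blocks for global context, and a searchable Pooling block, all under a <100K parameter budget), the sketch below enumerates candidate configurations and filters them by parameter count. The block definitions, candidate values, and counting formulas are illustrative assumptions, not the paper's actual search space:

```python
# Hedged sketch: enumerating a hybrid CNN-ViT search space under a tinyML
# parameter budget, in the spirit of the paper's NAS setup. All block
# definitions and parameter-count formulas here are illustrative assumptions.
from itertools import product

PARAM_BUDGET = 100_000  # the paper's tinyML constraint (<100K parameters)

def conv_params(c_in, c_out, k=3):
    # Standard conv layer: k*k*c_in*c_out weights + c_out biases.
    return k * k * c_in * c_out + c_out

def vit_params(dim, mlp_ratio=2):
    # One Transformer block: QKV + output projections (4*dim^2)
    # plus a 2-layer MLP (2*mlp_ratio*dim^2); biases omitted.
    return 4 * dim * dim + 2 * mlp_ratio * dim * dim

def pooling_params(kind, dim):
    # Searchable pooling: parameter-free (max/avg) vs. learned (strided conv).
    return 0 if kind in ("max", "avg") else conv_params(dim, dim)

def count_params(channels, vit_dim, vit_depth, pool):
    total = conv_params(3, channels)             # stem
    total += conv_params(channels, vit_dim)      # local feature stage (CNN)
    total += vit_depth * vit_params(vit_dim)     # global context stage (ViT)
    total += pooling_params(pool, vit_dim)       # searchable Pooling block
    return total

# Candidate choices per searchable dimension (illustrative values).
search_space = product([8, 16, 32],                # conv channels
                       [32, 64, 96],               # transformer width
                       [1, 2],                     # transformer depth
                       ["max", "avg", "conv"])     # pooling type

feasible = [(c, d, n, p) for c, d, n, p in search_space
            if count_params(c, d, n, p) < PARAM_BUDGET]
print(f"{len(feasible)} of 54 candidates fit the <100K budget")
```

A real NAS strategy would then train and rank only the feasible subnetworks, which is how a hard size constraint prunes the search before any accuracy evaluation happens.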
Problem

Research questions and friction points this paper is trying to address.

Develops hybrid CNN-ViT NAS search space for tinyML
Optimizes architectures under tight model size constraints
Enhances accuracy and speed for image classification tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hybrid CNN-ViT search space for NAS
Novel Pooling block of searchable pooling layers
Efficient tinyML architectures under size constraints