🤖 AI Summary
This work addresses the challenge of real-time cloud-based image inference for ultra-resource-constrained IoT devices operating over LPWANs, which are characterized by ultra-low bandwidth, high packet loss rates, and extremely low duty cycles. It proposes the first lightweight, content-aware progressive coding framework, LimitNet: a deep learning–based progressive encoder dynamically prioritizes the transmission of semantically critical bits, and a content-sensitive bit allocation mechanism together with an ultra-low-overhead deployment strategy for Cortex-M7 microcontrollers lets cloud inference begin as soon as partial data arrives. Evaluated on ImageNet-1000, CIFAR-100, and COCO, the method achieves average accuracy gains of 14.01 and 18.01 percentage points and a 0.1 higher mAP@0.5, respectively, while reducing bandwidth consumption by 61.24%, 83.68%, and 42.25%. Encoding time increases by only 4% over JPEG at fixed quality, overcoming the fundamental limitation of conventional non-progressive codecs, which fail to decode meaningfully under partial reception.
📝 Abstract
IoT devices have limited hardware capabilities and are often deployed in remote areas. Consequently, advanced vision models exceed such devices' processing and storage capabilities, requiring these tasks to be offloaded to the cloud. However, remote areas often rely on LPWAN technology with limited bandwidth, high packet loss rates, and extremely low duty cycles, which makes fast offloading for time-sensitive inference challenging. Today's approaches that are deployable on weak devices generate a non-progressive bit stream, so their decoding quality degrades sharply when, due to limited bandwidth or packet losses, the data is only partially available in the cloud at a deadline. In this paper, we introduce LimitNet, a progressive, content-aware image compression model designed for extremely weak devices and networks. LimitNet's lightweight progressive encoder prioritizes critical data during transmission based on the content of the image, which gives the cloud the opportunity to run inference even with partial data availability. Experimental results demonstrate that, on average, LimitNet achieves 14.01 percentage points (p.p.) higher accuracy on ImageNet1000, 18.01 p.p. on CIFAR100, and 0.1 higher mAP@0.5 on COCO compared to SOTA. On average, LimitNet also saves 61.24% bandwidth on ImageNet1000, 83.68% on CIFAR100, and 42.25% on COCO compared to SOTA, while incurring only 4% more encoding time than JPEG (with fixed quality) on an STM32F7 (Cortex-M7).
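To make the core idea concrete, the following is a minimal, illustrative sketch of content-aware progressive transmission: latent coefficients are ordered by an importance score so that any received prefix of the bit stream contains the most inference-relevant data first, and the decoder reconstructs from whatever packets have arrived. All function names (`progressive_packets`, `reconstruct`) and the use of a simple magnitude-style importance score are hypothetical simplifications, not LimitNet's actual learned encoder.

```python
import numpy as np

def progressive_packets(latent, importance, packet_size=64):
    """Serialize latent coefficients in decreasing order of importance,
    split into fixed-size packets. Any prefix of the packet list then
    carries the most critical content first (illustrative sketch only)."""
    order = np.argsort(importance.ravel())[::-1]  # most important first
    stream = latent.ravel()[order]
    packets = [stream[i:i + packet_size]
               for i in range(0, stream.size, packet_size)]
    return packets, order

def reconstruct(packets, order, shape):
    """Decode from however many packets arrived; coefficients that were
    never received are simply left at zero."""
    flat = np.zeros(int(np.prod(shape)), dtype=float)
    if packets:
        received = np.concatenate(packets)
        flat[order[:received.size]] = received
    return flat.reshape(shape)

# Simulate partial reception over a lossy, low-bandwidth link.
rng = np.random.default_rng(0)
latent = rng.normal(size=(8, 8))          # stand-in for an encoded image
importance = np.abs(latent)               # stand-in for a content score
packets, order = progressive_packets(latent, importance, packet_size=16)

partial = reconstruct(packets[:2], order, latent.shape)  # only half arrived
```

With a non-progressive codec, losing half the stream typically breaks decoding entirely; here the partial reconstruction still retains the highest-importance coefficients, which is what lets the cloud run inference before the deadline.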