AI Summary
This work addresses the severe memory wall problem in autonomous driving caused by high-resolution sensors, where data-movement energy far exceeds computation energy and conventional image compression fails to reduce bus dynamic power because it is semantically unaware. To tackle this, the paper proposes MotiMem, a hardware-software co-designed approximate memory interface that uniquely combines inter-frame motion consistency with bit-level sparse coding. By leveraging lightweight 2D motion propagation, dynamic region-of-interest identification, and an adaptive inverted-truncation-based hybrid sparse coding strategy, MotiMem optimizes the energy-accuracy trade-off for neural perception tasks. Experiments across the nuScenes, Waymo, and KITTI datasets on 16 detection models show that MotiMem reduces dynamic memory-interface energy by 43% on average while retaining 93% of object detection accuracy, significantly outperforming standard codecs such as JPEG and WebP.
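The 2D motion propagation step described above can be sketched as follows: last-frame detection boxes are shifted by per-object motion vectors and rasterized into a binary region-of-interest mask, so that only RoI pixels keep full precision. The box format, motion representation, and grid resolution here are illustrative assumptions, not MotiMem's actual interface:

```python
def propagate_roi(prev_boxes, motion, h, w):
    """Shift last-frame boxes (x0, y0, x1, y1) by 2D motion vectors
    (dx, dy) and rasterize into a binary RoI mask.  Pixels inside the
    mask would be transferred at full precision; the rest may be
    approximated.  Hypothetical sketch, not the paper's real API."""
    mask = [[0] * w for _ in range(h)]
    for (x0, y0, x1, y1), (dx, dy) in zip(prev_boxes, motion):
        # Translate the box and clamp it to the frame bounds.
        nx0, ny0 = max(0, x0 + dx), max(0, y0 + dy)
        nx1, ny1 = min(w, x1 + dx), min(h, y1 + dy)
        for y in range(ny0, ny1):
            for x in range(nx0, nx1):
                mask[y][x] = 1
    return mask

# A 2x2 box at (1, 1) moving one pixel right in a 4x4 frame:
roi = propagate_roi([(1, 1, 3, 3)], [(1, 0)], 4, 4)
```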
Abstract
High-resolution sensors are critical for robust autonomous perception but impose a severe memory wall on battery-constrained electric vehicles. In these systems, data-movement energy often outweighs computation energy. Traditional image compression is ill-suited: it is semantically blind and optimizes for storage size rather than bus switching activity. We propose MotiMem, a hardware-software co-designed interface. Exploiting temporal coherence, MotiMem uses lightweight 2D Motion Propagation to dynamically identify Regions of Interest (RoI). Complementing this, a Hybrid Sparsity-Aware Coding scheme leverages adaptive inversion and truncation to induce bit-level sparsity. Extensive experiments across nuScenes, Waymo, and KITTI with 16 detection models demonstrate that MotiMem reduces memory-interface dynamic energy by approximately 43% while retaining approximately 93% of object detection accuracy, establishing a new Pareto frontier significantly superior to standard codecs like JPEG and WebP.
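The inversion-plus-truncation idea can be sketched as below, under the common bus-invert assumption that interface dynamic energy scales with the number of one-bits (or toggles) transmitted. The byte granularity, truncation depth, 4-bit inversion threshold, and flag convention are all illustrative assumptions, not the paper's actual coding format:

```python
def popcount(x):
    """Number of one-bits in x."""
    return bin(x).count("1")

def encode_byte(b, trunc_bits=2):
    """Zero the low bits (lossy approximation outside RoIs), then invert
    the byte when inversion reduces its one-bit count.  Returns the
    payload and a 1-bit inversion flag (illustrative sketch only)."""
    t = b & ~((1 << trunc_bits) - 1) & 0xFF
    if popcount(t) > 4:          # inversion yields fewer one-bits
        return (~t) & 0xFF, 1
    return t, 0

def decode_byte(payload, flag):
    """Undo the inversion; truncated low bits are lost by design."""
    return (~payload) & 0xFF if flag else payload

# A bright pixel (0xFB) truncates to 0xF8, which is dense in one-bits,
# so it is transmitted inverted as the sparse byte 0x07 with flag=1.
payload, flag = encode_byte(0xFB)
```

Truncation bounds the per-pixel error by `2**trunc_bits - 1`, while inversion caps the one-bit count of any payload at 4, which is what drives down bus switching energy in this simplified model.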