MotiMem: Motion-Aware Approximate Memory for Energy-Efficient Neural Perception in Autonomous Vehicles

πŸ“… 2026-03-27
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
This work addresses the severe memory-wall problem in autonomous driving caused by high-resolution sensors, where data-movement energy far exceeds computation energy and conventional image compression fails to reduce bus dynamic power because it is semantically unaware. To tackle this, the paper proposes MotiMem, a hardware-software co-designed approximate memory interface that integrates inter-frame motion consistency with bit-level sparse coding. By leveraging lightweight 2D motion propagation, dynamic region-of-interest identification, and a hybrid sparse-coding strategy based on adaptive inversion and truncation, MotiMem optimizes the energy-accuracy trade-off for neural perception tasks. Experiments across nuScenes, Waymo, and KITTI on 16 detection models show that MotiMem reduces dynamic memory-interface energy by 43% on average while retaining 93% of object detection accuracy, significantly outperforming standard codecs such as JPEG and WebP.
πŸ“ Abstract
High-resolution sensors are critical for robust autonomous perception but impose a severe memory wall on battery-constrained electric vehicles. In these systems, data movement energy often outweighs computation energy. Traditional image compression is ill-suited: it is semantically blind and optimizes for storage size rather than bus switching activity. We propose MotiMem, a hardware-software co-designed interface. Exploiting temporal coherence, MotiMem uses lightweight 2D Motion Propagation to dynamically identify Regions of Interest (RoI). Complementing this, a Hybrid Sparsity-Aware Coding scheme leverages adaptive inversion and truncation to induce bit-level sparsity. Extensive experiments across nuScenes, Waymo, and KITTI with 16 detection models demonstrate that MotiMem reduces memory-interface dynamic energy by approximately 43% while retaining approximately 93% of the object detection accuracy, establishing a new Pareto frontier significantly superior to standard codecs such as JPEG and WebP.
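The abstract does not spell out the coding scheme, but "adaptive inversion and truncation to induce bit-level sparsity" resembles classic bus-invert coding combined with low-bit truncation. A minimal sketch of that general idea, assuming an 8-bit pixel bus; the function name, word width, and truncation depth are illustrative assumptions, not details from the paper:

```python
def encode_word(prev: int, cur: int, truncate_bits: int = 2, width: int = 8):
    """Sketch of truncation + bus-invert coding (an assumption about
    MotiMem's scheme, not its actual implementation).

    1. Zero the low-order bits of `cur` (approximation: small value
       error, fewer bit toggles).
    2. If transmitting the truncated word would flip more than half
       the bus lines relative to `prev`, send its complement plus an
       invert flag instead, halving worst-case switching activity.
    Returns (word_on_bus, invert_flag)."""
    full = (1 << width) - 1
    mask = full & ~((1 << truncate_bits) - 1)
    cur_t = cur & mask
    toggles = bin((prev ^ cur_t) & full).count("1")  # lines that would flip
    if toggles > width // 2:
        return (~cur_t) & full, True
    return cur_t, False
```

A receiver would re-invert the word whenever the flag is set, then treat the zeroed low bits as acceptable approximation error. For example, with `prev = 0b00000000` and `cur = 0b11111100`, six lines would flip, so the encoder sends the complement `0b00000011` (two set bits) with the flag raised.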
Problem

Research questions and friction points this paper is trying to address.

memory wall
energy efficiency
autonomous vehicles
data movement energy
neural perception
Innovation

Methods, ideas, or system contributions that make the work stand out.

Motion-Aware Memory
Hybrid Sparsity-Aware Coding
Regions of Interest (RoI)
Energy-Efficient Neural Perception
Hardware-Software Co-Design
πŸ‘₯ Authors

Haohua Que (University of Georgia)
Mingkai Liu (Peking University)
Jiayue Xie (Tsinghua University)
Haojia Gao (Tsinghua University)
Jiajun Sun (Tsinghua University)
Hongyi Xu (Tsinghua University; Infinity Robotics)
Handong Yao (University of Georgia)
Fei Qiao (Tsinghua University)