CIM-NET: A Video Denoising Deep Neural Network Model Optimized for Computing-in-Memory Architectures

📅 2025-05-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the low inference efficiency and poor energy efficiency of real-time video denoising on edge devices built around Compute-in-Memory (CIM) architectures, this paper proposes a hardware-algorithm co-design framework. It introduces CIM-NET, presented as the first CIM-aware deep neural network for this task, and CIM-CONV, a novel pseudo-convolutional operator that combines sliding-window decomposition with fully connected transformations to exploit the massive parallelism of matrix-vector multiplication (MVM) in CIM crossbar arrays. Compared to FastDVDnet, at a stride of 8 the method reduces MVM operations to 1/77 of the baseline (a 98.7% reduction), with only a marginal PSNR degradation of 0.45 dB (35.11 dB vs. 35.56 dB), while significantly improving inference speed and energy efficiency. The work systematically addresses the architectural mismatch between DNN models and CIM hardware constraints, delivering a lightweight, efficient, and deployable solution for CIM-accelerated edge video processing.
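The two headline figures are consistent with each other: reducing the MVM count to 1/77 of the baseline is the same as the reported ~98.7% reduction. A one-line check:

```python
# Reducing MVM operations to 1/77 of the baseline corresponds to
# a reduction of 1 - 1/77, which rounds to 98.7%.
reduction = 1 - 1 / 77
print(f"{reduction:.1%}")  # prints "98.7%"
```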

📝 Abstract
While deep neural network (DNN)-based video denoising has demonstrated significant performance, deploying state-of-the-art models on edge devices remains challenging due to stringent real-time and energy efficiency requirements. Computing-in-Memory (CIM) chips offer a promising solution by integrating computation within memory cells, enabling rapid matrix-vector multiplication (MVM). However, existing DNN models are often designed without considering CIM architectural constraints, thus limiting their acceleration potential during inference. To address this, we propose a hardware-algorithm co-design framework incorporating two innovations: (1) a CIM-aware architecture, CIM-NET, optimized for large receptive field operation and CIM's crossbar-based MVM acceleration; and (2) a pseudo-convolutional operator, CIM-CONV, used within CIM-NET to integrate slide-based processing with fully connected transformations for high-quality feature extraction and reconstruction. This framework significantly reduces the number of MVM operations, improving inference speed on CIM chips while maintaining competitive performance. Experimental results indicate that, compared to the conventional lightweight model FastDVDnet, CIM-NET substantially reduces MVM operations with a slight decrease in denoising performance. With a stride value of 8, CIM-NET reduces MVM operations to 1/77th of the original, while maintaining competitive PSNR (35.11 dB vs. 35.56 dB).
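The CIM-CONV idea of combining sliding-window decomposition with a fully connected transform can be sketched in plain NumPy. The patch size, stride, weight layout, and shapes below are illustrative assumptions, not the paper's exact implementation; the point is that each window is processed by a single MVM, the operation a CIM crossbar accelerates natively:

```python
import numpy as np

def cim_conv(frame, weights, patch=8, stride=8):
    """Pseudo-convolution sketch: each (patch x patch) window is flattened
    and mapped through one fully connected transform, i.e. one MVM per
    window. `weights` has shape (out_dim, patch * patch)."""
    h, w = frame.shape
    out = []
    for y in range(0, h - patch + 1, stride):
        row = []
        for x in range(0, w - patch + 1, stride):
            vec = frame[y:y+patch, x:x+patch].ravel()  # sliding-window decomposition
            row.append(weights @ vec)                  # one MVM per window
        out.append(row)
    return np.array(out)  # shape: (n_rows, n_cols, out_dim)

# With stride equal to the patch size, windows do not overlap, which is
# what slashes the MVM count relative to dense stride-1 convolution.
feat = cim_conv(np.random.rand(64, 64), np.random.rand(16, 64))
print(feat.shape)  # prints "(8, 8, 16)"
```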
Problem

Research questions and friction points this paper is trying to address.

Optimizing video denoising DNNs for CIM architectures
Reducing MVM operations in CIM-based inference
Balancing denoising performance with energy efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

CIM-Aware Architecture optimized for large receptive field
Pseudo-convolutional operator CIM-CONV for feature extraction
Reduces MVM operations significantly on CIM chips
Shan Gao
ZGC Institute of Ubiquitous-X Innovation and Applications, China Mobile Research Institute, Beijing, China
Zhiqiang Wu
Brage Golding Distinguished Professor of Electrical Engineering, Wright State University
Wireless communications, signal processing, cognitive radio, artificial intelligence, electronic warfare
Yawen Niu
China Mobile Research Institute, Beijing, China
Xiaotao Li
China Mobile Research Institute, Beijing, China
Qingqing Xu
China Mobile Research Institute, Beijing, China