Machine Unlearning and Continual Learning in Hybrid Resistive Memory Neuromorphic Systems

📅 2026-01-15
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenge of balancing privacy-preserving data forgetting and continual learning in resistive random-access memory (RRAM)-based neuromorphic systems, which is hindered by the absence of efficient forgetting mechanisms and the prohibitively high programming overhead of existing approaches. To overcome this, the authors propose a hardware-software co-design that introduces Low-Rank Adaptation (LoRA) into RRAM systems for the first time. By decoupling static base weights from dynamic low-rank updates and integrating a hybrid analog–digital in-memory computing architecture with an SRAM buffer, the design drastically reduces hardware programming demands. Implemented in 180 nm CMOS technology, the system achieves 147.76×, 387.95×, and 48.44× reductions in training cost, deployment overhead, and inference energy, respectively, across face recognition, speaker verification, and image generation tasks.

๐Ÿ“ Abstract
Resistive memory (RM) based neuromorphic systems can emulate synaptic plasticity and thus support continual learning, but they generally lack biologically inspired mechanisms for active forgetting, which are critical for meeting modern data privacy requirements. Algorithmic forgetting, or machine unlearning, seeks to remove the influence of specific data from trained models to prevent memorization of sensitive information and the generation of harmful content, yet existing exact and approximate unlearning schemes incur prohibitive programming overheads on RM hardware owing to device variability and iterative write-verify cycles. Analogue implementations of continual learning face similar barriers. Here we present a hardware-software co-design that enables an efficient training, deployment and inference pipeline for machine unlearning and continual learning on RM accelerators. At the software level, we introduce a low-rank adaptation (LoRA) framework that confines updates to compact parameter branches, substantially reducing the number of trainable parameters and therefore the training cost. At the hardware level, we develop a hybrid analogue-digital compute-in-memory system in which well-trained weights are stored in analogue RM arrays, whereas dynamic LoRA updates are implemented in a digital computing unit with SRAM buffer. This hybrid architecture avoids costly reprogramming of analogue weights and maintains high energy efficiency during inference. Fabricated in a 180 nm CMOS process, the prototype achieves up to a 147.76-fold reduction in training cost, a 387.95-fold reduction in deployment overhead and a 48.44-fold reduction in inference energy across privacy-sensitive tasks including face recognition, speaker authentication and stylized image generation, paving the way for secure and efficient neuromorphic intelligence at the edge.
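The decoupling the abstract describes can be sketched in a minimal NumPy model (a hypothetical illustration, not the authors' implementation): a frozen base weight matrix stands in for the analogue RM array, which is never reprogrammed after deployment, while the low-rank factors `A` and `B` stand in for the digital SRAM-buffered unit and are the only parameters touched during unlearning or continual-learning updates. The variable names, dimensions, and scaling follow the standard LoRA formulation rather than anything stated in this page.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r = 64, 32, 4  # rank r << min(d_in, d_out)

# Frozen base weights: conceptually the analogue RM array,
# written once via write-verify and never reprogrammed.
W_base = rng.standard_normal((d_out, d_in))

# Low-rank adapters: conceptually held in the digital SRAM buffer.
# Standard LoRA init: small random A, zero B, so the adapter
# starts as an exact no-op on the pretrained behaviour.
A = 0.01 * rng.standard_normal((r, d_in))
B = np.zeros((d_out, r))

def forward(x, W_base, A, B, alpha=8, r=4):
    # Effective weight is W_base + (alpha/r) * B @ A, but the two
    # terms are computed on separate datapaths and summed, mirroring
    # the hybrid analogue/digital architecture: the base matmul never
    # needs updating, only the cheap low-rank branch does.
    return x @ W_base.T + (alpha / r) * (x @ A.T) @ B.T

x = rng.standard_normal((1, d_in))
y = forward(x, W_base, A, B)

# Trainable parameters shrink from d_in*d_out to r*(d_in + d_out).
n_base = W_base.size          # 2048
n_lora = A.size + B.size      # 384
print(n_base, n_lora)
```

At rank 4 the adapter here is under a fifth the size of the base matrix, and the gap widens as layers grow; this is the mechanism behind the reduced training and reprogramming cost claimed above.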
Problem

Research questions and friction points this paper addresses.

machine unlearning, continual learning, resistive memory, data privacy, hardware overhead
Innovation

Methods, ideas, and system contributions that make the work stand out.

machine unlearning, continual learning, resistive memory, LoRA, compute-in-memory
🔎 Similar Papers
Ning Lin
Princeton University
Hurricanes, Storm Surge, Climate Adaptation, Coastal Resilience, Risk Analysis
Jichang Yang
Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, China
Yangu He
Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, China
Zijian Ye
Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, China
Kwun Hang Wong
Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, China
Xinyuan Zhang
Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, China
Songqi Wang
Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, China
Yi Li
The Hong Kong University of Science and Technology
MLLM, CV
Kemi Xu
MIIT Key Laboratory of Complex-field Intelligent Sensing, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
Leo Yu Zhang
School of Information and Communication Technology, Griffith University, QLD 4215, Australia
Xiaoming Chen
Full Professor, Institute of Computing Technology, Chinese Academy of Sciences
EDA, computer architecture
Dashan Shang
Institute of Microelectronics, Chinese Academy of Sciences, Beijing 100029, China
Han Wang
University of Hong Kong
Nanoelectronics, Nanophotonics, Electronic Materials, Semiconductor Physics and Technology
Xiaojuan Qi
Assistant Professor, The University of Hong Kong
3D Vision, Deep Learning, Artificial Intelligence, Medical Image Analysis
Zhongrui Wang
Southern University of Science and Technology
Memristor, In-memory Computing, AI accelerator