Fault-Free Analog Computing with Imperfect Hardware

📅 2025-07-15
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Memristor-based analog in-memory computing suffers from low accuracy and reliability due to device failures and parameter drift; existing redundancy or retraining strategies fail to simultaneously satisfy high precision, fixed-weight deployment, and data privacy requirements. This paper proposes a fault-immune paradigm based on indirect matrix representation: the target matrix is decomposed into the product of two tunable submatrices, mapped onto defective hardware, and mathematically reconstructed so that faulty cells are bypassed, eliminating the need for hardware redundancy or model retraining. Integrating adaptive programming with optimization algorithms, the approach achieves >99.999% similarity to the ideal DFT matrix under a 39% defect rate, reduces communication bit error rate by 56x, and improves computational density and energy efficiency by 196% and 179%, respectively. To the best of our knowledge, this is the first work to achieve ultra-high-precision analog computation under high defect rates, thereby overcoming yield limitations in memristive hardware.
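The core idea above can be sketched numerically. The snippet below is a minimal illustration, not the paper's algorithm: it assumes a target matrix `W` represented indirectly as a product `A @ B` of two over-parameterized factors, models faulty devices as cells stuck at random values, and tunes only the healthy (programmable) cells by masked gradient descent so the product still reconstructs `W`. All names and the fault model are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target matrix W (n x n); factors are n x r and r x n, with r > n giving
# the slack that lets healthy cells compensate for stuck ones.
n, r = 8, 12
W = rng.standard_normal((n, n))    # stand-in target (e.g. a DFT matrix in the paper)

A = rng.standard_normal((n, r))
B = rng.standard_normal((r, n))

# Hypothetical fault model: ~39% of cells per factor are stuck at random values.
def inject_faults(M, rate=0.39):
    stuck = rng.random(M.shape) < rate
    M[stuck] = rng.standard_normal(M.shape)[stuck]
    return stuck

stuck_A = inject_faults(A)
stuck_B = inject_faults(B)
err0 = np.linalg.norm(A @ B - W) / np.linalg.norm(W)

# Gradient descent on ||A @ B - W||_F^2, masking updates to faulty cells.
lr = 0.01
for _ in range(5000):
    E = A @ B - W                  # reconstruction error
    gA = E @ B.T
    gB = A.T @ E
    gA[stuck_A] = 0.0              # stuck cells cannot be reprogrammed
    gB[stuck_B] = 0.0
    A -= lr * gA
    B -= lr * gB

err = np.linalg.norm(A @ B - W) / np.linalg.norm(W)
print(f"relative error: {err0:.2f} -> {err:.2e}")
```

The optimization drives the reconstruction error down without ever touching the stuck cells, which is the sense in which the representation "bypasses" faults rather than repairing or replacing them.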

📝 Abstract
The growing demand for edge computing and AI drives research into analog in-memory computing using memristors, which overcome data movement bottlenecks by computing directly within memory. However, device failures and variations critically limit analog systems' precision and reliability. Existing fault-tolerance techniques, such as redundancy and retraining, are often inadequate for high-precision applications or scenarios requiring fixed matrices and privacy preservation. Here, we introduce and experimentally demonstrate a fault-free matrix representation where target matrices are decomposed into products of two adjustable sub-matrices programmed onto analog hardware. This indirect, adaptive representation enables mathematical optimization to bypass faulty devices and eliminate differential pairs, significantly enhancing computational density. Our memristor-based system achieved >99.999% cosine similarity for a Discrete Fourier Transform matrix despite a 39% device fault rate, a fidelity unattainable with conventional direct representation, which fails even at a 0.01% fault rate (a single faulty device). We demonstrated a 56-fold bit-error-rate reduction in wireless communication, along with >196% computational density and 179% energy efficiency improvements compared to state-of-the-art techniques. This method, validated on memristors, applies broadly to emerging memories and non-electrical computing substrates, showing that device yield is no longer the primary bottleneck in analog computing hardware.
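The fidelity figure reported above can be illustrated with the usual flattened-matrix cosine-similarity metric. The snippet below is a hedged sketch: the ideal DFT matrix is standard, but the small residual programming error added to form `F_hat` is a hypothetical stand-in for a hardware-reconstructed matrix, not measured data.

```python
import numpy as np

# Ideal N-point unitary DFT matrix (complex); real hardware would map
# real and imaginary parts separately.
N = 16
k = np.arange(N)
F = np.exp(-2j * np.pi * np.outer(k, k) / N) / np.sqrt(N)

# Hypothetical reconstructed matrix: ideal plus a small residual error.
rng = np.random.default_rng(1)
F_hat = F + 1e-4 * (rng.standard_normal(F.shape) + 1j * rng.standard_normal(F.shape))

# Cosine similarity between the flattened matrices; the paper reports
# >99.999% for its experimentally reconstructed DFT matrix.
a, b = F.ravel(), F_hat.ravel()
cos_sim = np.abs(np.vdot(a, b)) / (np.linalg.norm(a) * np.linalg.norm(b))
print(f"cosine similarity: {cos_sim:.6f}")
```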
Problem

Research questions and friction points this paper is trying to address.

Overcoming device failures in analog computing systems
Enhancing precision and reliability in memristor-based computing
Eliminating need for redundancy in high-precision applications
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decomposing matrices into adjustable sub-matrices
Bypassing faulty devices via optimization
Achieving high fidelity with device faults
Zhicheng Xu
Ph.D. student, The University of Hong Kong
In-memory computing, AI accelerator
Jiawei Liu
Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong SAR, China
Sitao Huang
Assistant Professor of EECS, University of California Irvine
Hardware Acceleration, High-Level Synthesis, FPGA, Parallel Computing, GPU
Zefan Li
Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong SAR, China
Shengbo Wang
Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong SAR, China
Bo Wen
Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong SAR, China
Ruibin Mao
The University of Hong Kong
In-memory circuit design, neuromorphic computing, AI edge devices, memristors
Mingrui Jiang
PhD candidate, The University of Hong Kong
Giacomo Pedretti
Research Scientist, Hewlett Packard Laboratories
AI accelerators, In-memory computing, Neuromorphic Computing, Analog computing, Emerging memories
Jim Ignowski
Hewlett Packard Labs, Hewlett Packard Enterprise, Milpitas, CA, USA
Kaibin Huang
Professor and Dept. Head, University of Hong Kong; NAI Fellow; IEEE Fellow; Highly Cited Researcher
Machine Learning, Mobile Edge Computing, Wireless Communications, Wireless Power Transfer
Can Li
Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong SAR, China; Center for Advanced Semiconductor and Integrated Circuits, The University of Hong Kong, Hong Kong SAR, China