In-Memory Computing Enabled Deep MIMO Detection to Support Ultra-Low-Latency Communications

📅 2025-08-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Problem: Conventional deep-unfolding MIMO detectors cannot meet 6G ultra-low-latency (≤0.1 ms) requirements on digital hardware, and naive memristor-based implementations suffer from slow array reprogramming under channel variation and poor robustness to device-level statistical noise. Method: This work proposes a software–hardware co-designed in-memory computing (IMC) detection architecture, the deep in-memory MIMO (IM-MIMO) detector. It decomposes each computational block of the detection network into channel-dependent and channel-independent submodules to minimize IMC array reconfiguration overhead, and introduces a customized training method resilient to memristor non-idealities, including stochastic conductance noise and programming variability. Matrix–vector multiplications are executed at nanosecond scale on memristor-based IMC arrays, enabling high accuracy with low hardware complexity in massive-MIMO scenarios. Contribution/Results: Experiments demonstrate a two-to-three order-of-magnitude reduction in end-to-end latency and up to 3 dB BER improvement over conventional software–hardware decoupled approaches, advancing real-time signal processing for 6G systems.
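To make the decomposition concrete, here is a minimal NumPy sketch of an unfolded gradient-style detector split along the lines the summary describes: the channel-dependent matched filter and Gram matrix would live on memristor arrays (reprogrammed only when the channel changes), while per-layer parameters are channel-independent and programmed once. All class and function names, the additive Gaussian noise model, and the simple tanh update are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

class ChannelDependentMVM:
    """Memristor arrays holding the Gram matrix H^T H and matched filter H^T.
    Reprogrammed only when the channel H changes, since array writes are far
    slower than analog reads (the overhead the architecture minimizes)."""
    def __init__(self, H, sigma_prog=0.02):
        # hypothetical noise model: additive Gaussian programming error
        self.G = H.T @ H + sigma_prog * rng.standard_normal((H.shape[1],) * 2)
        self.Ht = H.T + sigma_prog * rng.standard_normal(H.T.shape)

    def matched_filter(self, y):
        return self.Ht @ y      # one analog MVM, nanosecond-scale in hardware

    def gram_mvm(self, x):
        return self.G @ x       # one analog MVM per unfolded layer

class ChannelIndependentBlock:
    """Trained per-layer parameters; programmed once and reused for every
    channel realization, so they never trigger reprogramming."""
    def __init__(self, n, alpha=0.05):
        self.alpha = alpha      # step size (would be learned in training)
        self.W = np.eye(n)      # placeholder for a trained linear layer

    def forward(self, x, grad):
        # gradient step on ||Hx - y||^2 followed by a soft symbol mapping
        return np.tanh(self.W @ (x - self.alpha * grad))

def im_mimo_detect(H, y, n_layers=8):
    cd = ChannelDependentMVM(H)                 # slow write, once per channel
    blocks = [ChannelIndependentBlock(H.shape[1]) for _ in range(n_layers)]
    Hty = cd.matched_filter(y)
    x = np.zeros(H.shape[1])
    for blk in blocks:
        grad = cd.gram_mvm(x) - Hty             # gradient of the LS objective
        x = blk.forward(x, grad)
    return np.sign(x)                           # hard BPSK decision

# toy 8x4 BPSK example
H = rng.standard_normal((8, 4))
s = rng.choice([-1.0, 1.0], size=4)
y = H @ s + 0.05 * rng.standard_normal(8)
print("detected:", im_mimo_detect(H, y), "sent:", s)
```

The point of the split is that the slow write path is exercised only once per channel coherence interval, while every fast analog MVM in the layer loop reuses already-programmed arrays.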

📝 Abstract
The development of sixth-generation (6G) mobile networks imposes unprecedented latency and reliability demands on multiple-input multiple-output (MIMO) communication systems, a key enabler of high-speed radio access. Recently, deep unfolding-based detectors, which map iterative algorithms onto neural network architectures, have emerged as a promising approach, combining the strengths of model-driven and data-driven methods to achieve high detection accuracy with relatively low complexity. However, algorithmic innovation alone is insufficient; software-hardware co-design is essential to meet the extreme latency requirements of 6G (i.e., 0.1 milliseconds). This motivates us to propose leveraging in-memory computing, which is an analog computing technology that integrates memory and computation within memristor circuits, to perform the intensive matrix-vector multiplication (MVM) operations inherent in deep MIMO detection at the nanosecond scale. Specifically, we introduce a novel architecture, called the deep in-memory MIMO (IM-MIMO) detector, characterized by two key features. First, each of its cascaded computational blocks is decomposed into channel-dependent and channel-independent neural network modules. Such a design minimizes the latency of memristor reprogramming in response to channel variations, which significantly exceeds computation time. Second, we develop a customized detector-training method that exploits prior knowledge of memristor-value statistics to enhance robustness against programming noise. Furthermore, we conduct a comprehensive analysis of the IM-MIMO detector's performance, evaluating detection accuracy, processing latency, and hardware complexity. Our study quantifies detection error as a function of various factors, including channel noise, memristor programming noise, and neural network size.
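As a rough illustration of the device non-idealities the abstract refers to, the following sketch models a signed-weight crossbar MVM with multiplicative programming noise and additive read noise. The differential-pair conductance encoding and both noise magnitudes are common modeling assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def memristor_mvm(W, x, sigma_prog=0.05, sigma_read=0.01):
    """Model an analog crossbar MVM y = Wx with device non-idealities.
    Signed weights use a differential pair of conductances (G+ minus G-);
    both noise levels are illustrative assumptions."""
    Gp = np.clip(W, 0, None)            # positive conductance array
    Gn = np.clip(-W, 0, None)           # negative conductance array
    # programming noise: written conductance deviates from its target value
    Gp = Gp * (1 + sigma_prog * rng.standard_normal(Gp.shape))
    Gn = Gn * (1 + sigma_prog * rng.standard_normal(Gn.shape))
    y = (Gp - Gn) @ x
    # read noise on the summed bit-line currents
    return y + sigma_read * np.linalg.norm(x) * rng.standard_normal(y.shape)

W = rng.standard_normal((16, 16)) / 4
x = rng.standard_normal(16)
err = np.linalg.norm(memristor_mvm(W, x) - W @ x) / np.linalg.norm(W @ x)
print(f"relative MVM error: {err:.3f}")
```

Sweeping sigma_prog in a model like this is one way to study how detection error scales with programming noise, which is the kind of quantification the abstract describes.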
Problem

Research questions and friction points this paper is trying to address.

Addressing ultra-low-latency demands for 6G MIMO communication systems
Overcoming hardware limitations in deep learning-based MIMO detection
Mitigating memristor programming noise in analog in-memory computing implementations
Innovation

Methods, ideas, or system contributions that make the work stand out.

In-memory computing for ultra-low-latency MIMO detection
Decomposed channel-dependent and channel-independent neural modules
Customized training that exploits memristor noise statistics for robustness (see the sketch after this list)
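A minimal sketch of the noise-aware training idea named above, assuming PyTorch and a simple multiplicative Gaussian model for programming noise: perturbing the weights during training with the statistics expected on the hardware steers the optimizer toward solutions that remain accurate after noisy programming. The layer name and noise model are illustrative, not the paper's exact method.

```python
import torch

torch.manual_seed(0)
SIGMA_PROG = 0.05  # assumed std of relative programming error (illustrative)

class NoiseAwareLinear(torch.nn.Linear):
    """Linear layer that perturbs its weights during training with the same
    statistics expected from memristor programming, so the learned weights
    stay accurate after being written to noisy analog hardware."""
    def forward(self, x):
        w = self.weight
        if self.training:
            # multiplicative Gaussian noise, matching the assumed device model
            w = w * (1 + SIGMA_PROG * torch.randn_like(w))
        return torch.nn.functional.linear(x, w, self.bias)

# toy task: fit a random linear map through the noise-injected layer
layer = NoiseAwareLinear(8, 8, bias=False)
opt = torch.optim.Adam(layer.parameters(), lr=1e-2)
W_true = torch.randn(8, 8) / 8 ** 0.5
for step in range(500):
    x = torch.randn(64, 8)
    loss = torch.nn.functional.mse_loss(layer(x), x @ W_true.T)
    opt.zero_grad()
    loss.backward()
    opt.step()
print(f"final training loss: {loss.item():.4f}")
```

Training against injected noise, rather than correcting for it at inference time, keeps the deployed forward pass purely analog, which is what preserves the nanosecond-scale MVM latency.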