🤖 AI Summary
This work addresses the high memory and power overhead of conventional neural receivers in multi-code-rate scenarios, which typically require storing a full set of weights per code rate, hindering practical deployment. To overcome this limitation, the authors propose a convolutional neural network architecture based on Low-Rank Adaptation (LoRA), in which a shared backbone network is frozen and only lightweight, code-rate-specific adapter modules are trained. The design is optimized through end-to-end training under the 3GPP Clustered Delay Line (CDL) channel model and implemented in a 22 nm CMOS process. Experimental results demonstrate that, while supporting three distinct code rates, the proposed approach matches or exceeds the performance of fully retrained networks and achieves over 65% reduction in silicon area and up to 15% lower power consumption.
📝 Abstract
Neural-network-based receivers have recently demonstrated superior system-level performance compared to traditional receivers. However, their practicality is limited by high memory and power requirements, as a separate weight set must be stored for each code rate. To address this challenge, we propose LOREN, a Low-Rank-Based Code-Rate Adaptation Neural Receiver that achieves adaptability with minimal overhead. LOREN integrates lightweight low-rank adaptation adapters (LOREN adapters) into convolutional layers, freezing a shared base network while training only small adapters per code rate. An end-to-end training framework over 3GPP CDL channels ensures robustness across realistic wireless environments. LOREN achieves comparable or superior performance relative to fully retrained base neural receivers. The hardware implementation of LOREN in 22 nm technology shows more than 65% savings in silicon area and up to 15% power reduction when supporting three code rates.
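To make the storage argument concrete, the sketch below (an illustrative reconstruction, not the authors' code; all layer sizes and the rank are assumed, not taken from the paper) counts parameters for a LoRA-style convolutional adapter: the frozen base kernel of shape (out_ch, in_ch, k, k) is augmented by a low-rank update B·A, and only A and B are stored per code rate.

```python
def conv_param_count(out_ch, in_ch, k):
    """Parameters in a full conv kernel (bias ignored)."""
    return out_ch * in_ch * k * k

def lora_adapter_param_count(out_ch, in_ch, k, rank):
    """Parameters in a rank-`rank` adapter:
    B has shape (out_ch, rank), A has shape (rank, in_ch*k*k)."""
    return out_ch * rank + rank * in_ch * k * k

def matmul(B, A):
    """Plain-Python matrix product, used to form the low-rank
    update delta_W = B @ A (reshaped onto the kernel at inference)."""
    rows, inner, cols = len(B), len(A), len(A[0])
    return [[sum(B[i][t] * A[t][j] for t in range(inner))
             for j in range(cols)] for i in range(rows)]

# Hypothetical example: a 64-to-64 conv with 3x3 kernels, rank-4
# adapters, three supported code rates.
full    = conv_param_count(64, 64, 3)               # per fully retrained rate
adapter = lora_adapter_param_count(64, 64, 3, 4)    # per adapted rate
shared    = full + 3 * adapter   # one frozen base + three adapters
retrained = 3 * full             # three independent full networks
print(full, adapter, shared, retrained)  # → 36864 2560 44544 110592
```

Under these assumed sizes, storing one frozen base plus three adapters needs roughly 60% fewer weights than three fully retrained networks, which is the mechanism behind the reported silicon-area savings (the paper's 65% figure comes from its actual network and implementation, not from this toy layer).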