Unlocking Symbol-Level Precoding Efficiency Through Tensor Equivariant Neural Network

📅 2025-10-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the high computational complexity of symbol-level precoding (SLP) arising from constructive interference exploitation, this paper proposes an end-to-end tensor-equivariant deep learning framework. The authors first uncover the tensor-equivariant structure inherent in the closed-form optimal SLP solution; leveraging this insight, they design parameter-sharing equivariant network modules that jointly incorporate instantaneous channel state information and statistical channel knowledge and map them directly to symbol-level auxiliary variables. An attention mechanism is integrated to enhance representational capacity. The resulting method achieves linear computational complexity and an approximately 80× inference speedup over conventional SLP solvers, while maintaining strong generalization across numbers of users, symbol block lengths, and imperfect channel conditions, attaining performance closely approaching that of optimal SLP.
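
The parameter-sharing equivariant modules described above follow the general pattern of permutation-equivariant layers. Below is a minimal PyTorch sketch of such a layer over the user dimension; the class name, pooling choice, and dimensions are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class PermEquivariantLayer(nn.Module):
    """Hypothetical permutation-equivariant layer over the user axis.

    Weights are shared across users: one linear map is applied to each
    user's features and another to a pooled (permutation-invariant)
    summary, so permuting the input users permutes the output rows
    identically. Illustrative only, not the paper's module.
    """

    def __init__(self, d_in: int, d_out: int):
        super().__init__()
        self.per_user = nn.Linear(d_in, d_out)  # applied to each user independently
        self.context = nn.Linear(d_in, d_out)   # applied to the pooled summary

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, K users, d_in)
        pooled = x.mean(dim=1, keepdim=True)     # invariant to user permutations
        return self.per_user(x) + self.context(pooled)
```

Parameter sharing of this kind is plausibly what yields both the low complexity and the generalization across user numbers noted in the summary: the same weights apply regardless of how many users K are present.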

📝 Abstract
Although symbol-level precoding (SLP) based on constructive interference (CI) exploitation offers performance gains, its high complexity remains a bottleneck. This paper addresses this challenge with an end-to-end deep learning (DL) framework with low inference complexity that leverages the structure of the closed-form optimal SLP solution and its inherent tensor equivariance (TE), where TE means that a permutation of the input induces the corresponding permutation of the output. Building on computationally efficient model-based formulations and their known closed-form solutions, we analyze their relationship with linear precoding (LP) and investigate the corresponding optimality condition. We then construct a mapping from the problem formulation to the solution and prove its TE; based on this, the designed networks exhibit a specific parameter-sharing pattern that delivers low computational complexity and strong generalization. Leveraging these results, we propose the backbone of the framework with an attention-based TE module, achieving linear computational complexity. We further demonstrate that the framework applies to imperfect CSI scenarios, where we design a TE-based network that maps the CSI, statistics, and symbols to auxiliary variables. Simulation results show that the proposed framework captures substantial performance gains of optimal SLP while achieving an approximately 80× speedup over conventional methods and generalizing well across numbers of users and symbol block lengths.
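
The TE property defined in the abstract (a permutation of the input induces the corresponding permutation of the output) can be checked numerically. The snippet below is a self-contained sketch using a toy mean-pooling map as a stand-in for the paper's network.

```python
import torch

# Toy map over users: each user's features plus a shared, permutation-
# invariant context. A stand-in for the paper's network, for illustration.
def f(x: torch.Tensor) -> torch.Tensor:
    # x: (K users, d features)
    return x + x.mean(dim=0, keepdim=True)

x = torch.randn(4, 8)          # K = 4 users, d = 8 features
perm = torch.randperm(4)       # random permutation of users

# Tensor equivariance: permuting then mapping equals mapping then permuting.
assert torch.allclose(f(x[perm]), f(x)[perm], atol=1e-6)
```
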
Problem

Research questions and friction points this paper is trying to address.

Reducing high computational complexity in symbol-level precoding systems
Designing tensor equivariant neural networks for efficient precoding solutions
Maintaining performance gains under imperfect channel state information conditions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Tensor equivariant neural network for precoding
Attention-based TE module with linear complexity (see the sketch after this list)
TE network handles imperfect CSI scenarios
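
On the attention point above: self-attention without positional encodings is itself permutation-equivariant across users, which makes it a natural fit for a TE backbone. The sketch below illustrates that equivariance property only; plain attention is quadratic in the number of users, so it does not reproduce the linear-complexity module claimed by the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EquivariantSelfAttention(nn.Module):
    """Self-attention over the user axis with no positional encoding,
    hence permutation-equivariant. Illustrative sketch only: quadratic
    in K, not the paper's linear-complexity TE module."""

    def __init__(self, d: int):
        super().__init__()
        self.q = nn.Linear(d, d)
        self.k = nn.Linear(d, d)
        self.v = nn.Linear(d, d)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, K users, d)
        q, k, v = self.q(x), self.k(x), self.v(x)
        scores = q @ k.transpose(-2, -1) / (x.shape[-1] ** 0.5)
        return F.softmax(scores, dim=-1) @ v  # permuting users permutes output rows
```
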
Jinshuo Zhang
National Mobile Communications Research Laboratory, Southeast University, Nanjing 210096, China, and also with Purple Mountain Laboratories, Nanjing 211100, China
Yafei Wang
National Mobile Communications Research Laboratory, Southeast University, Nanjing 210096, China, and also with Purple Mountain Laboratories, Nanjing 211100, China
Xinping Yi
Southeast University
Information Theory · Communications · Trustworthy AI · Graph Machine Learning
Wenjin Wang
National Mobile Communications Research Laboratory, Southeast University, Nanjing 210096, China, and also with Purple Mountain Laboratories, Nanjing 211100, China
Shi Jin
National Mobile Communications Research Laboratory, Southeast University, Nanjing 210096, China
Symeon Chatzinotas
Full Professor | IEEE Fellow | SIGCOM Head, SnT, University of Luxembourg
Wireless Communications · Non-Terrestrial Networks · Internet of Things · 6G · Quantum Communications
Björn Ottersten
Professor, SnT, University of Luxembourg, KTH Royal Institute of Technology, Stockholm, Sweden
Electrical Engineering · Signal Processing · Communications · Wireless Communications