AI Summary
This work addresses a critical limitation of conventional deep-learning-based precoding methods for multi-user MISO systems: they ignore that the magnitudes of inner products between channel vectors and precoding vectors are invariant under global phase rotation, which leads to inefficient learning and poor generalization. To overcome this, the paper introduces complex projective space into deep precoding design for the first time, explicitly modeling and eliminating global phase redundancy through two parameterization strategies: real-valued embedding and complex hyperspherical coordinates. This enables the neural network to learn geometrically aligned and physically distinguishable mappings. Experimental results demonstrate that, with nearly unchanged model complexity, the proposed approach significantly improves both sum-rate performance and generalization capability.
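The phase-redundancy removal described above can be sketched numerically. The helper below maps a complex vector to one canonical representative of its complex-projective-space equivalence class by rotating away the global phase (here, by making the first entry real and non-negative). This particular normalization is an illustrative choice, not necessarily the paper's exact parameterization:

```python
import numpy as np

def cps_representative(v: np.ndarray) -> np.ndarray:
    """Return a canonical representative of v's equivalence class in
    complex projective space: rotate the global phase so the first
    entry is real and non-negative (illustrative normalization)."""
    return v * np.exp(-1j * np.angle(v[0]))

rng = np.random.default_rng(1)
h = rng.standard_normal(4) + 1j * rng.standard_normal(4)

# Any global phase rotation of h maps to the same representative,
# so the redundancy is removed before the network sees the input.
r1 = cps_representative(h)
r2 = cps_representative(np.exp(1j * 0.9) * h)
print(np.allclose(r1, r2))  # True
```

Feeding such representatives (rather than raw real/imaginary parts) to the network means that channels differing only by a global phase produce identical inputs.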
Abstract
Deep-learning (DL)-based precoding in multi-user multiple-input single-output (MU-MISO) systems involves training DL models to map features derived from channel coefficients to labels derived from precoding weights. Traditionally, complex-valued channel and precoder coefficients are parameterized using either their real and imaginary components or their amplitude and phase. However, precoding performance depends on the magnitudes of inner products between channel and precoding vectors, which are invariant to global phase rotations. Conventional representations fail to exploit this symmetry, leading to inefficient learning and degraded generalization. To address this, we propose a DL framework based on complex projective space (CPS) parameterizations of both the wireless channel and the weighted minimum mean squared error (WMMSE) precoder vectors. By removing the global phase redundancies inherent in conventional representations, the proposed framework enables the DL model to learn geometry-aligned and physically distinct channel-precoder mappings. Two CPS parameterizations, based on real-valued embeddings and complex hyperspherical coordinates, are investigated and benchmarked against two baseline methods. Simulation results demonstrate substantial improvements in sum-rate performance and generalization, with a negligible increase in model complexity.
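The symmetry the abstract invokes, that the inner-product magnitude |h^H w| is unchanged by a global phase rotation of the channel, can be verified directly. A minimal sketch (the antenna count and vectors are arbitrary illustrations):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4  # number of base-station antennas (arbitrary for illustration)

# Random complex channel vector h and precoding vector w.
h = rng.standard_normal(n) + 1j * rng.standard_normal(n)
w = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# The quantity that drives the achievable rate: |h^H w|.
# np.vdot conjugates its first argument, giving the Hermitian inner product.
base = np.abs(np.vdot(h, w))

# A global phase rotation e^{j*theta} of h leaves the magnitude unchanged,
# so (h, w) and (e^{j*theta} h, w) are physically equivalent pairs.
theta = 1.234
rotated = np.abs(np.vdot(np.exp(1j * theta) * h, w))

print(np.isclose(base, rotated))  # True
```

A model trained on raw real/imaginary features must learn this equivalence from data; the CPS parameterizations remove it from the representation up front.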