🤖 AI Summary
One-bit digital-to-analog converters (DACs) in massive MIMO systems render the precoding design problem discrete and nonconvex, hindering both theoretical analysis and practical design.
Method: We propose an asymptotic analysis framework based on approximate message passing (AMP) for the "convex-relaxation-then-quantization" approach to symbol-level one-bit precoding, in which a convex relaxation of the discrete minimum mean square error (MMSE) precoding problem is solved and the solution is then quantized to the one-bit alphabet.
Contributions/Results: First, we derive a closed-form asymptotic expression for the symbol error probability (SEP) at the receiver in the large-system limit. Second, our empirical results suggest that the ℓ∞² regularizer, with an optimally chosen regularization parameter, achieves the best SEP within a broad class of convex regularization functions. Third, as a first step toward a theoretical justification, we prove the optimality of the ℓ∞² regularizer within the class of mixed ℓ∞²-ℓ₂² regularizers. Together, these results provide an analytically rigorous characterization of relaxation-based one-bit precoding in massive MIMO.
📝 Abstract
Massive multiple-input multiple-output (MIMO) systems employing one-bit digital-to-analog converters offer a hardware-efficient solution for wireless communications. However, the one-bit constraint poses significant challenges for precoding design, as it transforms the problem into a discrete and nonconvex optimization task. In this paper, we investigate a widely adopted ``convex-relaxation-then-quantization'' approach for nonlinear symbol-level one-bit precoding. Specifically, we first solve a convex relaxation of the discrete minimum mean square error precoding problem, and then quantize the solution to satisfy the one-bit constraint. To analyze the high-dimensional asymptotic performance of this scheme, we develop a novel analytical framework based on approximate message passing (AMP). This framework enables us to derive a closed-form expression for the symbol error probability (SEP) at the receiver side in the large-system limit, which provides a quantitative characterization of how model and system parameters affect the SEP performance. Our empirical results suggest that the $\ell_\infty^2$ regularizer, when paired with an optimally chosen regularization parameter, achieves optimal SEP performance within a broad class of convex regularization functions. As a first step towards a theoretical justification, we prove the optimality of the $\ell_\infty^2$ regularizer within the class of mixed $\ell_\infty^2$-$\ell_2^2$ regularization functions.
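To make the relax-then-quantize pipeline concrete, here is a minimal NumPy sketch. It uses a real-valued toy model (the paper's setting is complex-valued), BPSK symbols, and plain subgradient descent as the convex solver; the dimensions, step sizes, and regularization parameter are illustrative choices, not the paper's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy system: N transmit antennas, K single-antenna users (real-valued model).
N, K = 64, 8
H = rng.standard_normal((K, N)) / np.sqrt(N)  # i.i.d. Gaussian channel
s = rng.choice([-1.0, 1.0], size=K)           # intended user symbols (BPSK)
lam = 0.1                                     # regularization weight (assumed)

# Step 1: solve the convex relaxation
#     minimize_x  ||s - H x||^2 + lam * ||x||_inf^2
# by subgradient descent with a diminishing step size (illustrative solver).
x = np.zeros(N)
for t in range(1, 2001):
    grad = 2.0 * H.T @ (H @ x - s)       # gradient of the data-fit term
    sub = np.zeros(N)
    i = np.argmax(np.abs(x))             # ||x||_inf^2 has subgradient 2*x_i
    sub[i] = 2.0 * lam * x[i]            # supported on a peak-magnitude entry
    x -= (0.5 / np.sqrt(t)) * (grad + sub)

# Step 2: quantize entrywise to the one-bit alphabet {+-1/sqrt(N)},
# so the precoded vector satisfies the per-antenna power constraint.
x_onebit = np.where(x >= 0, 1.0, -1.0) / np.sqrt(N)
```

The $\ell_\infty^2$ penalty discourages any single antenna from dominating, so the relaxed solution tends to have entries of nearly equal magnitude and loses less fidelity when quantized to one bit.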