🤖 AI Summary
This work addresses the high computational complexity of weight distribution analysis for pre-transformed polar codes, which arises from the bit dependencies introduced by the pre-transformation matrix. To overcome this challenge, the authors propose a recursive computation method based on an extended information set and parity-consistent decomposition (PCD). By constructing an extended information set that eliminates correlations among information bits, and by leveraging equivalence-class theory to select the minimal such set, the proposed approach significantly reduces computational complexity. The developed equivalence-class framework offers a novel foundation for analyzing the weight distribution of pre-transformed polar codes. Numerical experiments demonstrate that the method achieves substantial gains in computational efficiency over existing deterministic algorithms while maintaining high accuracy.
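For concreteness, the baseline that such methods improve upon can be sketched as a brute-force enumeration: compute the weight distribution of a toy pre-transformed polar code by trying all \(2^K\) information words. This is not the paper's recursive PCD method, and the code length, information set, and pre-transformation taps below are illustrative assumptions only.

```python
import numpy as np
from itertools import product

# 2x2 polar kernel; the length-N polar transform is its n-fold Kronecker power (mod 2).
F = np.array([[1, 0], [1, 1]], dtype=np.uint8)

def polar_generator(n: int) -> np.ndarray:
    G = np.array([[1]], dtype=np.uint8)
    for _ in range(n):
        G = np.kron(G, F) % 2
    return G

def brute_force_wd(G: np.ndarray, T: np.ndarray, info_set: list[int]) -> dict[int, int]:
    """Weight distribution by enumerating all 2^K messages: x = u T G (mod 2).

    Exponential in K = |info_set| -- exactly the cost that a recursive
    method like the one summarized above is designed to avoid.
    """
    N = G.shape[0]
    wd: dict[int, int] = {}
    for bits in product((0, 1), repeat=len(info_set)):
        u = np.zeros(N, dtype=np.uint8)
        u[info_set] = bits                      # frozen positions stay 0
        w = int(((u @ T % 2) @ G % 2).sum())    # Hamming weight of the codeword
        wd[w] = wd.get(w, 0) + 1
    return dict(sorted(wd.items()))

# Toy (8, 4) polar code with a hypothetical upper-triangular pre-transformation.
info_set = [3, 5, 6, 7]                 # typical reliable positions for N = 8
T = np.eye(8, dtype=np.uint8)
T[3, 5] = T[5, 7] = 1                   # illustrative taps, not from the paper
print(brute_force_wd(polar_generator(3), T, info_set))
```

Doubling \(K\) squares the number of enumerated messages, which is why deterministic-but-efficient alternatives such as the proposed recursion matter.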
📝 Abstract
This paper introduces an efficient algorithm based on the Parity-Consistent Decomposition (PCD) method to determine the weight distribution (WD) of pre-transformed polar codes. First, to address the bit dependencies introduced by the pre-transformation matrix, we propose an iterative algorithm to construct an \emph{Expanded Information Set}. By expanding the information bits within this set into 0s and 1s, we eliminate the correlations among information bits, thereby enabling the recursive calculation of the Hamming weight distribution using the \emph{PCD method}. Second, to further reduce computational complexity, we establish a theory of equivalence classes for pre-transformed polar codes. Codes within the same equivalence class share an identical weight distribution but correspond to different \emph{Expanded Information Set} sizes. By selecting the pre-transformation matrix that minimizes the \emph{Expanded Information Set} size within an equivalence class, we optimize the computation process. Numerical results demonstrate that the proposed method significantly reduces computational complexity compared to existing deterministic algorithms.
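The equivalence-class idea — distinct pre-transformed codes sharing one weight distribution — can be illustrated empirically. The sketch below brute-forces the WD of a toy length-8 polar code under two different upper-triangular pre-transformations; the specific taps are assumptions chosen for illustration, not the paper's construction or its formal equivalence criterion.

```python
import numpy as np
from itertools import product

F = np.array([[1, 0], [1, 1]], dtype=np.uint8)
G8 = np.kron(np.kron(F, F), F) % 2      # generator of the length-8 polar transform

def wd_of(T: np.ndarray, info_set: list[int]) -> dict[int, int]:
    """Brute-force WD of {u T G8 : u_i = 0 for all i outside info_set}."""
    wd: dict[int, int] = {}
    for bits in product((0, 1), repeat=len(info_set)):
        u = np.zeros(8, dtype=np.uint8)
        u[info_set] = bits
        w = int(((u @ T % 2) @ G8 % 2).sum())
        wd[w] = wd.get(w, 0) + 1
    return dict(sorted(wd.items()))

info_set = [3, 5, 6, 7]
T1 = np.eye(8, dtype=np.uint8); T1[3, 4] = 1   # hypothetical tap into a frozen position
T2 = np.eye(8, dtype=np.uint8); T2[3, 5] = 1   # hypothetical tap into an info position

# The two pre-transformed codes differ as sets of codewords, yet their
# weight distributions coincide -- the kind of WD invariance that the
# paper's equivalence classes formalize (taps here are illustrative only).
print(wd_of(T1, info_set), wd_of(T2, info_set))
```

When two pre-transformations are known to yield the same WD, one is free to analyze whichever admits the smaller \emph{Expanded Information Set} — the optimization the abstract describes.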