Fine-tuning ORBGRAND with Very Few Channel Soft Values

📅 2025-07-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the performance degradation that ORBGRAND suffers on finite-length codes due to inaccurate error-pattern ordering, this paper proposes a low-complexity soft-information refinement: starting from the predetermined error-pattern sequence generated by the ordered reliability bits (ORB), it incorporates only a minimal number of exact channel log-likelihood ratios (LLRs) to fine-tune the testing order. The key contribution is a novel metric for the "well-orderedness" of error patterns, grounded in the asymptotic theory of integer partitions, which enables efficient evaluation of the ranking quality of critical error patterns and identification of where fine-tuning pays off. The metric requires no iterative computation or parameter training, substantially reducing reliance on soft information. Experimental results demonstrate that the proposed method approaches maximum-likelihood (ML) decoding performance with negligible computational overhead, with gains that are particularly pronounced at short-to-medium code lengths, thereby narrowing the gap between practical GRAND variants and the ML bound.
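The summary's reference to asymptotic integer-partition theory can be made concrete: in ORBGRAND, the error patterns of a given logistic weight w correspond to partitions of w into distinct parts (the reliability ranks of the flipped bits), so the count of such partitions governs how many patterns share a weight. The following sketch is illustrative only (function names are not from the paper); it compares the exact count of distinct-part partitions with the classical Erdős-type asymptotic exp(π√(m/3)) / (4·3^(1/4)·m^(3/4)):

```python
import math

def q_distinct(m: int) -> int:
    """Count partitions of m into distinct positive parts (exact, via DP)."""
    # dp[s] = number of ways to write s using the distinct parts considered so far
    dp = [0] * (m + 1)
    dp[0] = 1
    for part in range(1, m + 1):
        # iterate downward so each part is used at most once
        for s in range(m, part - 1, -1):
            dp[s] += dp[s - part]
    return dp[m]

def q_asymptotic(m: int) -> float:
    """Erdos-type asymptotic for partitions into distinct parts."""
    return math.exp(math.pi * math.sqrt(m / 3)) / (4 * 3 ** 0.25 * m ** 0.75)

for m in (20, 50, 100):
    exact, approx = q_distinct(m), q_asymptotic(m)
    print(m, exact, round(approx), round(approx / exact, 3))
```

Even at moderate m the relative error of the asymptotic is small, which is what makes partition-theoretic estimates usable for analyzing the test schedule.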

📝 Abstract
Guessing random additive noise decoding (GRAND) is a universal decoding paradigm that decodes by repeatedly testing error patterns until identifying a codeword, where the ordering of tests is generated from the received channel values. On one hand, testing error patterns in descending order of posterior probability yields maximum likelihood decoding, but its implementation complexity is prohibitive. On the other hand, testing a prescribed set of error patterns permuted by the ranking among magnitudes of log-likelihood ratios (i.e., ordered reliability bits, ORB) enables efficient implementation, but incurs a performance loss for finite-length codes. Aiming at harnessing the strengths of these two approaches, this work proposes a fine-tuning method that improves ORBGRAND by adjusting the ordering of tests with the aid of very few exact channel soft values. The method is based on a metric for assessing the "well-orderedness" of error patterns. The metric is studied through the lens of the asymptotic theory of integer partitioning, which provides highly accurate estimates in numerical experiments, and it leads to an effective identification of which fine-tuning to conduct, at the cost of a negligible increment in complexity. Numerical experiments demonstrate that the proposed fine-tuning method achieves a substantial performance enhancement compared with ORBGRAND.
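As a concrete illustration of the testing procedure the abstract describes, here is a minimal Python sketch of plain ORBGRAND: bits are ranked by |LLR|, and error patterns are tested in increasing logistic weight (the sum of the reliability ranks of the flipped bits), each weight enumerated via integer partitions into distinct parts. The (7,4) Hamming parity-check matrix and all names are illustrative assumptions, not from the paper, and the order among equal-weight patterns is left arbitrary here, whereas hardware implementations use a dedicated pattern generator:

```python
import numpy as np

def distinct_partitions(w, max_part):
    """Yield partitions of w into distinct parts, each part <= max_part.
    Parts are interpreted as reliability ranks (1 = least reliable bit)."""
    def rec(remaining, largest):
        if remaining == 0:
            yield []
            return
        for part in range(min(remaining, largest), 0, -1):
            for rest in rec(remaining - part, part - 1):
                yield [part] + rest
    yield from rec(w, max_part)

def orbgrand_decode(llr, H, max_weight=None):
    """ORBGRAND sketch: flip bits of the hard decision according to error
    patterns of increasing logistic weight; return the first codeword found."""
    llr = np.asarray(llr, dtype=float)
    n = len(llr)
    hard = (llr < 0).astype(int)              # hard-decision word
    order = np.argsort(np.abs(llr))           # order[0] = least reliable bit
    if max_weight is None:
        max_weight = n * (n + 1) // 2         # exhausts all 2^n patterns
    for w in range(max_weight + 1):
        for ranks in distinct_partitions(w, n):
            cand = hard.copy()
            for r in ranks:                   # flip the bit with rank r
                cand[order[r - 1]] ^= 1
            if not (H @ cand % 2).any():      # zero syndrome -> codeword
                return cand
    return None                               # abandoned: no codeword found

# Toy example: (7,4) Hamming code; all-zero codeword sent, one noisy bit.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
llr = [2.1, -0.3, 1.7, 2.5, 0.9, 1.2, 3.0]   # bit 1 unreliable and flipped
print(orbgrand_decode(llr, H))                # recovers the all-zero codeword
```

The paper's fine-tuning would intervene precisely where this schedule misorders patterns: a few exact LLR values are used to swap nearby patterns whose rank-based weights disagree with their true posterior ordering.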
Problem

Research questions and friction points this paper is trying to address.

Improving ORBGRAND decoding with minimal channel soft values
Reducing performance loss in finite-length codes via fine-tuning
Enhancing error pattern ordering with low-complexity adjustments
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fine-tuning ORBGRAND with few soft values
Metric for assessing error pattern order
Asymptotic theory guides fine-tuning efficiency
Li Wan
Amazon AWS
Huarui Yin
Department of Electronic Engineering and Information Science, University of Science and Technology of China
Wenyi Zhang
Department of Electronic Engineering and Information Science, University of Science and Technology of China