🤖 AI Summary
To address the failure of conventional linear equalization in nonlinear MIMO channels with low-resolution quantization, this paper proposes the first soft-input soft-output (SISO) Turbo equalization framework based on in-context learning (ICL). The framework dynamically constructs prompts from pilot signals and decoder feedback to iteratively refine symbol posterior distributions. Its key contributions are a prompt-augmentation mechanism that injects decoder extrinsic information as additional context, and support for two deployment architectures: a Transformer for accuracy-critical scenarios and a state-space model (SSM) for resource-constrained settings. Experiments demonstrate that under low-precision quantization the proposed method significantly outperforms conventional baselines, even when those baselines are given perfect channel state information (CSI). The Transformer variant exhibits strong few-shot generalization, while the SSM variant achieves a 3.2× inference speedup. This work establishes a novel paradigm for data-efficient, context-aware equalization in quantized MIMO systems.
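The snippet below is a minimal, self-contained sketch of this prompt-augmented turbo loop. It is not the paper's implementation: the trained Transformer/SSM equalizer is replaced by a kernel smoother over the pilot (transmit, receive) pairs so the example runs end to end, the channel is scalar BPSK rather than MIMO, and all function and variable names are hypothetical.

```python
# Illustrative sketch only: the ICL equalizer is stood in by a kernel smoother
# over the prompt's pilot pairs; the SISO decoder by a repetition code.
import numpy as np

rng = np.random.default_rng(0)

def quantize(y, n_bits=2):
    """Coarse uniform quantizer emulating a low-resolution ADC."""
    levels, step = 2 ** n_bits, 2.0 / (2 ** n_bits)
    idx = np.clip(np.floor(y / step) + 0.5,
                  -(levels // 2) + 0.5, levels // 2 - 0.5)
    return idx * step

def icl_posterior_llrs(pilot_x, pilot_y, data_y, prior_llrs, bw=0.3):
    """Stand-in for the ICL equalizer: estimate p(y | x = +-1) by kernel-
    weighting the pilot examples in the prompt, then fold in the decoder's
    prior LLRs (the prompt-augmentation step)."""
    w = np.exp(-((data_y[:, None] - pilot_y[None, :]) ** 2) / (2 * bw ** 2))
    lik_plus = (w * (pilot_x[None, :] > 0)).sum(1) / max((pilot_x > 0).sum(), 1)
    lik_minus = (w * (pilot_x[None, :] < 0)).sum(1) / max((pilot_x < 0).sum(), 1)
    channel_llr = np.log((lik_plus + 1e-9) / (lik_minus + 1e-9))
    return channel_llr + prior_llrs            # posterior LLR = channel + prior

def toy_siso_decoder(extrinsic_in, rep=4):
    """Stand-in SISO decoder: a rate-1/rep repetition code whose extrinsic LLR
    for each symbol is the sum of the *other* repetitions' inputs."""
    llrs = extrinsic_in.reshape(-1, rep)
    return (llrs.sum(1, keepdims=True) - llrs).reshape(-1)

# --- one coded BPSK block through a 2-bit quantized AWGN channel -------------
rep, n_info, n_pilots, snr = 4, 64, 32, 1.0
noise = lambda n: rng.normal(0.0, np.sqrt(0.5 / snr), n)
bits = rng.integers(0, 2, n_info)
x_data = np.repeat(1 - 2 * bits, rep).astype(float)   # repetition-coded BPSK
x_pilot = 1.0 - 2.0 * rng.integers(0, 2, n_pilots)
y_pilot = quantize(x_pilot + noise(n_pilots))         # prompt demonstrations
y_data = quantize(x_data + noise(x_data.size))

prior = np.zeros_like(y_data)                         # iteration 0: no feedback
for _ in range(4):                                    # turbo iterations
    posterior = icl_posterior_llrs(x_pilot, y_pilot, y_data, prior)
    extrinsic_eq = posterior - prior                  # subtract the prior back out
    prior = toy_siso_decoder(extrinsic_eq, rep)       # decoder extrinsic -> new prior
# Note: in this scalar toy the equalizer extrinsic is constant after the first
# pass; in the paper's MIMO setting, decoder feedback on interfering streams
# changes the equalizer posterior across iterations.

bit_llrs = posterior.reshape(-1, rep).sum(1)          # combine repetitions
print("BER:", np.mean((bit_llrs < 0) != bits.astype(bool)))
```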
📝 Abstract
This paper introduces a novel in-context learning (ICL) framework, inspired by large language models (LLMs), for soft-input soft-output channel equalization in coded multiple-input multiple-output (MIMO) systems. The proposed approach learns to infer posterior symbol distributions directly from a prompt of pilot signals and decoder feedback. A key innovation is the use of prompt augmentation to incorporate extrinsic information from the decoder output as additional context, enabling the ICL model to refine its symbol estimates across turbo decoding iterations. Two model variants, based on Transformer and state-space architectures, are developed and evaluated. Extensive simulations demonstrate that, when traditional linear assumptions break down, e.g., in the presence of low-resolution quantization, ICL equalizers consistently outperform conventional model-based baselines, even when the latter are provided with perfect channel state information. Results also highlight the advantage of Transformer-based models under limited training diversity, as well as the efficiency of state-space models in resource-constrained scenarios.
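For orientation, the extrinsic-information exchange behind this iterative refinement follows the standard turbo-equalization convention; the relation below is the textbook rule, stated here for context rather than quoted from the paper:

$$
L_E^{\mathrm{eq}}(x_k) = \ln\frac{P\left(x_k = +1 \mid \mathbf{y}, \mathcal{P}\right)}{P\left(x_k = -1 \mid \mathbf{y}, \mathcal{P}\right)} - L_A(x_k),
$$

where $\mathbf{y}$ denotes the received (quantized) observations, $\mathcal{P}$ the prompt of pilot pairs augmented with decoder feedback, and $L_A(x_k)$ the a priori log-likelihood ratio supplied by the decoder; passing back only the extrinsic term prevents the decoder from receiving its own information twice.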