🤖 AI Summary
Existing complex-valued deep learning models formulate self-attention as real-valued correlations, neglecting phase interference effects and thereby decoupling amplitude and phase information. This work proposes the Holographic Transformer, the first self-attention architecture grounded in electromagnetic wave interference principles: it modulates key-value interactions via relative phase shifts and aggregates value vectors through coherent superposition, rigorously preserving complex-valued consistency; a dual-head complex decoder is further introduced to prevent phase collapse. By integrating physical priors with deep learning, the model achieves significant improvements on Polarimetric Synthetic Aperture Radar (PolSAR) image classification (+3.2% F1 score) and wireless channel prediction (−18.7% regression error), while demonstrating strong robustness to phase perturbations. The framework establishes a novel, interpretable, and high-fidelity paradigm for complex-signal modeling.
📄 Abstract
Complex-valued signals encode both amplitude and phase, yet most deep models treat attention as real-valued correlation, overlooking interference effects. We introduce the Holographic Transformer, a physics-inspired architecture that incorporates wave interference principles into self-attention. Holographic attention modulates interactions by relative phase and coherently superimposes values, ensuring consistency between amplitude and phase. A dual-headed decoder simultaneously reconstructs the input and predicts task outputs, preventing phase collapse when losses prioritize magnitude over phase. We demonstrate that holographic attention implements a discrete interference operator and maintains phase consistency under linear mixing. Experiments on PolSAR image classification and wireless channel prediction show strong performance, achieving high classification accuracy and F1 scores, low regression error, and increased robustness to phase perturbations. These results highlight that enforcing physical consistency in attention leads to generalizable improvements in complex-valued learning and provides a unified, physics-based framework for coherent signal modeling. The code is available at https://github.com/EonHao/Holographic-Transformers.
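The abstract describes attention that weights interactions by relative phase and aggregates values by coherent (complex) superposition. The paper's exact formulation is not given here, so the following is only an illustrative sketch of that idea: correlation magnitudes gate the softmax while the relative phase of each query-key pair modulates the complex mixing weights, so a global phase rotation of the inputs propagates to the output rather than being discarded. The function name and all implementation details are assumptions, not the authors' code.

```python
import numpy as np

def holographic_attention(Q, K, V):
    """Illustrative complex-valued attention (not the paper's implementation).

    Q, K, V: complex arrays of shape (n_tokens, d).
    The complex inner product <q, k> = q . conj(k) carries the relative
    phase between tokens; its real part gates the softmax weights, and
    its phase modulates the mixing weights so values superpose coherently.
    """
    d = Q.shape[-1]
    scores = Q @ K.conj().T / np.sqrt(d)          # (n, n) complex correlations
    # Real part gates interaction strength (numerically stable softmax).
    logits = scores.real - scores.real.max(axis=-1, keepdims=True)
    weights = np.exp(logits)
    weights /= weights.sum(axis=-1, keepdims=True)
    # Relative-phase modulation: each weight becomes complex, encoding
    # constructive/destructive interference between token pairs.
    phase = np.exp(1j * np.angle(scores))
    # Coherent superposition: complex weights applied to complex values.
    return (weights * phase) @ V
```

One consequence of this construction is equivariance to a global phase shift: rotating Q, K, and V by a common factor e^{i\phi} leaves the pairwise relative phases unchanged and rotates the output by the same factor, which is one concrete reading of the "phase consistency under linear mixing" property claimed in the abstract.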