🤖 AI Summary
To address hardware constraints (limited qubits and shallow circuit depth), training instability, and the high computational overhead of classical attention, this paper proposes CLAQS, a compact, fully quantum token mixer designed for long-sequence text classification. Methodologically, CLAQS jointly learns complex-valued weights and nonlinear transformations within a unified quantum circuit; employs L1 amplitude normalization to stabilize training; adopts a two-stage parameterized architecture that decouples a shared embedding from local feed-forward modules; and combines sliding-window processing with document-level aggregation for efficient long-range modeling. Evaluated on SST-2 and IMDB, CLAQS achieves 91.64% and 87.08% accuracy, respectively, using only eight data qubits and shallow circuits, outperforming classical Transformer baselines and strong quantum-classical hybrid models. These results demonstrate the feasibility and advantages of fully quantum natural language processing.
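The L1 amplitude normalization mentioned above can be illustrated with a minimal NumPy sketch; this is an assumption-based stand-in for the paper's circuit-level mechanism, not its actual implementation, and the function name is ours.

```python
import numpy as np

def l1_normalize_amplitudes(amps: np.ndarray) -> np.ndarray:
    """Rescale complex amplitudes so their absolute values sum to 1.

    Illustrative sketch only: the paper applies L1 normalization inside
    a quantum circuit to regulate amplitude scaling; here we mimic the
    effect on a simulated state vector.
    """
    norm = np.sum(np.abs(amps))
    if norm == 0:
        raise ValueError("zero state vector cannot be normalized")
    return amps / norm

# Example: 8 data qubits -> 2**8 = 256 complex amplitudes
rng = np.random.default_rng(0)
state = rng.normal(size=256) + 1j * rng.normal(size=256)
normalized = l1_normalize_amplitudes(state)
assert np.isclose(np.sum(np.abs(normalized)), 1.0)
```

Bounding the total absolute amplitude this way keeps the scale of measured features fixed across training steps, which is the stability property the summary attributes to the method.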
📝 Abstract
Quantum compute is scaling fast, from cloud QPUs to high-throughput GPU simulators, making it timely to prototype quantum NLP beyond toy tasks. However, devices remain qubit- and depth-limited, training can be unstable, and classical attention is compute- and memory-heavy. This motivates compact, phase-aware quantum token mixers that stabilize amplitudes and scale to long sequences. We present CLAQS, a compact, fully quantum token mixer for text classification that jointly learns complex-valued mixing and nonlinear transformations within a unified quantum circuit. To enable stable end-to-end optimization, we apply L1 normalization to regulate amplitude scaling and introduce a two-stage parameterized quantum architecture that decouples shared token embeddings from a window-level quantum feed-forward module. Operating under a sliding-window regime with document-level aggregation, CLAQS requires only eight data qubits and shallow circuits, yet achieves 91.64% accuracy on SST-2 and 87.08% on IMDB, outperforming both classical Transformer baselines and strong hybrid quantum-classical counterparts.
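The sliding-window regime with document-level aggregation can be sketched as follows; the window size, stride, and mean-pooling choice are assumptions for illustration (the abstract does not state them), and `window_model` is a toy stand-in for the quantum token mixer.

```python
import numpy as np

def sliding_windows(tokens, window, stride):
    """Split a token sequence into overlapping fixed-size windows.

    Window size and stride are hypothetical; the paper's settings are
    not given in the abstract.
    """
    if len(tokens) <= window:
        return [tokens]
    return [tokens[i:i + window]
            for i in range(0, len(tokens) - window + 1, stride)]

def document_logits(tokens, window, stride, window_model):
    """Apply a per-window model (a stand-in for the quantum mixer)
    and mean-pool window outputs into document-level logits."""
    outs = [window_model(w) for w in sliding_windows(tokens, window, stride)]
    return np.mean(outs, axis=0)

# Toy stand-in: map each window to 2-class "logits".
toy_model = lambda w: np.array([float(sum(w)), float(len(w))])
doc = document_logits(list(range(10)), window=4, stride=2, window_model=toy_model)
print(doc)
```

Processing each window independently keeps the per-circuit qubit and depth budget constant regardless of document length, which is how a small fixed circuit can cover long IMDB reviews.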