AI Summary
This study investigates whether quantum language models genuinely leverage quantum entanglement for memory and reasoning or merely simulate classical computation. Through causal gate ablation, entanglement quantification, density matrix interventions, and comparisons between classical and quantum architectures, the work presents the first mechanistic interpretability analysis of quantum models on long-range dependency tasks. The findings reveal that single-qubit models are functionally equivalent to classical strategies, whereas two-qubit models effectively encode contextual information via entanglement (p < 0.0001, Cohen's d = 0.89). However, on real quantum hardware, this advantage degrades to random performance due to noise. This work establishes entanglement as a critical carrier of contextual memory in quantum models and highlights the pivotal role of noise in undermining the robustness of quantum strategies.
Abstract
Quantum language models have shown competitive performance on sequential tasks, yet whether trained quantum circuits exploit genuinely quantum resources -- or merely embed classical computation in quantum hardware -- remains unknown. Prior work has evaluated these models through endpoint metrics alone, without examining the memory strategies they actually learn internally. We introduce the first mechanistic interpretability study of quantum language models, combining causal gate ablation, entanglement tracking, and density-matrix interchange interventions on a controlled long-range dependency task. We find that single-qubit models are exactly classically simulable and converge to the same geometric strategy as matched classical baselines, while two-qubit models with entangling gates learn a representationally distinct strategy that encodes context in inter-qubit entanglement -- confirmed by three independent causal tests (p < 0.0001, d = 0.89). On real quantum hardware, only the classical geometric strategy survives device noise; the entanglement strategy degrades to chance. These findings establish mechanistic interpretability as a tool for the science of quantum language models and reveal a noise-expressivity tradeoff governing which learned strategies survive deployment.
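The entanglement tracking mentioned above relies on quantifying how much contextual information is shared between qubits. For a two-qubit pure state, a standard measure (not necessarily the exact metric used in this work) is the von Neumann entropy of one qubit's reduced density matrix: zero for a product state, one bit for a maximally entangled Bell state. A minimal NumPy sketch, with illustrative function names of our own choosing:

```python
import numpy as np

def entanglement_entropy(state):
    """Von Neumann entropy (in bits) of qubit 0's reduced density
    matrix, for a two-qubit pure state given as a length-4 vector."""
    psi = np.asarray(state, dtype=complex).reshape(2, 2)  # rows: qubit 0, cols: qubit 1
    rho_a = psi @ psi.conj().T          # partial trace over qubit 1
    evals = np.linalg.eigvalsh(rho_a)
    evals = evals[evals > 1e-12]        # drop numerical zeros before taking logs
    return float(-np.sum(evals * np.log2(evals)))

# Product state |00>: qubits carry no shared information -> entropy 0
product = np.kron([1, 0], [1, 0])

# Bell state (|00> + |11>)/sqrt(2): maximally entangled -> entropy 1 bit
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
```

Tracking this quantity across a circuit's layers, and checking whether it collapses under gate ablation, is one concrete way to test whether entanglement is actually carrying contextual memory rather than being an idle byproduct of training.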