🤖 AI Summary
This study systematically compares quantum long short-term memory (QLSTM) and quantum fast weight programmer (QFWP) models for daily EUR/USD exchange rate forecasting under an equal parameter count (EPC) constraint and adjoint-based differentiation.
Method: We introduce the first EPC-aligned, numerically reproducible quantum time-series modeling benchmark, incorporating batched tensor parallelism and nonparametric statistical testing (Wilcoxon signed-rank test and Cliff’s delta).
Contribution/Results: QFWP consistently outperforms QLSTM across all batch sizes in RMSE and directional accuracy (p ≤ 0.004). QLSTM achieves peak throughput at batch size 64. Batched forward computation accelerates by 2.2–2.4×, while end-to-end training speedups are capped near 2× by the modestly scaling backward pass. We uncover this asymmetric scalability between forward and backward passes in quantum RNNs and characterize the speed–accuracy Pareto frontier. This work establishes a reproducible benchmark and provides principled model-selection guidance for one-dimensional quantum time-series modeling.
📝 Abstract
We compare two quantum sequence models, QLSTM and QFWP, under an equal parameter count (EPC) and adjoint-differentiation setup on daily EUR/USD forecasting as a controlled one-dimensional time-series case study. Across 10 random seeds and batch sizes from 4 to 64, we measure component-wise runtimes (training forward pass, backward pass, full training step, and inference) as well as accuracy (RMSE and directional accuracy). The batched forward pass scales well, at roughly 2.2–2.4×, but the backward pass scales modestly (about 1.01–1.05× for QLSTM and 1.18–1.22× for QFWP), which caps end-to-end training speedups near 2×. QFWP achieves lower RMSE and higher directional accuracy at all batch sizes, supported by a Wilcoxon signed-rank test (p ≤ 0.004) and a large Cliff's delta, while QLSTM reaches the highest throughput at batch size 64, revealing a clear speed–accuracy Pareto frontier. We provide an EPC-aligned, numerically checked benchmarking pipeline and practical guidance on batch-size choices; broader datasets and hardware and noise settings are left for future work.
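The accuracy claims rest on paired nonparametric testing across the 10 seeds: a Wilcoxon signed-rank test for significance and Cliff's delta for effect size. A minimal sketch of how such a per-seed comparison can be run is below; the RMSE arrays are hypothetical placeholders for illustration, not the study's actual measurements.

```python
import numpy as np
from scipy.stats import wilcoxon  # paired, nonparametric significance test

def cliffs_delta(x, y):
    """Cliff's delta effect size: P(x > y) - P(x < y) over all pairs."""
    x, y = np.asarray(x), np.asarray(y)
    diffs = x[:, None] - y[None, :]  # all pairwise differences via broadcasting
    return (np.sum(diffs > 0) - np.sum(diffs < 0)) / diffs.size

# Hypothetical per-seed RMSE values for the 10 seeds -- illustrative
# placeholders only, NOT the paper's reported numbers.
rmse_qlstm = np.array([0.0061, 0.0063, 0.0060, 0.0064, 0.0062,
                       0.0065, 0.0066, 0.0059, 0.0067, 0.0058])
rmse_qfwp  = np.array([0.0052, 0.0053, 0.0052, 0.0053, 0.0055,
                       0.0053, 0.0060, 0.0046, 0.0062, 0.0044])

stat, p = wilcoxon(rmse_qlstm, rmse_qfwp)   # two-sided test, paired by seed
delta = cliffs_delta(rmse_qlstm, rmse_qfwp)
print(f"Wilcoxon p = {p:.4f}, Cliff's delta = {delta:.2f}")
```

Because the samples are paired by random seed, the Wilcoxon test operates on per-seed differences, and a delta near 1 indicates that one model's RMSE almost always exceeds the other's across seed pairs.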