Batched Training for QLSTM vs. QFWP: A System-Oriented Approach to EPC-Aware RMSE-DA

📅 2025-12-25
🤖 AI Summary
This study systematically compares quantum long short-term memory (QLSTM) and quantum fast weight programmer (QFWP) models for daily EUR/USD exchange-rate forecasting under equal parameter count (EPC) and adjoint-based differentiation. Method: We introduce the first EPC-aligned, numerically reproducible quantum time-series modeling benchmark, incorporating batched tensor parallelism and nonparametric statistical testing (Wilcoxon signed-rank test and Cliff's delta). Contribution/Results: QFWP consistently outperforms QLSTM across all batch sizes in RMSE and directional accuracy (p ≤ 0.004), while QLSTM achieves peak throughput at batch size 64. Forward computation accelerates by 2.2–2.4×; end-to-end training acceleration reaches up to 2×. We uncover asymmetric scalability between forward and backward passes in quantum RNNs and characterize the speed–accuracy Pareto frontier. This work establishes a reproducible benchmark and provides principled model-selection guidance for one-dimensional quantum time-series modeling.

📝 Abstract
We compare two quantum sequence models, QLSTM and QFWP, under an equal parameter count (EPC) and adjoint-differentiation setup on daily EUR/USD forecasting as a controlled one-dimensional time-series case study. Across 10 random seeds and batch sizes from 4 to 64, we measure component-wise runtimes (training forward pass, backward pass, full training, and inference) as well as accuracy (RMSE and directional accuracy). Batched forward computation scales well, by about 2.2–2.4×, but the backward pass scales modestly (QLSTM about 1.01–1.05×, QFWP about 1.18–1.22×), which caps end-to-end training speedups near 2×. QFWP achieves lower RMSE and higher directional accuracy at all batch sizes, supported by a Wilcoxon signed-rank test with p ≤ 0.004 and a large Cliff's delta, while QLSTM reaches the highest throughput at batch size 64, revealing a clear speed–accuracy Pareto frontier. We provide an EPC-aligned, numerically checked benchmarking pipeline and practical guidance on batch-size choices; broader datasets and hardware and noise settings are left for future work.
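The two accuracy metrics named in the abstract, RMSE and directional accuracy, can be sketched in plain Python. This is a minimal illustration of the standard definitions; the function names and the convention of measuring direction against the previous observed value are assumptions, not taken from the paper's codebase:

```python
import math

def rmse(y_true, y_pred):
    """Root mean squared error over paired forecasts."""
    n = len(y_true)
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n)

def directional_accuracy(y_true, y_pred):
    """Fraction of steps where the forecast moves in the same
    direction (up vs. down) as the realized series, relative to
    the previous observed value."""
    hits = 0
    for i in range(1, len(y_true)):
        true_move = y_true[i] - y_true[i - 1]
        pred_move = y_pred[i] - y_true[i - 1]
        if true_move * pred_move > 0:
            hits += 1
    return hits / (len(y_true) - 1)
```

For exchange-rate series, directional accuracy is often the more decision-relevant metric, since a small-RMSE forecast that gets the direction wrong is of little trading value.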
Problem

Research questions and friction points this paper is trying to address.

Comparing QLSTM and QFWP quantum sequence models for time series forecasting
Evaluating runtime and accuracy trade-offs under equal parameter count constraints
Providing batch size selection guidance for quantum model training efficiency
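The batch-size guidance in the last point amounts to comparing training throughput (samples processed per second) across candidate batch sizes. A minimal sketch of that selection rule, with hypothetical timing numbers that are not the paper's measurements:

```python
def throughput(batch_size, seconds_per_step):
    """Samples processed per second for one training step."""
    return batch_size / seconds_per_step

def best_batch_size(timings):
    """Pick the batch size with the highest training throughput.
    `timings` maps batch size -> measured seconds per training step."""
    return max(timings, key=lambda b: throughput(b, timings[b]))

# Illustrative only: a step at batch 64 takes 4x as long as at batch 4,
# so throughput still improves 4x with the larger batch.
timings = {4: 1.0, 16: 2.0, 64: 4.0}
```

Because the backward pass scales far worse than the forward pass in these quantum models, per-step time grows nearly linearly with batch size, which is why the measured end-to-end speedup saturates near 2× even when forward throughput keeps improving.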
Innovation

Methods, ideas, or system contributions that make the work stand out.

EPC-aligned benchmarking pipeline for quantum models
Batched training scaling analysis for QLSTM and QFWP
Speed-accuracy Pareto frontier evaluation in quantum forecasting
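The nonparametric comparison behind the paper's significance claims (a Wilcoxon signed-rank test plus Cliff's delta over paired per-seed scores) can be sketched in pure Python. This version uses the normal approximation for the Wilcoxon p-value and omits tie corrections, so it is an illustration of the technique, not the paper's exact procedure (which presumably uses a standard statistics library):

```python
import math

def cliffs_delta(xs, ys):
    """Cliff's delta effect size: P(x > y) - P(x < y) over all pairs.
    Ranges from -1 to 1; |delta| >= 0.474 is conventionally 'large'."""
    gt = sum(1 for x in xs for y in ys if x > y)
    lt = sum(1 for x in xs for y in ys if x < y)
    return (gt - lt) / (len(xs) * len(ys))

def wilcoxon_signed_rank(a, b):
    """Two-sided Wilcoxon signed-rank test via the normal
    approximation (reasonable for n >= 10 nonzero paired diffs)."""
    diffs = [x - y for x, y in zip(a, b) if x != y]
    n = len(diffs)
    order = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:  # assign average ranks to ties on |diff|
        j = i
        while j + 1 < n and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    w_plus = sum(r for r, d in zip(ranks, diffs) if d > 0)
    mu = n * (n + 1) / 4
    sigma = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    z = (w_plus - mu) / sigma
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return w_plus, p
```

Pairing per-seed RMSE values (same seed, same data split for both models) is what makes the signed-rank test appropriate here; with only 10 seeds, reporting an effect size alongside the p-value guards against over-reading significance.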
Jun-Hao Chen
National Taiwan University
Fintech · Explainable AI · Deep Learning
Ming-Kai Hung
Department of Technology Application and Human Resource Development, National Taiwan Normal University, Taipei, Taiwan
Yun-Cheng Tsai
Department of Technology Application and Human Resource Development, National Taiwan Normal University, Taipei, Taiwan
Samuel Yen-Chi Chen
Wells Fargo
Quantum Computation · Quantum Information · Machine Learning · Quantum Machine Learning