Detecting Flaky Tests in Quantum Software: A Dynamic Approach

📅 2025-12-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
Flaky tests—non-deterministic in pass/fail outcomes—pose a critical reliability threat to quantum software, yet their prevalence, characteristics, and detectability lack empirical grounding. To address this gap, we conduct the first large-scale dynamic empirical study, executing 27,026 test cases across 23 versions of Qiskit Terra for 10,000 repetitions each, identifying 290 flaky tests. We propose a Wilson confidence interval–based method to quantify rerun budgets for flakiness detection, revealing extremely low occurrence probabilities (~10⁻⁴), high sporadicity, and severe distribution skew. Our work introduces the first cross-version volatility tracking and subsystem mapping analysis for quantum flaky tests, and releases the first publicly available quantum flaky test dataset. Results show that although flakiness rates are low (0–0.4%), detecting them requires tens of thousands of executions—highlighting a severe challenge to quantum testing stability.

📝 Abstract
Flaky tests, tests that pass or fail nondeterministically without changes to code or environment, pose a serious threat to software reliability. While classical software engineering has developed a rich body of dynamic and static techniques to study flakiness, corresponding evidence for quantum software remains limited. Prior work relies primarily on static analysis or small sets of manually reported incidents, leaving open questions about the prevalence, characteristics, and detectability of flaky tests. This paper presents the first large-scale dynamic characterization of flaky tests in quantum software. We executed the Qiskit Terra test suite 10,000 times across 23 releases in controlled environments. For each release, we measured test-outcome variability, identified flaky tests, estimated empirical failure probabilities, analyzed recurrence across versions, and used Wilson confidence intervals to quantify rerun budgets for reliable detection. We further mapped flaky tests to Terra subcomponents to assess component-level susceptibility. Across 27,026 test cases, we identified 290 distinct flaky tests. Although overall flakiness rates were low (0–0.4%), flakiness was highly episodic: nearly two-thirds of flaky tests appeared in only one release, while a small subset recurred intermittently or persistently. Many flaky tests failed with very small empirical probabilities ($\hat{p} \approx 10^{-4}$), implying that tens of thousands of executions may be required for confident detection. Flakiness was unevenly distributed across subcomponents, with 'transpiler' and 'quantum_info' accounting for the largest share. These results show that quantum test flakiness is rare but difficult to detect under typical continuous integration budgets. To support future research, we release a public dataset of per-test execution outcomes.
Problem

Research questions and friction points this paper is trying to address.

Detects flaky tests in quantum software using dynamic analysis
Characterizes prevalence and patterns of flakiness across Qiskit releases
Estimates execution budgets needed for reliable flaky test detection
Innovation

Methods, ideas, or system contributions that make the work stand out.

Large-scale dynamic characterization of quantum flaky tests
Repeated test executions with empirical probability analysis
Wilson confidence intervals for reliable flakiness detection
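The rerun-budget idea above can be illustrated with a short sketch. This is not the paper's released code; the function names (`wilson_interval`, `rerun_budget`) are hypothetical, and the budget formula assumes independent reruns where a test with true failure probability p must show at least one failure with the desired confidence:

```python
import math

def wilson_interval(k, n, z=1.96):
    """Wilson score interval for a binomial proportion (95% CI by default).

    k: observed failures, n: total reruns, z: normal critical value.
    """
    p_hat = k / n
    denom = 1 + z**2 / n
    center = (p_hat + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
    return max(0.0, center - half), min(1.0, center + half)

def rerun_budget(p, confidence=0.95):
    """Smallest n such that 1 - (1 - p)**n >= confidence, i.e. the number
    of independent reruns needed to observe at least one failure with the
    given confidence for a test that fails with probability p."""
    return math.ceil(math.log(1 - confidence) / math.log(1 - p))
```

For example, observing 3 failures in 10,000 reruns gives a 95% Wilson interval that excludes zero, and `rerun_budget(1e-4)` comes out near 30,000 executions, consistent with the paper's point that flakiness at $\hat{p} \approx 10^{-4}$ demands tens of thousands of runs.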
Dongchan Kim
MSci student, University of Maryland, Baltimore County
Hamidreza Khoramrokh
Toronto Metropolitan University, Toronto, ON, Canada
Lei Zhang
University of Maryland, Baltimore County, Baltimore, MD, USA
Andriy Miranskyy
Toronto Metropolitan University (formerly Ryerson University)
large-scale software systems, quantum software engineering