Computing Floating-Point Errors by Injecting Perturbations

📅 2025-07-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
Floating-point errors pose significant risks in safety-critical systems, yet only a subset of inputs triggers substantial inaccuracies, so efficient and precise detection methods are essential. Existing approaches face critical limitations: high-precision reference implementations are complex to build and prohibitively slow; ATOMU incurs false positives; and FPCC, while guaranteeing zero false positives, is inefficient. This paper proposes PI-detector, the first method to model input sensitivity through the condition numbers of atomic floating-point operations. By injecting small perturbations into the operands of atomic operations (e.g., addition and subtraction) and comparing the perturbed program's output with the original, PI-detector quantifies floating-point errors without resorting to expensive high-precision arithmetic. It achieves accuracy comparable to high-precision references while running significantly faster than FPCC and avoiding false positives entirely. The experimental evaluation covers the ATOMU and HSED benchmarks as well as a linear system solver.

📝 Abstract
Floating-point programs form the foundation of modern science and engineering, providing the essential computational framework for a wide range of applications, such as safety-critical systems, aerospace engineering, and financial analysis. Floating-point errors can lead to severe consequences. Although floating-point errors widely exist, only a subset of inputs may trigger significant errors in floating-point programs. Therefore, it is crucial to determine whether a given input could produce such errors. Researchers tend to take the results of high-precision floating-point programs as oracles for detecting floating-point errors, which introduces two main limitations: (1) difficulty of implementation and (2) prolonged execution time. Two recent tools, ATOMU and FPCC, can partially address these issues. However, ATOMU suffers from false positives, while FPCC, though eliminating false positives, operates at a considerably slower speed. To address these two challenges, we propose a novel approach named PI-detector for computing floating-point errors effectively and efficiently. Our approach is based on the observation that floating-point errors stem from large condition numbers in atomic operations (such as addition and subtraction), which then propagate and accumulate. PI-detector injects small perturbations into the operands of individual atomic operations within the program and compares the outcomes of the original program with the perturbed version to compute floating-point errors. We evaluate PI-detector with datasets from ATOMU and HSED, as well as a complex linear system-solving program. Experimental results demonstrate that PI-detector can perform efficient and accurate floating-point error computation.
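The abstract's key observation, that large errors originate from atomic operations with large condition numbers, can be illustrated with catastrophic cancellation in subtraction. The sketch below is our own illustration of that background fact, not code from the paper; the helper name `cond_sub` is ours.

```python
import math

def cond_sub(a, b):
    # Relative condition number of the subtraction a - b:
    # it blows up when a and b nearly cancel.
    return (abs(a) + abs(b)) / abs(a - b)

# Well-conditioned input: no cancellation.
print(cond_sub(3.0, 1.0))          # → 2.0

# Ill-conditioned input: near-total cancellation.
print(cond_sub(1.0 + 1e-12, 1.0))  # → ~2e12

# The effect on a real computation: 1 - cos(x) for tiny x
# cancels to zero in double precision (100% relative error),
# while the algebraically equal form 2*sin(x/2)**2 stays accurate.
x = 1e-8
print(1.0 - math.cos(x))           # → 0.0 (true value ≈ 5e-17)
print(2.0 * math.sin(x / 2) ** 2)  # → ≈ 5e-17
```

This is exactly the "only a subset of inputs" phenomenon: the same subtraction is harmless for most operand pairs and catastrophic for nearly equal ones.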
Problem

Research questions and friction points this paper is trying to address.

Detecting significant floating-point errors in programs
Overcoming limitations of high-precision oracles in error detection
Improving efficiency and accuracy in floating-point error computation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Injecting perturbations into atomic operations
Comparing original and perturbed program outcomes
Efficient and accurate error computation
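The contributions above can be sketched in miniature: inject ulp-scale relative perturbations into the operands of each atomic operation, then compare the perturbed run against the original. This is our own minimal Python illustration under stated assumptions (uniform random perturbations of about one rounding error, a toy target program, and function names of our choosing); it is not the authors' implementation.

```python
import math
import operator
import random

def injected(op, a, b, delta=2**-53):
    # Apply op after perturbing each operand by a random relative
    # amount on the order of one double-precision rounding error.
    pa = a * (1.0 + random.uniform(-delta, delta))
    pb = b * (1.0 + random.uniform(-delta, delta))
    return op(pa, pb)

def program(x, sub, mul, div):
    # Toy program under test: the naive formula (1 - cos(x)) / x**2,
    # written over injectable atomic operations.
    return div(sub(1.0, math.cos(x)), mul(x, x))

def estimate_error(x, trials=20):
    # Compare the unperturbed run against perturbed runs and
    # report the worst relative deviation observed.
    base = program(x, operator.sub, operator.mul, operator.truediv)
    worst = 0.0
    for _ in range(trials):
        perturbed = program(
            x,
            lambda a, b: injected(operator.sub, a, b),
            lambda a, b: injected(operator.mul, a, b),
            lambda a, b: injected(operator.truediv, a, b),
        )
        worst = max(worst, abs(perturbed - base) / abs(base))
    return worst

# A benign input is insensitive to the injected noise, while an input
# that triggers catastrophic cancellation amplifies it enormously.
print(estimate_error(1.0))    # tiny, on the order of 1e-16
print(estimate_error(1e-7))   # large, on the order of 1e-2
```

The appeal of the scheme is visible even in this sketch: both runs use ordinary double precision, so no high-precision oracle is needed, and sensitive inputs reveal themselves by amplifying the injected noise.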