🤖 AI Summary
Floating-point accumulation networks (FPANs) are critical building blocks in compensated summation and double-double arithmetic; however, their error analysis is notoriously difficult, existing approaches often lack rigorous formal verification, and at least one published error bound has since been refuted.
Method: We propose the first fully automated, machine-verifiable framework for FPAN correctness verification. It introduces a novel floating-point abstraction that models the sign, exponent, and number of leading and trailing zeros and ones in the mantissa of each value flowing through an FPAN; it integrates SMT solving with formal proof generation to automatically establish error bounds tight to the nearest bit; and it contributes a new FPAN for double-double addition.
Contribution/Results: The framework produces machine-checked proofs of correctness for several classic FPANs, with error bounds that are provably tight at the bit level, and the new double-double addition FPAN is both faster and more accurate than the previous best known algorithm.
📝 Abstract
Floating-point accumulation networks (FPANs) are key building blocks used in many floating-point algorithms, including compensated summation and double-double arithmetic. FPANs are notoriously difficult to analyze, and algorithms using FPANs are often published without rigorous correctness proofs. In fact, on at least one occasion, a published error bound for a widely used FPAN was later found to be incorrect. In this paper, we present an automatic procedure that produces computer-verified proofs of several FPAN correctness properties, including error bounds that are tight to the nearest bit. Our approach is underpinned by a novel floating-point abstraction that models the sign, exponent, and number of leading and trailing zeros and ones in the mantissa of each number flowing through an FPAN. We also present a new FPAN for double-double addition that is faster and more accurate than the previous best known algorithm.
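For readers unfamiliar with FPANs: the elementary node in such a network is an error-free transformation such as the classic TwoSum, which returns the rounded sum of two floats together with the exact rounding error, so that summing the two outputs recovers the exact mathematical sum. A minimal sketch in Python (whose `float` is IEEE 754 binary64 with round-to-nearest; the name `two_sum` is illustrative and not taken from the paper):

```python
def two_sum(a: float, b: float) -> tuple[float, float]:
    """TwoSum error-free transformation: return (s, t) with
    s = fl(a + b) and a + b = s + t exactly (round-to-nearest)."""
    s = a + b              # rounded sum (the "high" word)
    a_prime = s - b        # recover the portion of s contributed by a
    b_prime = s - a_prime  # and the portion contributed by b
    t = (a - a_prime) + (b - b_prime)  # exact rounding error (the "low" word)
    return s, t

# A contribution too small to fit in the high word survives in the low word:
s, t = two_sum(1.0, 2.0**-60)  # s == 1.0, t == 2.0**-60
```

Compensated summation chains such nodes so that the low words are fed back into later additions; proving tight bounds on how error propagates through a whole network of these nodes is the analysis problem the paper automates.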