🤖 AI Summary
Floating-point programs suffer from accumulated rounding errors, which makes invariant verification challenging. To address this, we propose a compact framework for floating-point invariant generation. Methodologically, we integrate FPTaylor’s first-order differential error analysis with constraint solving, enabling support for conditional branches. We design two polynomial invariant generation algorithms: one handles general floating-point operations but requires an initial invariant; the other is initialization-free but restricted to polynomial programs. By combining symbolic execution, faithful modeling of floating-point arithmetic, and polynomial synthesis, our approach precisely captures program behavior under error perturbations. Experimental evaluation on multiple benchmarks demonstrates significant improvements over state-of-the-art methods in both the efficiency and the precision of invariant generation, particularly in the tightness of error bounds and in computational scalability.
📝 Abstract
In numeric-intensive computations, it is well known that the execution of floating-point programs is imprecise, as floating-point arithmetic operations (e.g., addition, subtraction, multiplication, and division) incur rounding errors. Although the rounding error of any single floating-point operation is small, the aggregation of such errors over many operations may be dramatic and cause catastrophic program failures. Therefore, to ensure the correctness of floating-point programs, the effect of floating-point error needs to be carefully taken into account. In this work, we consider invariant generation for floating-point programs, whose aim is to generate tight invariants under the perturbation of floating-point errors. Our main contribution is a theoretical framework for applying constraint-solving methods to the invariant generation problem. Within this framework, we propose a novel combination of the first-order differential characterization by FPTaylor (TOPLAS 2018) with constraint-solving methods, aiming to reduce the computational burden of constraint solving. Moreover, we devise two polynomial invariant generation algorithms that instantiate the framework. The first algorithm is applicable to a wide range of floating-point operations but requires an initial (coarse) invariant as external input, while the second does not require an initial invariant but is limited to polynomial programs. Furthermore, we show how conditional branches, a difficult issue in floating-point analysis, can be handled in our framework. Experimental results show that our algorithms outperform state-of-the-art approaches in both time efficiency and the precision of the generated invariants over a variety of benchmarks.
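The accumulation effect that motivates this work is easy to observe directly. The following minimal Python sketch (an illustration, not taken from the paper) sums the double-precision value nearest to 0.1 one million times; every individual addition is correctly rounded, yet the aggregate drifts measurably from the exact result:

```python
# Each IEEE-754 double addition is correctly rounded (error at most 1/2 ulp),
# but the per-operation errors accumulate over many iterations.
total = 0.0
for _ in range(1_000_000):
    total += 0.1  # 0.1 is not exactly representable in binary floating point

exact = 100000.0
print(total)               # close to, but not exactly, 100000.0
print(abs(total - exact))  # the accumulated rounding error
```

An invariant for this loop must therefore bound `total` not by the exact real-arithmetic value but by an interval wide enough to absorb the accumulated perturbation, which is precisely the kind of error-aware invariant the paper aims to generate tightly.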