🤖 AI Summary
Multilevel sampling methods (e.g., multilevel Monte Carlo, multifidelity Monte Carlo) spend unnecessary computational effort on coarse levels, where full numerical accuracy is not required.
Method: This work introduces a modeling framework that treats numerical inexactness as an *optimizable resource dimension*, adaptively coupling the discretisation level, solver accuracy, and overall sampling error. The approach combines low-precision sparse direct solvers, iterative refinement, MINRES, and multilevel stochastic sampling theory to jointly model error propagation and computational cost.
Contribution/Results: Evaluated on an elliptic PDE with lognormal random coefficients, the method reduces memory references by up to 3.5× (simulated, low-precision direct solver with iterative refinement) and floating-point operations by up to 1.5× (MINRES), with corresponding potential for energy savings. It establishes a foundation for energy-aware scientific computing, advancing the paradigm toward computation-energy co-design.
📄 Abstract
Multilevel sampling methods, such as multilevel and multifidelity Monte Carlo, multilevel stochastic collocation, or delayed acceptance Markov chain Monte Carlo, have become standard uncertainty quantification tools for a wide class of forward and inverse problems. The underlying idea is to achieve faster convergence by leveraging a hierarchy of models, such as partial differential equation (PDE) or stochastic differential equation (SDE) discretisations with increasing accuracy. By optimally redistributing work among the levels, multilevel methods can achieve significant performance improvement compared to single-level methods working with one high-fidelity model. Intuitively, approximate solutions on coarser levels can tolerate large computational error without affecting the overall accuracy. We show how this can be used in high-performance computing applications to obtain a significant performance gain. As a use case, we analyse the computational error in the standard multilevel Monte Carlo method and formulate an adaptive algorithm which determines a minimum required computational accuracy on each level of discretisation. We show two examples of how the inexactness can be converted into actual gains using an elliptic PDE with lognormal random coefficients. Using a low-precision sparse direct solver combined with iterative refinement results in a simulated gain in memory references of up to $3.5\times$ compared to the reference double precision solver; while using a MINRES iterative solver, a practical speedup of up to $1.5\times$ in terms of FLOPs is achieved. These results provide a step in the direction of energy-aware scientific computing, with significant potential for energy savings.
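To make the idea concrete, the following is a minimal sketch (not the paper's implementation) of the multilevel Monte Carlo telescoping estimator with a per-level solver tolerance. The `solve` function is a hypothetical stand-in for a PDE solve: its discretisation error decays geometrically with the level, and `tol` models the extra error introduced by an inexact (e.g. low-precision or iterative) solver. The point illustrated is structural: each correction term uses the *same* random sample on two adjacent levels, so coarse levels can be solved with a looser tolerance without polluting the fine-level corrections.

```python
import random

def solve(omega, level, tol):
    """Toy 'PDE solve' (hypothetical model, not the paper's solver):
    the exact quantity of interest is omega; discretisation error decays
    as 2**-level, and tol is a worst-case bound on the additional error
    from an inexact solver."""
    disc_error = 2.0 ** (-level)
    comp_error = tol
    return omega + disc_error + comp_error

def mlmc_estimate(L, samples_per_level, tol_per_level, seed=0):
    """Standard MLMC telescoping sum
        E[P_L] = E[P_0] + sum_{l=1}^{L} E[P_l - P_{l-1}],
    where level l is solved with its own tolerance tol_per_level[l]."""
    rng = random.Random(seed)
    estimate = 0.0
    for l in range(L + 1):
        n, tol = samples_per_level[l], tol_per_level[l]
        acc = 0.0
        for _ in range(n):
            omega = rng.gauss(0.0, 1.0)          # random coefficient sample
            fine = solve(omega, l, tol)          # level-l solve
            coarse = solve(omega, l - 1, tol) if l > 0 else 0.0
            acc += fine - coarse                 # same sample on both levels
        estimate += acc / n
    return estimate

# Looser (cheaper) tolerances on coarse levels, tighter on fine ones:
L = 3
tols = [2.0 ** (-(l + 3)) for l in range(L + 1)]
est = mlmc_estimate(L, samples_per_level=[400, 200, 100, 50],
                    tol_per_level=tols)
```

In an adaptive variant along the lines of the abstract, `tol_per_level` would be chosen just tight enough that the computational error on each level stays below that level's contribution to the overall mean-square-error budget, rather than fixed a priori as here.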