🤖 AI Summary
This work addresses the efficient and robust computation of gradients for numerical solutions of differential equations. We systematically survey four differentiable programming paradigms (adjoint methods, automatic differentiation via source-to-source transformation and operator overloading, numerical perturbation, and symbolic-numeric hybrid approaches) and outline a unified differentiability framework that bridges inverse problem solving and machine learning methodologies. We establish a cross-method comparative taxonomy and provide platform-specific best-practice guidelines for scientific computing libraries including SciPy, JAX, and TorchDiffeq. Our analysis characterizes the trade-offs among accuracy, memory footprint, computational complexity, and domain of applicability for each method. The results deliver both theoretical foundations and practical implementation pathways for modeling tasks that fuse differential equations with data, including parameter inversion, sensitivity analysis, and physics-informed neural networks (PINNs).
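As a concrete illustration of the numerical-perturbation paradigm named above, the sketch below (not from the paper; the ODE and function names are illustrative) approximates the sensitivity of an ODE solution to a parameter with a central finite difference over a SciPy solve:

```python
import numpy as np
from scipy.integrate import solve_ivp

def terminal_value(theta):
    """Solve du/dt = -theta * u with u(0) = 1 and return u(1)."""
    sol = solve_ivp(lambda t, u: -theta * u, (0.0, 1.0), [1.0],
                    rtol=1e-10, atol=1e-12)
    return sol.y[0, -1]

# Central finite difference du(1)/dtheta at theta = 0.5. Its accuracy is
# limited jointly by the perturbation size eps and the solver tolerances,
# one of the trade-offs the survey characterizes.
eps = 1e-6
sensitivity = (terminal_value(0.5 + eps) - terminal_value(0.5 - eps)) / (2 * eps)
# For this linear ODE the exact sensitivity is -exp(-0.5).
```

Because each perturbed evaluation re-runs the full solver, this approach costs one extra solve per parameter, which is why the survey contrasts it with adjoint and automatic-differentiation methods for high-dimensional parameter spaces.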
📝 Abstract
The differentiable programming paradigm is a cornerstone of modern scientific computing. It refers to numerical methods for computing the gradient of a numerical model's output with respect to its inputs and parameters. Many scientific models are based on differential equations, where differentiable programming plays a crucial role in calculating model sensitivities, inverting model parameters, and training hybrid models that combine differential equations with data-driven approaches. Furthermore, recognizing the strong synergies between inverse methods and machine learning offers the opportunity to establish a coherent framework applicable to both fields. Differentiating functions based on the numerical solution of differential equations is non-trivial. Numerous methods based on a wide variety of paradigms have been proposed in the literature, each with pros and cons specific to the type of problem investigated. Here, we provide a comprehensive review of existing techniques to compute derivatives of numerical solutions of differential equations. We first discuss the importance of gradients of solutions of differential equations in a variety of scientific domains. Second, we lay out the mathematical foundations of the various approaches and compare them with each other. Third, we cover the computational considerations and explore the solutions available in modern scientific software. Last but not least, we provide best practices and recommendations for practitioners. We hope that this work accelerates the fusion of scientific models and data, and fosters a modern approach to scientific modelling.
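To make the "non-trivial" point concrete: some frameworks can differentiate directly through a numerical ODE solve. A minimal sketch, assuming JAX is installed and using its experimental `odeint` (which implements reverse-mode differentiation via a continuous adjoint); the ODE chosen here is illustrative, not from the paper:

```python
import jax
import jax.numpy as jnp
from jax.experimental.ode import odeint

def solve(theta):
    """Integrate du/dt = -theta * u from u(0) = 1 and return u(1)."""
    rhs = lambda u, t, theta: -theta * u
    ts = jnp.array([0.0, 1.0])
    u = odeint(rhs, jnp.array(1.0), ts, theta)
    return u[-1]

# jax.grad differentiates *through* the numerical solver. For this linear
# ODE the exact gradient is d/dtheta exp(-theta) = -exp(-theta).
g = jax.grad(solve)(0.5)
```

The same model could equally be differentiated by finite differences, forward sensitivity equations, or a discrete adjoint; the review compares when each choice wins on accuracy, memory, and runtime.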