🤖 AI Summary
In scientific machine learning, the cost of selecting an automatic differentiation (AD) backend and of switching between backends is high, in part because each backend exposes its own interface; this fragmentation hinders both differentiation efficiency and the reuse of custom code. To address this, the authors propose DifferentiationInterface.jl, a unified AD frontend that integrates more than ten backends, including Zygote, ForwardDiff, and ReverseDiff, behind a common abstraction layer (implemented with the help of metaprogramming), so that differentiation calls become backend-agnostic. A built-in preparation mechanism amortizes one-time costs such as compilation and initialization according to each backend's characteristics, while advanced features (e.g., sparsity handling) are applied transparently, without extra effort from the user. This design substantially reduces the cost of evaluating and migrating between AD systems. Empirically, the package maintains competitive performance while significantly improving the maintainability of differentiation code, cross-backend comparability, and engineering efficiency in scientific computing workflows.
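To illustrate what "backend-agnostic differentiation calls" look like in practice, here is a minimal sketch using the package's public API (backend types such as `AutoForwardDiff` are re-exported from ADTypes.jl; exact signatures may differ slightly across package versions):

```julia
using DifferentiationInterface
import ForwardDiff, Zygote  # backends are loaded separately

f(x) = sum(abs2, x)         # f(x) = x₁² + x₂² + ..., so ∇f(x) = 2x
x = [1.0, 2.0, 3.0]

# The same `gradient` call works for any backend: only the
# backend object changes, not the differentiation code.
for backend in (AutoForwardDiff(), AutoZygote())
    g = gradient(f, backend, x)
    @show g   # expected: [2.0, 4.0, 6.0]
end
```

Because the loop body is identical for every backend, comparing or swapping AD systems reduces to changing a single argument.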
📝 Abstract
For scientific machine learning tasks with a lot of custom code, picking the right Automatic Differentiation (AD) system matters. Our Julia package DifferentiationInterface.jl provides a common frontend to a dozen AD backends, unlocking easy comparison and modular development. In particular, its built-in preparation mechanism leverages the strengths of each backend by amortizing one-time computations. This is key to enabling sophisticated features like sparsity handling without putting additional burdens on the user.
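The preparation mechanism mentioned above can be sketched as follows: a one-time `prepare_*` call performs backend-specific setup (e.g., tape recording, buffer allocation, or sparsity pattern detection), and the resulting preparation object is then reused across many differentiation calls. This is a sketch of the documented API pattern; argument order and names may vary between versions:

```julia
using DifferentiationInterface
import ForwardDiff

f(x) = sum(abs2, x)
backend = AutoForwardDiff()
x = [1.0, 2.0, 3.0]

# One-time setup, amortized over all subsequent calls with
# inputs of the same type and size.
prep = prepare_gradient(f, backend, zero(x))

# Fast path: reuse the preparation object on each new input.
g = gradient(f, prep, backend, x)   # expected: [2.0, 4.0, 6.0]
```

Because preparation is part of the common interface, features that depend on it (such as sparse Jacobian or Hessian computation) can be enabled per backend without the user writing any backend-specific code.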