xInv: Explainable Optimization of Inverse Problems

📅 2025-05-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
Inverse problem optimization lacks interpretability for domain experts, hindering trustworthy deployment in critical fields such as healthcare and climate science. To address this, we propose a novel interpretability framework that instruments differentiable simulators to capture forward- and backward-propagation events along the optimization trajectory, then encodes these events as structured natural language descriptions. A large language model (LLM) subsequently maps these descriptions to domain-specific abstractions, generating human-understandable explanations of optimization behavior. This is the first method to directly translate iterative optimization trajectories into domain-level natural language explanations. Experiments on neural network training and canonical inverse problems demonstrate significant improvements in explanation accuracy and expert acceptance, effectively bridging the interpretability gap between low-level optimizer dynamics and high-level domain semantics.

📝 Abstract
Inverse problems are central to a wide range of fields, including healthcare, climate science, and agriculture. They involve estimating inputs to a known forward model, typically via iterative optimization, so that the model produces a desired outcome. Despite considerable development in the explainability and interpretability of forward models, the iterative optimization of inverse problems remains largely cryptic to domain experts. We propose a methodology to produce explanations, from traces produced by an optimizer, that are interpretable by humans at the abstraction level of the domain. The central idea in our approach is to instrument a differentiable simulator so that it emits natural language events during its forward and backward passes. In a post-process, we use a language model to create an explanation from the list of events. We demonstrate the effectiveness of our approach with an illustrative optimization problem and an example involving the training of a neural network.
Problem

Research questions and friction points this paper is trying to address.

Explain iterative optimization in inverse problems
Make optimizer traces interpretable for domain experts
Generate human-readable explanations using language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Differentiable simulator emits natural language events
Language Model creates explanations from event traces
Explainable optimization for inverse problems via instrumentation
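The instrumentation idea above can be sketched as follows. This is a minimal illustration, not the authors' implementation: a toy differentiable forward model is wrapped so that each forward and backward pass appends a natural-language event to a trace, and the accumulated trace is then formatted into a prompt that a language model could turn into a domain-level explanation. The forward model, the event phrasing, and the prompt format are all assumptions for illustration.

```python
# Sketch (assumed, not the paper's code): instrument a toy differentiable
# "simulator" so its forward and backward passes emit natural-language
# events, then assemble the event trace into an LLM prompt.

def make_instrumented_model():
    events = []  # trace of natural-language events

    def forward(x, target):
        y = x ** 2                      # toy forward model: y = x^2
        loss = (y - target) ** 2        # squared error to the target
        events.append(f"forward: input {x:.3f} produced output {y:.3f}, "
                      f"loss {loss:.3f}")
        return loss

    def backward(x, target):
        # d(loss)/dx = 2*(x^2 - target) * 2x, by the chain rule
        grad = 2 * (x ** 2 - target) * 2 * x
        events.append(f"backward: gradient at input {x:.3f} is {grad:.3f}")
        return grad

    return events, forward, backward

def optimize(target=4.0, x=3.0, lr=0.01, steps=50):
    events, forward, backward = make_instrumented_model()
    for _ in range(steps):              # plain gradient descent
        forward(x, target)
        x -= lr * backward(x, target)
    return x, events

x, events = optimize()
prompt = ("Explain this optimization run to a domain expert:\n"
          + "\n".join(events))
# In a post-process, `prompt` would be sent to a language model,
# which maps the low-level events to a domain-level explanation.
```

For this toy problem the optimizer converges toward x = 2 (since 2² = 4), and the trace contains one forward and one backward event per step, giving the language model a step-by-step account of the trajectory to summarize.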
Sean Memery
School of Informatics, University of Edinburgh, Scotland
Kevin Denamganai
School of Informatics, University of Edinburgh, Scotland
Anna Kapron-King
School of Informatics, University of Edinburgh, Scotland
Kartic Subr
University of Edinburgh
sampling, robotics, computer graphics