🤖 AI Summary
Supervised learning for inverse problems faces prohibitive data costs when high-dimensional priors or stringent accuracy requirements necessitate large training datasets.
Method: We propose an instance-wise adaptive sampling framework that centers sampling on the target instance, dynamically focusing on the local geometric structure of the inverse mapping in its neighborhood. By integrating supervised learning with iterative optimization, the method uses both the prior distribution and the current prediction to guide the generation of new informative samples, supporting conditional, dynamic dataset updates.
Results: Evaluated on two structured-prior inverse scattering problems, our approach significantly reduces sample complexity compared to fixed-dataset training. It achieves substantial improvements in both reconstruction accuracy and learning efficiency, particularly under high-dimensional, complex priors and in high-precision regimes.
📝 Abstract
We propose an instance-wise adaptive sampling framework for constructing compact and informative training datasets for supervised learning of inverse problem solutions. Typical learning-based approaches aim to learn a general-purpose inverse map from datasets drawn from a prior distribution, with the training process independent of the specific test instance. When the prior has a high intrinsic dimension or when high accuracy of the learned solution is required, a large number of training samples may be needed, resulting in substantial data collection costs. In contrast, our method dynamically allocates sampling effort based on the specific test instance, enabling significant gains in sample efficiency. By iteratively refining the training dataset conditioned on the latest prediction, the proposed strategy tailors the dataset to the geometry of the inverse map around each test instance. We demonstrate the effectiveness of our approach in the inverse scattering problem under two types of structured priors. Our results show that the advantage of the adaptive method becomes more pronounced in settings with more complex priors or higher accuracy requirements. While our experiments focus on a particular inverse problem, the adaptive sampling strategy is broadly applicable and readily extends to other inverse problems, offering a scalable and practical alternative to conventional fixed-dataset training regimes.
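The iterative loop described above (predict, resample near the prediction, retrain, repeat) can be sketched on a toy problem. Everything in this sketch is an illustrative assumption, not the paper's method: the cubic `forward` map stands in for the scattering solver, a local linear least-squares fit stands in for the supervised learner, and a Gaussian resampling schedule with a shrinking radius stands in for the paper's instance-conditioned dataset update.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(x):
    # Toy monotone 1-D forward operator standing in for the scattering solver.
    return x**3 + x

def fit_inverse(xs, ys, y_star, k=24):
    # Local surrogate: linear least-squares fit of x as a function of y,
    # using only the k samples whose outputs are closest to y_star.
    # (A crude stand-in for the neural-network learner in the paper.)
    idx = np.argsort(np.abs(ys - y_star))[:k]
    a, b = np.polyfit(ys[idx], xs[idx], 1)
    return lambda y: a * y + b

def adaptive_solve(y_star, rounds=6, n_new=16, radius=2.0, shrink=0.5):
    xs = rng.uniform(-2.0, 2.0, 32)      # initial dataset drawn from the prior
    ys = forward(xs)
    for _ in range(rounds):
        inv = fit_inverse(xs, ys, y_star)
        x_hat = inv(y_star)              # current prediction for this instance
        # New samples concentrated around the prediction; the neighborhood
        # shrinks each round, zooming in on the local inverse geometry.
        x_new = x_hat + radius * rng.standard_normal(n_new)
        xs = np.concatenate([xs, x_new])
        ys = np.concatenate([ys, forward(x_new)])
        radius *= shrink
    # Refit once more on the full, instance-tailored dataset.
    return fit_inverse(xs, ys, y_star)(y_star)

x_true = 1.3
x_rec = adaptive_solve(forward(x_true))
```

Because each round adds samples conditioned on the current prediction, the dataset concentrates where the inverse map must be accurate for this particular measurement, rather than covering the whole prior uniformly.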