Adaptive teachers for amortized samplers

📅 2024-10-02
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
📄 PDF
🤖 AI Summary
To address the low sample efficiency and inadequate multimodal coverage in approximate inference for hard-to-sample unnormalized density distributions, this paper reformulates sampling as a sequential decision-making process and introduces a novel adaptive teacher-guided framework: dynamically identifying high-loss regions where the student sampler underperforms and actively constructing a progressive training curriculum. The method integrates reinforcement learning–based normalizing flows, off-policy training, auxiliary behavioral modeling, and amortized inference. Evaluated across synthetic exploration environments, two diffusion-based sampling tasks, and four biochemical discovery benchmarks, it achieves substantial improvements—averaging +37% in sample efficiency and +52% in mode coverage—while notably enhancing discovery of low-probability, high-reward modes.

Technology Category

Application Category

📝 Abstract
Amortized inference is the task of training a parametric model, such as a neural network, to approximate a distribution with a given unnormalized density where exact sampling is intractable. When sampling is implemented as a sequential decision-making process, reinforcement learning (RL) methods, such as generative flow networks, can be used to train the sampling policy. Off-policy RL training facilitates the discovery of diverse, high-reward candidates, but existing methods still face challenges in efficient exploration. We propose to use an adaptive training distribution (the Teacher) to guide the training of the primary amortized sampler (the Student) by prioritizing high-loss regions. The Teacher, an auxiliary behavior model, is trained to sample high-error regions of the Student and can generalize across unexplored modes, thereby enhancing mode coverage by providing an efficient training curriculum. We validate the effectiveness of this approach in a synthetic environment designed to present an exploration challenge, two diffusion-based sampling tasks, and four biochemical discovery tasks demonstrating its ability to improve sample efficiency and mode coverage.
Problem

Research questions and friction points this paper is trying to address.

Training parametric models for intractable distribution approximation
Improving exploration efficiency in reinforcement learning methods
Enhancing mode coverage through adaptive training distribution
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive teacher guides student sampler training
Teacher samples high-loss regions for curriculum
Enhances mode coverage and sample efficiency
🔎 Similar Papers
No similar papers found.