MAVRL: Learning Reward Functions from Multiple Feedback Types with Amortized Variational Inference

📅 2026-02-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes a unified framework for jointly learning a reward function from heterogeneous feedback—such as demonstrations, comparisons, ratings, and stop signals—without requiring manually designed loss weights or multi-stage fusion strategies. By modeling all feedback types as Bayesian observations of a shared latent reward function, each modality contributes through an explicit likelihood term, enabling end-to-end training via scalable amortized variational inference. This approach is the first to achieve semantically consistent joint reward learning across diverse feedback sources while simultaneously providing interpretable uncertainty estimates over the learned reward. Experiments on both discrete and continuous control tasks demonstrate that the resulting posterior reward distribution outperforms single-feedback baselines, effectively integrates complementary information, and significantly enhances policy robustness to environmental perturbations.

📝 Abstract
Reward learning typically relies on a single feedback type or combines multiple feedback types using manually weighted loss terms. It remains unclear how to jointly learn reward functions from heterogeneous feedback types such as demonstrations, comparisons, ratings, and stops, which provide qualitatively different signals. We address this challenge by formulating reward learning from multiple feedback types as Bayesian inference over a shared latent reward function, where each feedback type contributes information through an explicit likelihood. We introduce a scalable amortized variational inference approach that learns a shared reward encoder and feedback-specific likelihood decoders and is trained by optimizing a single evidence lower bound. Our approach avoids reducing feedback to a common intermediate representation and eliminates the need for manual loss balancing. Across discrete and continuous-control benchmarks, we show that jointly inferred reward posteriors outperform single-type baselines, exploit complementary information across feedback types, and yield policies that are more robust to environment perturbations. The inferred reward uncertainty further provides interpretable signals for analyzing model confidence and consistency across feedback types.
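The core formulation in the abstract—each feedback type as an explicit likelihood over a shared latent reward, combined in a single evidence lower bound—can be sketched in a toy form. This is an illustrative NumPy sketch under simplifying assumptions (a linear reward model, a fixed mean-field Gaussian posterior instead of a learned amortized encoder, and only two feedback types), not the paper's implementation; all function names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 3  # dimension of the latent reward weights (illustrative choice)

def reward(w, phi):
    # Assumed linear reward model r(s) = w . phi(s); w may be a batch of samples.
    return w @ phi

def comparison_loglik(w, phi_a, phi_b):
    # Bradley-Terry-style likelihood for "segment A preferred over segment B".
    diff = reward(w, phi_a) - reward(w, phi_b)
    return -np.log1p(np.exp(-diff))  # log sigmoid(diff)

def rating_loglik(w, phi, rating, noise=1.0):
    # Gaussian likelihood for a scalar rating of a segment.
    return -0.5 * ((reward(w, phi) - rating) / noise) ** 2

def kl_to_standard_normal(mu, log_sigma):
    # Closed-form KL( N(mu, diag(sigma^2)) || N(0, I) ).
    sigma2 = np.exp(2 * log_sigma)
    return 0.5 * np.sum(sigma2 + mu**2 - 1.0 - 2 * log_sigma)

def elbo(mu, log_sigma, comparisons, ratings, n_samples=256):
    # Single evidence lower bound: expected sum of all feedback
    # log-likelihoods under q(w), minus KL to the prior.
    eps = rng.standard_normal((n_samples, D))
    w = mu + np.exp(log_sigma) * eps  # reparameterized samples from q(w)
    ll = 0.0
    for phi_a, phi_b in comparisons:
        ll += comparison_loglik(w, phi_a, phi_b).mean()
    for phi, r in ratings:
        ll += rating_loglik(w, phi, r).mean()
    return ll - kl_to_standard_normal(mu, log_sigma)

# Toy data in which feature 0 is "good" and feature 1 is "bad".
comparisons = [(np.array([1., 0., 0.]), np.array([0., 1., 0.]))]
ratings = [(np.array([1., 0., 0.]), 1.0), (np.array([0., 1., 0.]), 0.0)]

log_sigma = np.full(D, np.log(0.1))
good = elbo(np.array([1., 0., 0.]), log_sigma, comparisons, ratings)
bad = elbo(np.array([-1., 1., 0.]), log_sigma, comparisons, ratings)
assert good > bad  # the posterior consistent with both feedback types scores higher
```

In the paper's actual method the posterior parameters would come from a learned shared encoder and each likelihood from a feedback-specific decoder, with the ELBO optimized by gradient ascent; the sketch only shows how heterogeneous likelihoods enter one objective without manual loss weights.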
Problem

Research questions and friction points this paper is trying to address.

reward learning
heterogeneous feedback
multiple feedback types
Bayesian inference
reward function
Innovation

Methods, ideas, or system contributions that make the work stand out.

reward learning
amortized variational inference
Bayesian inference
heterogeneous feedback
multi-type feedback