Mixed Likelihood Variational Gaussian Processes

📅 2025-03-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
Gaussian processes (GPs) offer well-calibrated uncertainty estimates in human-in-the-loop experiments but often neglect auxiliary human inputs—such as domain priors, Likert-scale confidence ratings, or physical constraints—leading to biased uncertainty modeling. To address this, we propose a hybrid-likelihood variational Gaussian process framework that jointly models task responses and heterogeneous auxiliary signals (categorical, regression, and constraint-based likelihoods) within a unified evidence lower bound (ELBO) objective, enabling end-to-end variational inference. Our method integrates human feedback modeling, active learning, and preference learning, thereby enhancing generalization and convergence speed. Evaluated on three real-world human studies—VR-based visual error detection, haptic roughness perception, and robotic gait preference learning—the approach significantly improves model fidelity and active sampling efficiency. Results demonstrate the effectiveness and broad applicability of cross-modal auxiliary information fusion for uncertainty-aware human-centered learning.

📝 Abstract
Gaussian processes (GPs) are powerful models for human-in-the-loop experiments due to their flexibility and well-calibrated uncertainty. However, GPs modeling human responses typically ignore auxiliary information, including a priori domain expertise and non-task performance information like user confidence ratings. We propose mixed likelihood variational GPs to leverage auxiliary information, which combine multiple likelihoods in a single evidence lower bound to model multiple types of data. We demonstrate the benefits of mixing likelihoods in three real-world experiments with human participants. First, we use mixed likelihood training to impose prior knowledge constraints in GP classifiers, which accelerates active learning in a visual perception task where users are asked to identify geometric errors resulting from camera position errors in virtual reality. Second, we show that leveraging Likert scale confidence ratings by mixed likelihood training improves model fitting for haptic perception of surface roughness. Lastly, we show that Likert scale confidence ratings improve human preference learning in robot gait optimization. The modeling performance improvements found using our framework across this diverse set of applications illustrates the benefits of incorporating auxiliary information into active learning and preference learning by using mixed likelihoods to jointly model multiple inputs.
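The abstract's central idea, combining multiple likelihoods in a single evidence lower bound, can be sketched numerically: expected log-likelihoods from heterogeneous observation models (here a Bernoulli likelihood for binary task responses and a Gaussian likelihood standing in for confidence-style ratings) are summed under one shared variational posterior, and a single KL term is subtracted. The sketch below is a hypothetical illustration, not the authors' implementation: it assumes a factorized posterior with a whitened N(0, I) prior, and the `mixed_elbo` function and toy data are invented for demonstration. A practical system would use a sparse variational GP with a kernel-induced prior.

```python
import numpy as np

rng = np.random.default_rng(0)

def mixed_elbo(m, log_s, y_bin, idx_bin, y_conf, idx_conf, n_samples=256):
    """Monte Carlo ELBO mixing a Bernoulli likelihood (task responses)
    and a Gaussian likelihood (confidence-style ratings) under one
    factorized variational posterior q(f) = N(m, diag(s^2)) with a
    whitened N(0, I) prior.  Hypothetical sketch, not the paper's code."""
    s = np.exp(log_s)
    # Reparameterized samples of the latent function at all points: (S, N).
    eps = rng.standard_normal((n_samples, m.size))
    f = m + s * eps

    # Bernoulli term: log sigmoid(f) when y=1, log(1 - sigmoid(f)) when y=0.
    fb = f[:, idx_bin]
    log_bern = y_bin * -np.log1p(np.exp(-fb)) + (1 - y_bin) * -np.log1p(np.exp(fb))

    # Gaussian term for the ratings (unit noise variance for simplicity).
    fc = f[:, idx_conf]
    log_gauss = -0.5 * ((y_conf - fc) ** 2 + np.log(2 * np.pi))

    # Both expected log-likelihoods enter the SAME bound additively.
    exp_ll = log_bern.sum(axis=1).mean() + log_gauss.sum(axis=1).mean()

    # Analytic KL( N(m, diag(s^2)) || N(0, I) ), subtracted once.
    kl = 0.5 * np.sum(s**2 + m**2 - 1.0 - 2.0 * log_s)
    return exp_ll - kl

# Toy setup: 6 latent points; binary responses observed at points 0-3,
# ratings observed at points 2-5 (the two observation sets may overlap).
m = np.zeros(6)
log_s = np.zeros(6)
y_bin = np.array([1, 0, 1, 1])
y_conf = np.array([0.5, -0.2, 1.0, 0.3])
elbo = mixed_elbo(m, log_s, y_bin, np.arange(4), y_conf, np.arange(2, 6))
print(f"mixed-likelihood ELBO estimate: {elbo:.2f}")
```

Maximizing this single objective over the variational parameters (`m`, `log_s`) trains one latent function against both data types jointly, which is what lets the confidence ratings reshape the posterior used for the task predictions.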
Problem

Research questions and friction points this paper is trying to address.

GP models of human responses typically ignore auxiliary information such as a priori domain expertise and user confidence ratings.
Active learning in visual perception tasks is slow when prior knowledge constraints cannot be imposed on GP classifiers.
Likert scale confidence ratings are left unused in model fitting and human preference learning.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mixed likelihood variational Gaussian process model
Combine multiple likelihoods for diverse data types
Enhance active learning with auxiliary information