Enhancing Preference-based Linear Bandits via Human Response Time

๐Ÿ“… 2024-09-09
๐Ÿ›๏ธ arXiv.org
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
In binary-preference learning, existing methods capture only the direction of a preference, not its strength. To recover this lost information, we incorporate human response times (RTs) into preference learning. Specifically, we bring the EZ diffusion model from psychology into the preference-based linear bandit framework, jointly modeling choice outcomes and RTs and jointly estimating the underlying utility function. We establish theoretically that for strongly preferred queries, RTs provide substantial additional information, and that in fixed-budget best-arm identification tasks our method accelerates convergence. Empirical evaluation across three real-world datasets demonstrates an average 37% reduction in utility estimation error and markedly improved best-arm identification accuracy.

๐Ÿ“ Abstract
Interactive preference learning systems infer human preferences by presenting queries as pairs of options and collecting binary choices. Although binary choices are simple and widely used, they provide limited information about preference strength. To address this, we leverage human response times, which are inversely related to preference strength, as an additional signal. We propose a computationally efficient method that combines choices and response times to estimate human utility functions, grounded in the EZ diffusion model from psychology. Theoretical and empirical analyses show that for queries with strong preferences, response times complement choices by providing extra information about preference strength, leading to significantly improved utility estimation. We incorporate this estimator into preference-based linear bandits for fixed-budget best-arm identification. Simulations on three real-world datasets demonstrate that using response times significantly accelerates preference learning compared to choice-only approaches. Additional materials, such as code, slides, and talk video, are available at https://shenlirobot.github.io/pages/NeurIPS24.html
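The core idea of combining choices and response times can be illustrated with a small self-contained simulation. This is an illustrative sketch, not the paper's exact algorithm: the utility vector `theta`, the query set, and the moment estimator below are assumptions. In a diffusion model of binary choice, the drift rate equals the utility difference theta . (x - x'); by Wald's identity for the stopped process, E[a*C] = v*E[T], where C in {+1, -1} is the signed choice, T the decision time, and a the boundary. The drift (and hence the utility difference) can therefore be estimated jointly from mean choice and mean RT, and theta recovered by least squares across queries:

```python
import math
import random

def simulate_ddm(v, a=1.0, dt=0.005, sigma=1.0, rng=random):
    """One drift-diffusion trial: drift v, noise sigma, absorbing boundaries +/-a.
    Euler discretization (slight boundary-overshoot bias). Returns (signed_choice, time)."""
    x, t = 0.0, 0.0
    while abs(x) < a:
        x += v * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        t += dt
    return (1 if x > 0 else -1), t

def estimate_drift(choices, times, a=1.0):
    """Moment estimator from Wald's identity E[a*C] = v*E[T]: v ~ a*mean(C)/mean(T)."""
    return a * sum(choices) / sum(times)

rng = random.Random(0)
theta = [0.7, -0.3]  # hypothetical 2-D utility vector (assumption for illustration)
queries = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, -0.5]]  # feature differences x - x'

# Per-query drift estimates from simulated choices and response times.
v_hats = []
for z in queries:
    v = sum(ti * zi for ti, zi in zip(theta, z))  # true drift = theta . z
    trials = [simulate_ddm(v, rng=rng) for _ in range(600)]
    v_hats.append(estimate_drift([c for c, _ in trials], [t for _, t in trials]))

# Least squares: regress drift estimates onto feature differences (2x2 normal equations).
A = [[sum(z[i] * z[j] for z in queries) for j in range(2)] for i in range(2)]
b = [sum(z[i] * vh for z, vh in zip(queries, v_hats)) for i in range(2)]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
theta_hat = [(A[1][1] * b[0] - A[0][1] * b[1]) / det,
             (A[0][0] * b[1] - A[1][0] * b[0]) / det]
print([round(t, 2) for t in theta_hat])  # should land near [0.7, -0.3]
```

Note how the estimator uses both signals at once: mean choice alone saturates for strong preferences (P(choice) approaches 1), while mean RT keeps shrinking as drift grows, which is exactly the regime where the abstract says RTs add information.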
Problem

Research questions and friction points this paper is trying to address.

User Preference Intensity
Choice Test Accuracy
Binary Selection
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reaction Time
Linear Bandit Model
Preference Prediction
Shen Li
Massachusetts Institute of Technology
Yuyang Zhang
Graduate Student, Harvard University
Reinforcement Learning, Control Theory
Zhaolin Ren
Graduate Student, Harvard University
Control and Optimization, Reinforcement Learning
Claire Liang
Massachusetts Institute of Technology
Na Li
Harvard University
Julie A. Shah
Massachusetts Institute of Technology