IBCB: Efficient Inverse Batched Contextual Bandit for Behavioral Evolution History

📅 2024-03-24
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
In streaming decision-making scenarios, expert behavior evolves over time, from novice to expert, which renders conventional imitation learning ineffective because it relies on a stationary interaction history. Method: This paper proposes the Inverse Batched Contextual Bandit (IBCB) framework, which, for the first time, models behavioral evolution trajectories as an inverse problem under inaccessible rewards. IBCB jointly estimates the environment's reward parameters and the learned policy via quadratic programming, supports both deterministic and randomized policies, and relaxes the requirement that demonstrations come only from experienced experts. Contribution/Results: Evaluated on synthetic and real-world datasets, IBCB significantly outperforms state-of-the-art imitation learning methods in policy quality, achieves substantial gains in computational efficiency, and exhibits strong out-of-distribution generalization, enabling effective learning of high-quality policies from mixed-proficiency interaction histories.

📝 Abstract
Traditional imitation learning focuses on modeling the behavioral mechanisms of experts, which requires a large amount of interaction history generated by a fixed expert. However, in many streaming applications, such as streaming recommender systems, online decision-makers typically engage in online learning during the decision-making process, meaning that the interaction history generated by online decision-makers includes their behavioral evolution from a novice expert to an experienced expert. This poses a new challenge for existing imitation learning approaches, which can only utilize data from experienced experts. To address this issue, this paper proposes an inverse batched contextual bandit (IBCB) framework that can efficiently estimate the environment's reward parameters and the learned policy based on the expert's behavioral evolution history. Specifically, IBCB formulates the inverse problem as a simple quadratic programming problem by utilizing the behavioral evolution history of the batched contextual bandit with inaccessible rewards. We demonstrate that IBCB is a unified framework for both deterministic and randomized bandit policies. The experimental results indicate that IBCB outperforms several existing imitation learning algorithms on synthetic and real-world data and significantly reduces running time. Additionally, empirical analyses reveal that IBCB exhibits better out-of-distribution generalization and is highly effective in learning the bandit policy from the interaction history of novice experts.
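To make the setting concrete, the following is a minimal sketch (not the paper's code) of how a batched contextual bandit produces a behavioral evolution history: the policy is refreshed only between batches, so early batches reflect novice behavior and later ones expert behavior, while the logged history keeps contexts and chosen actions but not rewards. The linear-reward, epsilon-greedy setup and all names here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_actions, n_batches, batch_size = 5, 4, 20, 50
theta_true = rng.normal(size=d)              # latent reward parameter (never part of the logged history)
action_feats = rng.normal(size=(n_actions, d))

theta_hat = np.zeros(d)                      # the decision-maker's running estimate
A, b = np.eye(d), np.zeros(d)                # ridge-regression statistics
history = []                                 # evolution history: (batch index, context, action) only

for t in range(n_batches):
    batch = []
    for _ in range(batch_size):
        context = rng.normal(size=d)
        feats = action_feats * context       # toy per-action feature map
        if rng.random() < 0.1:               # epsilon-greedy exploration
            a = int(rng.integers(n_actions))
        else:
            a = int(np.argmax(feats @ theta_hat))
        reward = feats[a] @ theta_true + rng.normal(scale=0.1)  # seen by the decision-maker...
        batch.append((feats[a], reward))
        history.append((t, context, a))      # ...but only (context, action) is logged
    # batched contextual bandit: the policy is updated only between batches,
    # so early batches encode novice behavior and later batches expert behavior
    for x, r in batch:
        A += np.outer(x, x)
        b += r * x
    theta_hat = np.linalg.solve(A, b)

print(f"logged {len(history)} interactions without rewards")
```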
Problem

Research questions and friction points this paper is trying to address.

Modeling behavioral evolution from novice to expert
Handling streaming data with inaccessible rewards
Improving imitation learning efficiency and generalization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Inverse batched contextual bandit framework
Quadratic programming for reward estimation (see the sketch after this list)
Handles behavioral evolution history efficiently
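The quadratic-programming idea listed above can be illustrated with a generic max-margin-style sketch: find a reward parameter under which each logged action looks at least as good as the alternatives available in its round, with slack variables absorbing early, novice-stage mistakes. This is a hedged illustration of the general approach, not the paper's exact IBCB program; cvxpy, the data layout, and all names are assumptions.

```python
import cvxpy as cp
import numpy as np

def estimate_reward_params(histories, d, reg=1.0):
    """Max-margin-style QP sketch (illustrative, not the exact IBCB formulation).

    histories: list of (chosen_feat, other_feats) pairs, where chosen_feat is the
    d-dim feature of the logged action and other_feats is an (m, d) array of
    features of the actions that were not chosen in that round.
    """
    theta = cp.Variable(d)
    slacks = cp.Variable(len(histories), nonneg=True)  # absorb novice-stage violations
    constraints = []
    for i, (chosen, others) in enumerate(histories):
        # the chosen action should look at least as good as every alternative, up to slack
        constraints.append(others @ theta <= chosen @ theta + slacks[i])
    objective = cp.Minimize(cp.sum(slacks) + reg * cp.sum_squares(theta))
    cp.Problem(objective, constraints).solve()
    return theta.value

# toy usage with random features (hypothetical data)
rng = np.random.default_rng(1)
d = 5
histories = [(rng.normal(size=d), rng.normal(size=(3, d))) for _ in range(40)]
theta_hat = estimate_reward_params(histories, d)
print(theta_hat)
```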
Yi Xu
Gaoling School of Artificial Intelligence, Renmin University of China, Beijing, China
Weiran Shen
Renmin University of China
Game Theory, Auction, Mechanism Design, Multi-agent System, Machine Learning
Xiao Zhang
Gaoling School of Artificial Intelligence, Renmin University of China, Beijing, China
Jun Xu
Gaoling School of Artificial Intelligence, Renmin University of China, Beijing, China