🤖 AI Summary
This paper studies the multi-objective contextual bandit problem under distribution shift, where rewards are vector-valued and user preferences are encoded by a given preference cone. To handle dynamic environments, the authors propose a preference-cone-based vector ranking mechanism, coupled with adaptive discretization and optimistic elimination, that adapts in real time to unknown drift. They introduce a Pareto-front distance to define preference-aware regret and establish a unified theoretical framework, yielding regret upper bounds under both slow and abrupt drift assumptions. Notably, these bounds recover known optimal results in the no-drift and single-objective settings and scale gracefully with dimensionality and drift magnitude. Experiments demonstrate the effectiveness and robustness of the approach in dynamic multi-objective environments.
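As a rough illustration of the cone-based ranking, the minimal sketch below computes a Pareto front under a polyhedral preference cone C = {x : Ax >= 0}. The polyhedral representation and the matrix `A` are assumptions of this sketch, not details given by the paper, which only posits some preference cone.

```python
import numpy as np

def cone_dominates(u, v, A):
    # u dominates v under the cone order iff (u - v) lies in the
    # polyhedral cone C = {x : A @ x >= 0}.  The polyhedral form of
    # the cone is an assumption made for this sketch.
    return bool(np.all(A @ (u - v) >= 0)) and not np.allclose(u, v)

def pareto_front(means, A):
    # Keep the mean-reward vectors that no other vector cone-dominates.
    return [u for i, u in enumerate(means)
            if not any(cone_dominates(v, u, A)
                       for j, v in enumerate(means) if j != i)]

# With A = I the cone is the positive orthant, recovering the usual
# componentwise Pareto order.
A = np.eye(2)
means = [np.array([1.0, 0.2]), np.array([0.5, 0.9]), np.array([0.4, 0.1])]
print(pareto_front(means, A))  # the first two vectors survive
```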
📝 Abstract
We consider contextual bandit learning under distribution shift when reward vectors are ordered according to a given preference cone. We propose a policy based on adaptive discretization and optimistic elimination that self-tunes to the underlying distribution shift. To measure its performance, we introduce the notion of preference-based regret, defined in terms of the distance between Pareto fronts. We establish upper bounds on this regret under various assumptions on the nature of the distribution shift. Our regret bounds generalize known results for the vectorial-reward setting without distribution shift, and scale gracefully with problem parameters in the presence of distribution shift.
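To make the regret notion concrete, here is a minimal sketch of one plausible instantiation of preference-based regret: the per-round Euclidean distance from the played arm's mean reward to that round's Pareto front, summed over rounds. The names `pareto_distance` and `preference_regret` are hypothetical, and the paper's exact cone-dependent distance between Pareto fronts may differ from this simple stand-in.

```python
import numpy as np

def pareto_distance(mu, front):
    # Euclidean distance from a played arm's mean reward to a finite
    # Pareto front; a simple stand-in for the paper's Pareto-front
    # distance, whose precise cone-aware form is defined in the paper.
    return min(float(np.linalg.norm(mu - f)) for f in front)

def preference_regret(played, fronts):
    # Cumulative preference-based regret over T rounds: the sum of
    # per-round distances to that round's (possibly drifting) front.
    return sum(pareto_distance(mu, front)
               for mu, front in zip(played, fronts))
```

Under drift, each round's front may move, so the benchmark inside the sum changes over time; this is what distinguishes the dynamic regret studied here from the stationary vector-reward case.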