AI Summary
This work addresses the challenge of simultaneously achieving calibration, low regret, and multiaccuracy in online learning under arbitrarily time-varying data distributions, a setting where existing methods struggle to balance these competing objectives. The authors propose a local adaptive mechanism that integrates a multi-objective optimization framework with adaptive online learning algorithms. Without requiring explicit definitions of local targets, their approach dynamically optimizes performance over contiguous subintervals, thereby circumventing the limitations of traditional global worst-case analyses. Empirical evaluations on energy forecasting and algorithmic fairness benchmarks demonstrate that the method outperforms current state-of-the-art techniques, delivering unbiased predictions for subpopulations while maintaining robust multi-objective performance under distribution shift.
Abstract
We consider the general problem of learning a predictor that satisfies multiple objectives of interest simultaneously, a broad framework that captures a range of specific learning goals including calibration, regret, and multiaccuracy. We work in an online setting where the data distribution can change arbitrarily over time. Existing approaches to this problem aim to minimize the set of objectives over the entire time horizon in a worst-case sense, and in practice they do not necessarily adapt to distribution shifts. Earlier work has aimed to alleviate this problem by incorporating additional objectives that target local guarantees over contiguous subintervals. Empirical evaluation of these proposals is, however, scarce. In this article, we consider an alternative procedure that achieves local adaptivity by replacing one part of the multi-objective learning method with an adaptive online algorithm. Empirical evaluations on datasets from energy forecasting and algorithmic fairness show that our proposed method improves upon existing approaches and achieves unbiased predictions over subgroups, while remaining robust under distribution shift.
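The abstract does not spell out which adaptive online algorithm is substituted in, so the following is only a minimal sketch of the general idea it gestures at: making a single online learner locally adaptive by running copies of it that start at different times ("interval" or sleeping experts) and mixing them with multiplicative weights, so that after a distribution shift the weight mass can move to a freshly started copy. All names, the synthetic shifted stream, the choice of online gradient descent as the base learner, and the step sizes below are illustrative assumptions, not the authors' method.

```python
import math
import random

random.seed(0)

# Synthetic stream with an abrupt distribution shift: the target mean
# jumps from 0.2 to 0.8 halfway through the horizon (an assumption for
# illustration, not one of the paper's benchmarks).
T = 400
ys = []
for t in range(T):
    mean = 0.2 if t < T // 2 else 0.8
    ys.append(min(1.0, max(0.0, mean + random.gauss(0.0, 0.05))))

# Interval experts: copies of online gradient descent (OGD) on squared
# loss, each waking up at a different start time.  A multiplicative-
# weights meta-learner mixes the currently active experts.
starts = [0, T // 4, T // 2, 3 * T // 4]
preds = {s: 0.5 for s in starts}     # each expert's current prediction
weights = {s: 1.0 for s in starts}   # meta-learner weights
eta_expert, eta_meta = 0.1, 2.0      # step sizes (illustrative choices)

loss_adaptive = 0.0  # cumulative loss of the sleeping-experts mixture
loss_global = 0.0    # cumulative loss of one global OGD learner
pred_global = 0.5

for t, y in enumerate(ys):
    active = [s for s in starts if s <= t]
    total_w = sum(weights[s] for s in active)
    pred = sum(weights[s] * preds[s] for s in active) / total_w
    loss_adaptive += (pred - y) ** 2
    loss_global += (pred_global - y) ** 2
    for s in active:
        # exponential-weights update on this expert's instantaneous loss
        weights[s] *= math.exp(-eta_meta * (preds[s] - y) ** 2)
        # OGD step: the gradient of (p - y)^2 in p is 2 * (p - y)
        preds[s] = min(1.0, max(0.0, preds[s] - eta_expert * 2 * (preds[s] - y)))
    # The global baseline uses a decaying step size, so it tracks the
    # running mean of the stream -- exactly the worst-case-over-the-whole-
    # horizon behaviour that fails to adapt after the shift.
    pred_global -= (1.0 / (t + 1)) * (pred_global - y)
```

On this toy stream the mixture recovers quickly after the shift, because the expert started at `T // 2` has seen only post-shift data and the meta-weights concentrate on it, while the global learner keeps averaging over both regimes.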