🤖 AI Summary
This work addresses the challenge of label inference degradation in non-stationary environments, where weak supervision sources (e.g., crowd annotations, heuristic rules) exhibit time-varying accuracy due to concept drift. We propose an adaptive sliding-window mechanism that requires no prior assumptions about the drift pattern. Our method estimates the real-time accuracy of each weak source online and dynamically selects the window size by explicitly minimizing a bias–variance trade-off, balancing responsiveness to drift against statistical stability. Unlike conventional approaches that rely on fixed windows or explicit drift detection, ours is the first to integrate non-stationary statistical inference directly into the weak supervision learning framework. Experiments on synthetic and real-world crowdsourced datasets demonstrate strong robustness across diverse drift patterns, including abrupt, gradual, and periodic drift, and yield significant improvements in final label inference accuracy.
📝 Abstract
We introduce an adaptive method with formal quality guarantees for weak supervision in a non-stationary setting. Our goal is to infer the unknown labels of a sequence of data points by using weak supervision sources that provide independent noisy signals of the correct classification for each data point. This setting includes crowdsourcing and programmatic weak supervision. We focus on the non-stationary case, where the accuracy of the weak supervision sources can drift over time, e.g., because of changes in the underlying data distribution. Due to the drift, older data could provide misleading information when inferring the label of the current data point. Previous work relied on a priori assumptions about the magnitude of the drift to decide how much past data to use. In contrast, our algorithm requires no assumptions on the drift and adapts to the input by dynamically varying its window size. In particular, at each step, our algorithm estimates the current accuracies of the weak supervision sources by identifying a window of past observations that guarantees a near-optimal minimization of the trade-off between the error due to the variance of the estimation and the error due to the drift. Experiments on synthetic and real-world labelers show that our approach adapts to the drift.
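To make the variance-vs-drift trade-off concrete, here is a minimal illustrative sketch, not the paper's actual algorithm: a Lepski-style rule that grows the window backwards from the present and stops as soon as a larger window's empirical accuracy is statistically inconsistent with the smaller windows (i.e., drift dominates the variance reduction). The function name, the Hoeffding confidence radius, and the doubling schedule are all assumptions made for this example.

```python
import numpy as np


def adaptive_window_accuracy(agreements, delta=0.05):
    """Estimate a weak source's current accuracy with an adaptive window.

    `agreements` is a sequence of 0/1 outcomes, newest last, where 1 means
    the source agreed with a reference signal at that step. We double the
    window backwards from the present and keep the largest window whose
    empirical mean is within the combined Hoeffding confidence radii of the
    best smaller window; a violation is treated as evidence of drift.
    This is an illustrative sketch, not the paper's algorithm.
    """
    x = np.asarray(agreements, dtype=float)[::-1]  # newest observation first
    n = len(x)
    # Hoeffding radius: estimation error shrinks as the window grows.
    radius = lambda w: np.sqrt(np.log(2.0 / delta) / (2.0 * w))
    best_w, best_mean = 1, x[0]
    w = 1
    while 2 * w <= n:
        w *= 2
        mean_w = x[:w].mean()
        # Consistency check: is the larger window's estimate explainable by
        # pure estimation noise relative to the current best window?
        if abs(mean_w - best_mean) <= radius(w) + radius(best_w):
            best_w, best_mean = w, mean_w  # no detectable drift: keep growing
        else:
            break  # drift detected: older data would bias the estimate
    return best_w, best_mean
```

On a stationary stream the rule keeps extending the window (low variance); after an abrupt accuracy change it stops at the drift boundary, trading a higher-variance estimate for lower bias.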