RCCDA: Adaptive Model Updates in the Presence of Concept Drift under a Constrained Resource Budget

๐Ÿ“… 2025-05-30
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
To address concept drift adaptation in resource-constrained settings, this paper proposes a lightweight dynamic model update strategy that eliminates costly explicit drift detection. Instead, updates are triggered adaptively via online loss monitoring with a tunable threshold. The key innovation is the first application of the Lyapunov drift-plus-penalty framework to concept drift adaptation, enabling rigorous derivation of strict upper bounds on update frequency and computational overhead while enforcing hard constraints on resource consumption (CPU, memory, latency) and model performance (inference accuracy). Extensive experiments on three cross-domain datasets demonstrate that the method significantly outperforms state-of-the-art drift-detection baselines, achieving higher accuracy while strictly satisfying all hard resource constraints.
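The mechanism described above can be illustrated with a minimal sketch: accumulate the observed online loss, trigger a retraining step once it crosses a tunable threshold, and track resource "debt" with a Lyapunov-style virtual queue that drains by a per-step budget. All class and parameter names here (`DriftAwareUpdater`, `queue_weight`, etc.) are illustrative assumptions, not the paper's exact algorithm.

```python
class DriftAwareUpdater:
    """Hypothetical sketch of a threshold-triggered update policy in the
    spirit of RCCDA: no explicit drift detector; updates fire when the
    accumulated online loss exceeds a tunable threshold, gated by a
    Lyapunov-style virtual queue that tracks resource overspend."""

    def __init__(self, loss_threshold: float, update_cost: float,
                 resource_budget: float, queue_weight: float = 1.0):
        self.loss_threshold = loss_threshold    # tunable drift threshold
        self.update_cost = update_cost          # resource cost of one update
        self.resource_budget = resource_budget  # allowed average cost per step
        self.queue_weight = queue_weight        # penalty weight on the queue
        self.accumulated_loss = 0.0             # online loss since last update
        self.virtual_queue = 0.0                # resource-backlog virtual queue

    def step(self, observed_loss: float) -> bool:
        """Return True if the model should be retrained at this step."""
        self.accumulated_loss += observed_loss
        # Update only if accumulated loss exceeds the threshold AND the
        # virtual queue (resource debt) does not outweigh the drift signal.
        do_update = (self.accumulated_loss > self.loss_threshold
                     and self.virtual_queue * self.queue_weight
                         <= self.loss_threshold)
        cost = self.update_cost if do_update else 0.0
        # Queue dynamics: backlog grows with spent cost, drains by the budget.
        self.virtual_queue = max(0.0,
                                 self.virtual_queue + cost - self.resource_budget)
        if do_update:
            self.accumulated_loss = 0.0
        return do_update
```

Under this sketch, a sequence of small losses leaves the model untouched, while a burst of high loss (i.e., drift) triggers a single update whose cost is then amortized against the budget through the virtual queue.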

๐Ÿ“ Abstract
Machine learning (ML) algorithms deployed in real-world environments are often faced with the challenge of adapting models to concept drift, where the task data distributions shift over time. The problem becomes even more difficult when model performance must be maintained under strict resource constraints. Existing solutions often depend on drift-detection methods that incur high computational overhead in resource-constrained environments, and fail to provide strict guarantees on resource usage or theoretical performance assurances. To address these shortcomings, we propose RCCDA: a dynamic model update policy that optimizes ML training dynamics while ensuring strict compliance with predefined resource constraints, using only past loss information and a tunable drift threshold. In developing our policy, we analytically characterize the evolution of model loss under concept drift with arbitrary training update decisions. Integrating these results into a Lyapunov drift-plus-penalty framework yields a lightweight policy based on a measurable accumulated-loss threshold that provably limits update frequency and cost. Experimental results on three domain generalization datasets demonstrate that our policy outperforms baseline methods in inference accuracy while adhering to strict resource constraints under several schedules of concept drift, making our solution uniquely suited for real-time ML deployments.
Problem


Adapting ML models to concept drift with resource constraints
Reducing computational overhead in drift-detection methods
Ensuring strict resource usage guarantees during model updates
Innovation


Dynamic model update policy that optimizes ML training
Relies only on past loss and a tunable drift threshold
Lightweight policy provably limits update frequency and cost
๐Ÿ”Ž Similar Papers
No similar papers found.