🤖 AI Summary
This paper addresses the reliable detection of post-deployment performance degradation (PDD) in unlabeled model-serving scenarios. We formally define the PDD monitoring task as distinguishing benign distributional shifts from genuine performance deterioration. To this end, we propose D3M—a label-free, gradient-free monitoring framework that leverages predictive disagreement across multiple models. We theoretically establish its low false-positive rate under non-degrading shifts and provide sample-complexity guarantees. By unifying theoretical analysis with empirical risk estimation, D3M achieves significant improvements over state-of-the-art baselines on standard benchmarks and a large-scale real-world internal medicine dataset. Our method delivers a verifiable, automated alerting mechanism for performance degradation in high-stakes machine learning systems.
📝 Abstract
The distribution of data changes over time; models operating in dynamic environments need retraining. But knowing when to retrain, without access to labels, is an open challenge, since some, but not all, shifts degrade model performance. This paper formalizes and addresses the problem of post-deployment deterioration (PDD) monitoring. We propose D3M, a practical and efficient monitoring algorithm based on the disagreement of predictive models; it achieves low false positive rates under non-deteriorating shifts, and we provide sample complexity bounds guaranteeing high true positive rates under deteriorating shifts. Empirical results on both standard benchmarks and a real-world large-scale internal medicine dataset demonstrate the effectiveness of the framework and highlight its viability as an alert mechanism for high-stakes machine learning pipelines.
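To make the disagreement idea concrete, here is a minimal illustrative sketch (not the authors' D3M algorithm): two deployed models are compared on a labeled-free stream, and an alert fires when their disagreement rate on live data exceeds the rate observed on a reference window by a hypothetical margin. The function names and the `margin` threshold are assumptions for illustration only.

```python
import numpy as np

def disagreement_rate(preds_a, preds_b):
    """Fraction of inputs on which two models' predicted labels differ."""
    preds_a, preds_b = np.asarray(preds_a), np.asarray(preds_b)
    return float(np.mean(preds_a != preds_b))

def pdd_alert(ref_a, ref_b, live_a, live_b, margin=0.05):
    """Raise an alert when live disagreement exceeds the reference
    disagreement by more than `margin` (a hypothetical threshold,
    not the calibrated test used by D3M)."""
    return disagreement_rate(live_a, live_b) > disagreement_rate(ref_a, ref_b) + margin

# Toy example: the models agree on the reference window but
# diverge on post-shift data, triggering an alert.
rng = np.random.default_rng(0)
ref_a = rng.integers(0, 2, size=1000)
ref_b = ref_a.copy()                       # full agreement pre-deployment
live_a = rng.integers(0, 2, size=1000)
live_b = rng.integers(0, 2, size=1000)     # ~50% disagreement after a shift

print(pdd_alert(ref_a, ref_b, live_a, live_b))  # True
```

No labels or gradients are required here, matching the label-free, gradient-free setting described above; the paper's contribution is the theory (false-positive and sample-complexity guarantees) that such a simple monitor lacks.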