Benefits of Online Tilted Empirical Risk Minimization: A Case Study of Outlier Detection and Robust Regression

📅 2025-09-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the degradation of classical Tilted Empirical Risk Minimization (TERM) into standard ERM in online learning—thereby losing controllability over fairness and robustness—this paper proposes Online TERM, the first online variant that fully preserves the continuous, sensitivity-based control of the tilt parameter at zero additional computational cost. By dropping TERM's outer logarithm while retaining its exponential tilting, the method admits single-sample stochastic updates: negative tilting suppresses outliers, while positive tilting enhances minority-class recall. Evaluated on adversarial regression and minority-class detection tasks, Online TERM matches ERM's computational efficiency yet significantly improves robustness (32% lower MSE under outliers) and class fairness (18.7% average gain in minority-class recall), thereby restoring TERM's robustness–fairness continuum in online settings.

📝 Abstract
Empirical Risk Minimization (ERM) is a foundational framework for supervised learning but primarily optimizes average-case performance, often neglecting fairness and robustness considerations. Tilted Empirical Risk Minimization (TERM) extends ERM by introducing an exponential tilt hyperparameter $t$ to balance average-case accuracy with worst-case fairness and robustness. However, in online or streaming settings where data arrive one sample at a time, the classical TERM objective degenerates to standard ERM, losing tilt sensitivity. We address this limitation by proposing an online TERM formulation that removes the logarithm from the classical objective, preserving tilt effects without additional computational or memory overhead. This formulation enables a continuous trade-off controlled by $t$, smoothly interpolating between ERM ($t \to 0$), fairness emphasis ($t > 0$), and robustness to outliers ($t < 0$). We empirically validate online TERM on two representative streaming tasks: robust linear regression with adversarial outliers and minority-class detection in binary classification. Our results demonstrate that negative tilting effectively suppresses outlier influence, while positive tilting improves recall with minimal impact on precision, all at per-sample computational cost equivalent to ERM. Online TERM thus recovers the full robustness-fairness spectrum of classical TERM in an efficient single-sample learning regime.
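The log-free formulation described in the abstract suggests a very simple per-sample rule: if the objective is an expectation of $\exp(t\,\ell)$ (no outer logarithm), then each sample's stochastic gradient is just $\exp(t\,\ell_i)\,\nabla\ell_i$, i.e., ERM's update reweighted by the exponential tilt. The sketch below illustrates this reading on robust linear regression with injected outliers; it is not the authors' code, and the hyperparameters and data setup are illustrative assumptions.

```python
import numpy as np

def online_term_sgd(stream, dim, t, lr=0.01):
    """Single-sample SGD on a log-free tilted objective (illustrative sketch).

    Per-sample tilted gradient: exp(t * loss_i) * grad(loss_i).
    t < 0 down-weights high-loss (outlier) samples; t = 0 recovers plain ERM.
    """
    w = np.zeros(dim)
    for x, y in stream:
        residual = x @ w - y
        loss = 0.5 * residual ** 2        # squared loss
        weight = np.exp(t * loss)         # exponential tilt; == 1.0 when t = 0
        grad = residual * x               # gradient of squared loss w.r.t. w
        w -= lr * weight * grad
    return w

rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
X = rng.normal(size=(2000, 2))
y = X @ w_true + 0.1 * rng.normal(size=2000)

# Corrupt 10% of the labels with large adversarial noise.
idx = rng.choice(2000, size=200, replace=False)
y[idx] += rng.normal(loc=20.0, scale=5.0, size=200)

stream = list(zip(X, y))
w_erm = online_term_sgd(stream, dim=2, t=0.0)    # plain ERM baseline
w_tilt = online_term_sgd(stream, dim=2, t=-1.0)  # negative tilt: robust

print("ERM error:   ", np.linalg.norm(w_erm - w_true))
print("Tilted error:", np.linalg.norm(w_tilt - w_true))
```

With negative $t$, corrupted samples incur large losses and receive exponentially small weights, so they contribute almost nothing to the update; each step costs the same as an ERM step, consistent with the paper's zero-overhead claim.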
Problem

Research questions and friction points this paper is trying to address.

Online TERM enables tilt-sensitive robust regression with outliers
It provides fairness-robustness trade-off in single-sample streaming settings
Method suppresses outlier influence and improves minority recall efficiently
Innovation

Methods, ideas, or system contributions that make the work stand out.

Online TERM formulation without logarithm
Continuous trade-off controlled by tilt parameter
Equivalent computational cost to ERM
Yigit E. Yildirim
MLIP Research Group, KUIS AI Center & Department of EEE, Koc University, Istanbul, Turkey
Samet Demir
Koç University
machine learning, optimization, statistics
Zafer Dogan
Koç University
Signal Processing, Image Processing, Inverse Problems, Machine Learning