QuAIL: Quality-Aware Inertial Learning for Robust Training under Data Corruption

📅 2026-02-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses robust training on tabular data affected by heterogeneous corruption (noise, missing values, and feature-specific bias) when only column-level reliability indicators, rather than instance-wise quality labels, are available. The authors propose a quality-aware training mechanism that integrates column-wise reliability priors directly into the optimization dynamics: a learnable feature-modulation layer is trained jointly with a quality-dependent proximal regularizer, so each feature's contribution is adapted according to its trustworthiness without explicit data imputation or sample reweighting. Experiments across 50 classification and regression datasets show consistent improvements over neural baselines under both random and value-dependent corruption, with particularly robust behavior in low-data regimes and under systematic feature bias.
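The mechanism described above can be made concrete as a joint objective. The formulation below is an assumed reading of the summary, not an equation taken from the paper; f_θ, m, q_j, and λ(·) are notation introduced here for illustration.

```latex
% Hypothetical formulation of the quality-dependent proximal objective
% (notation assumed, not reproduced from the paper):
%   f_\theta   : base model
%   m          : learnable per-column modulation weights
%   q_j        : column-level reliability score in [0, 1]
%   \lambda(.) : maps reliability to a proximal strength
\min_{\theta,\, m}\;
  \mathcal{L}\bigl(f_\theta(m \odot x),\, y\bigr)
  \;+\; \sum_{j} \lambda(q_j)\,\bigl(m_j - m_j^{(t)}\bigr)^{2}
```

Here m^{(t)} denotes the modulation weights from the previous update, so the proximal term adds per-column inertia whose strength is governed by that column's reliability score; whether stronger inertia attaches to low- or high-reliability columns is not pinned down by the summary and is left open here.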

📝 Abstract
Tabular machine learning systems are frequently trained on data affected by non-uniform corruption, including noisy measurements, missing entries, and feature-specific biases. In practice, these defects are often documented only through column-level reliability indicators rather than instance-wise quality annotations, limiting the applicability of many robustness and cleaning techniques. We present QuAIL, a quality-informed training mechanism that incorporates feature reliability priors directly into the learning process. QuAIL augments existing models with a learnable feature-modulation layer whose updates are selectively constrained by a quality-dependent proximal regularizer, thereby inducing controlled adaptation across features of varying trustworthiness. This stabilizes optimization under structured corruption without explicit data repair or sample-level reweighting. Empirical evaluation across 50 classification and regression datasets demonstrates that QuAIL consistently improves average performance over neural baselines under both random and value-dependent corruption, with especially robust behavior in low-data and systematically biased settings. These results suggest that incorporating feature reliability information directly into optimization dynamics is a practical and effective approach for resilient tabular learning.
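The paper's implementation is not reproduced here; the sketch below is a minimal PyTorch illustration, under stated assumptions, of one way a learnable feature-modulation layer with a reliability-dependent proximal (inertial) update could look. All names (`FeatureModulation`, `quail_step`, `prox_strength`, `reliability`) are illustrative, not the authors' API.

```python
# Minimal sketch (assumed, not the authors' code) of quality-aware feature
# modulation with a reliability-dependent proximal step.
import torch
import torch.nn as nn


class FeatureModulation(nn.Module):
    """Learnable per-column scaling applied to the inputs of a base model."""

    def __init__(self, num_features: int):
        super().__init__()
        # Start at 1 so the base model initially sees unmodified features.
        self.scale = nn.Parameter(torch.ones(num_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.scale


def quail_step(model, modulation, x, y, loss_fn, optimizer,
               reliability, prox_strength=1.0):
    """One training step with a reliability-dependent proximal correction.

    `reliability` holds one score in [0, 1] per column. As an assumption,
    less reliable columns get a stronger pull back toward their previous
    modulation weights (more inertia), limiting how fast they adapt.
    """
    prev_scale = modulation.scale.detach().clone()  # anchor for the proximal map

    # Standard gradient step on the task loss for model + modulation weights.
    optimizer.zero_grad()
    task_loss = loss_fn(model(modulation(x)), y)
    task_loss.backward()
    optimizer.step()

    # Closed-form proximal map of the quadratic penalty, per column:
    #   argmin_m  1/2 ||m - m_gd||^2 + lam/2 ||m - m_prev||^2
    #   = (m_gd + lam * m_prev) / (1 + lam)
    with torch.no_grad():
        lam = prox_strength * (1.0 - reliability)
        modulation.scale.copy_(
            (modulation.scale + lam * prev_scale) / (1.0 + lam)
        )
    return task_loss.item()
```

Because the quadratic proximal map has a closed form, the inertial correction can be applied after any off-the-shelf optimizer step, with the optimizer covering both the base model's and the modulation layer's parameters; the mapping from reliability scores to proximal strength used above is an assumption for illustration only.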
Problem

Research questions and friction points this paper is trying to address.

data corruption
tabular machine learning
feature reliability
non-uniform corruption
quality-aware learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

quality-aware learning
feature reliability
proximal regularization
tabular data robustness
inertial learning