Distributionally robust self-supervised learning for tabular data

📅 2024-10-11
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
Existing methods for tabular data lack robust representation learning for error-prone subpopulations (error slices), particularly under distributional shifts. Method: This paper introduces the first distributionally robust self-supervised pretraining framework for tabular data. Departing from prior work focused on supervised learning or vision domains, it jointly integrates Just Train Twice (JTT) and Deep Feature Reweighting (DFR) during masked language modeling (MLM) pretraining to enable sample reweighting and slice balancing for high-cardinality feature reconstruction. Additionally, it proposes class-specific encoder-decoder architectures and an ensemble paradigm. Results: Evaluated on multiple benchmark tabular datasets, the method significantly improves worst-group accuracy (+3.2–7.8%) and consistently outperforms empirical risk minimization (ERM) and state-of-the-art robust baselines across downstream classification tasks in both robustness and generalization.
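The masked-language-modeling pretraining described above corrupts random cells of each tabular row and asks the encoder-decoder to reconstruct them. This is not the paper's code; it is a minimal sketch of the BERT-style masking step for tabular rows, with the `MASK` token and `mask_row` helper being illustrative names.

```python
import random

MASK = "<MASK>"  # illustrative mask token, not from the paper

def mask_row(row, mask_prob=0.15, rng=None):
    """Randomly replace cells of a tabular row with a mask token
    (BERT-style), returning the corrupted row and the reconstruction
    targets that the MLM loss would be computed against."""
    rng = rng or random.Random(0)
    corrupted, targets = [], {}
    for i, value in enumerate(row):
        if rng.random() < mask_prob:
            corrupted.append(MASK)
            targets[i] = value  # the model must reconstruct this cell
        else:
            corrupted.append(value)
    return corrupted, targets
```

For high-cardinality categorical features, each reconstruction target is a large multi-class prediction, which is one reason error slices are hard to handle in this phase.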

📝 Abstract
Machine learning (ML) models trained using Empirical Risk Minimization (ERM) often exhibit systematic errors on specific subpopulations of tabular data, known as error slices. Learning robust representations in the presence of error slices is challenging, especially in self-supervised settings during the feature reconstruction phase, due to high-cardinality features and the complexity of constructing error sets. Traditional robust representation learning methods largely focus on improving worst-group performance in supervised settings in computer vision, leaving a gap in approaches tailored for tabular data. We address this gap by developing a framework to learn robust representations of tabular data during self-supervised pre-training. Our approach uses an encoder-decoder model trained with a Masked Language Modeling (MLM) loss to learn robust latent representations. This paper applies the Just Train Twice (JTT) and Deep Feature Reweighting (DFR) methods during the pre-training phase for tabular data. These methods fine-tune the ERM pre-trained model by up-weighting error-prone samples or creating balanced datasets for specific categorical features. This results in specialized models for each feature, which are then used in an ensemble approach to enhance downstream classification performance. This methodology improves robustness across slices, thus enhancing overall generalization performance. Extensive experiments across various datasets demonstrate the efficacy of our approach. The code is available at: https://github.com/amazon-science/distributionally-robust-self-supervised-learning-for-tabular-data.
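JTT, as summarized in the abstract, is a two-pass procedure: train once with ERM, collect the samples the model gets wrong (the error set), then retrain with those samples up-weighted. The sketch below is an assumption-laden illustration of that control flow, not the paper's implementation; `train_fn`, `eval_fn`, and `lambda_up` are hypothetical stand-ins for the training routine, the per-sample correctness check, and the up-weighting factor.

```python
def just_train_twice(train_fn, eval_fn, data, lambda_up=5.0):
    """Illustrative JTT loop: ERM pass, error-set construction,
    then a re-weighted second pass."""
    # Stage 1: standard ERM training with uniform sample weights.
    model = train_fn(data, weights=[1.0] * len(data))
    # Build the error set: samples the ERM model handles incorrectly.
    errors = [not eval_fn(model, x) for x in data]
    # Stage 2: retrain with error-prone samples up-weighted.
    weights = [lambda_up if e else 1.0 for e in errors]
    return train_fn(data, weights=weights)
```

In the paper's setting the second pass fine-tunes the MLM-pretrained encoder-decoder rather than a supervised classifier, but the reweighting structure is the same.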
Problem

Research questions and friction points this paper is trying to address.

Address systematic errors on specific tabular data subpopulations
Learn robust representations in self-supervised tabular learning
Improve robustness across error slices for better generalization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses encoder-decoder with MLM loss
Applies JTT and DFR methods
Ensembles specialized models per feature