Towards Fair In-Context Learning with Tabular Foundation Models

📅 2025-05-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work identifies a previously unexamined group bias in in-context learning (ICL) for tabular foundation models, one that arises from biased demonstration selection and goes unaddressed under the zero-parameter-update paradigm. To address it, the authors conduct the first systematic fairness analysis of tabular ICL and propose three preprocessing debiasing strategies: correlation removal, group-balanced sampling, and uncertainty-weighted demonstration selection. Notably, the uncertainty-driven selection method achieves significant improvements in group fairness without modifying model parameters. Evaluated across multiple real-world tabular datasets, it reduces the Equalized Odds difference by 32.7% on average while preserving prediction accuracy. The implementation is publicly released to support reproducible research on fair ICL.
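The group-balanced sampling strategy described above can be sketched as follows. This is a minimal illustration, assuming binary (or categorical) group labels and a fixed demonstration budget; the function name and interface are hypothetical, not the paper's implementation.

```python
import numpy as np

def group_balanced_sample(group, n_demos, rng=None):
    """Return demonstration indices with an equal share from each group.

    group   : array of protected-group labels for the candidate pool
    n_demos : total number of in-context demonstrations to draw
    """
    rng = np.random.default_rng(rng)
    groups = np.unique(group)
    per_group = n_demos // len(groups)
    idx = np.concatenate([
        rng.choice(np.flatnonzero(group == g),
                   size=min(per_group, int(np.sum(group == g))),
                   replace=False)
        for g in groups
    ])
    return idx
```

A typical use would be `idx = group_balanced_sample(group, 32)` followed by passing `X[idx], y[idx]` as the ICL context, so that no protected group dominates the demonstrations.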

📝 Abstract
Tabular foundation models have exhibited strong in-context learning (ICL) capabilities on structured data, allowing them to make accurate predictions on test sets without parameter updates, using training examples as context. This emerging approach positions itself as a competitive alternative to traditional gradient-boosted tree methods. However, while biases in conventional machine learning models are well documented, it remains unclear how these biases manifest in tabular ICL. The paper investigates the fairness implications of tabular ICL and explores three preprocessing strategies (correlation removal, group-balanced demonstration selection, and uncertainty-based demonstration selection) to address bias. Comprehensive experiments indicate that uncertainty-based demonstration selection consistently enhances the group fairness of in-context predictions. The source code for reproducing the results of this work can be found at https://github.com/patrikken/Fair-TabICL.
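The correlation-removal strategy can be illustrated with a standard residualization: regress each feature on the sensitive attribute and keep only the residuals. This is a common preprocessing technique and only a sketch; the paper's exact procedure may differ, and `remove_correlation` is a hypothetical name.

```python
import numpy as np

def remove_correlation(X, s):
    """Residualize each column of X on the sensitive attribute s.

    Fits intercept + s by least squares for every feature jointly and
    subtracts the fitted linear component, so the returned features are
    linearly uncorrelated with s.
    """
    s = np.asarray(s, dtype=float)
    A = np.column_stack([np.ones_like(s), s])   # design: intercept + s
    coef, *_ = np.linalg.lstsq(A, X, rcond=None)
    return X - A @ coef
```

Because ordinary-least-squares residuals are orthogonal to the regressors, the transformed features have (numerically) zero linear correlation with the sensitive attribute, though nonlinear dependence may remain.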
Problem

Research questions and friction points this paper is trying to address.

Investigates fairness implications of tabular in-context learning
Explores preprocessing strategies to address bias in ICL
Evaluates uncertainty-based selection for enhancing group fairness
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses tabular foundation models for in-context learning
Explores three preprocessing strategies to address bias
Uncertainty-based demonstration selection enhances fairness
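The uncertainty-based selection idea in the points above can be sketched by weighting candidate demonstrations by a proxy model's predictive entropy and sampling the context accordingly. The entropy weighting and the function name are assumptions for illustration, not the paper's exact selection rule.

```python
import numpy as np

def binary_entropy(p):
    """Predictive entropy of positive-class probabilities p."""
    p = np.clip(np.asarray(p, dtype=float), 1e-12, 1 - 1e-12)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

def uncertainty_weighted_select(proba, n_demos, rng=None):
    """Sample demonstration indices with probability proportional to
    predictive uncertainty (entropy), without replacement.

    proba   : proxy model's positive-class probability per candidate
    n_demos : number of in-context demonstrations to select
    """
    rng = np.random.default_rng(rng)
    w = binary_entropy(proba)
    w = w / w.sum()                      # normalize to a distribution
    return rng.choice(len(w), size=n_demos, replace=False, p=w)
```

Candidates the proxy model is least sure about receive the largest sampling weight, so the selected context over-represents ambiguous examples rather than confidently labeled ones.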