🤖 AI Summary
Neural networks generalize poorly on small tabular datasets, where tree-based models remain dominant. This paper proposes AdaCap, a training scheme that strengthens neural networks, residual architectures in particular, through three components: (1) a permutation-based contrastive loss that improves representation robustness in low-data regimes; (2) a closed-form Tikhonov-regularized output mapping, making optimization of the prediction layer stable and analytically tractable; and (3) a lightweight meta-predictor, trained on dataset characteristics, that anticipates when AdaCap will be beneficial. Evaluation across 85 real-world regression benchmarks shows consistent and statistically significant gains in the small-sample regime, especially in ultra-low-data settings (<1,000 samples). The framework is computationally lightweight, and all code and experimental results are publicly available.
📝 Abstract
Neural networks struggle on small tabular datasets, where tree-based models remain dominant. We introduce the Adaptive Contrastive Approach (AdaCap), a training scheme that combines a permutation-based contrastive loss with a Tikhonov-based closed-form output mapping. Across 85 real-world regression datasets and multiple architectures, AdaCap yields consistent and statistically significant improvements in the small-sample regime, particularly for residual models. A meta-predictor trained on dataset characteristics (size, skewness, noise) accurately anticipates when AdaCap is beneficial. These results show that AdaCap acts as a targeted regularization mechanism, strengthening neural networks precisely where they are most fragile. All results and code are publicly available at https://github.com/BrunoBelucci/adacap.