AdaCap: An Adaptive Contrastive Approach for Small-Data Neural Networks

📅 2025-11-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Neural networks generalize poorly on small tabular datasets, where tree-based models remain dominant. This paper proposes AdaCap, a training framework that improves residual network performance through three key components: (1) a permutation-based contrastive loss that makes learned representations more robust in low-data regimes; (2) a closed-form Tikhonov-regularized output mapping, giving a stable and analytically tractable prediction layer; and (3) a lightweight meta-predictor that anticipates, from dataset characteristics, when AdaCap's regularization will be beneficial. Evaluation across 85 real-world regression benchmarks shows that AdaCap yields consistent gains in predictive accuracy, particularly in ultra-low-data settings (<1,000 samples). The framework is computationally efficient and theoretically grounded, and all code and experimental results are publicly available.

📝 Abstract
Neural networks struggle on small tabular datasets, where tree-based models remain dominant. We introduce Adaptive Contrastive Approach (AdaCap), a training scheme that combines a permutation-based contrastive loss with a Tikhonov-based closed-form output mapping. Across 85 real-world regression datasets and multiple architectures, AdaCap yields consistent and statistically significant improvements in the small-sample regime, particularly for residual models. A meta-predictor trained on dataset characteristics (size, skewness, noise) accurately anticipates when AdaCap is beneficial. These results show that AdaCap acts as a targeted regularization mechanism, strengthening neural networks precisely where they are most fragile. All results and code are publicly available at https://github.com/BrunoBelucci/adacap.
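The "Tikhonov-based closed-form output mapping" from the abstract can be illustrated with standard ridge regression on the last hidden layer: the output weights have an exact analytic solution, so the prediction layer never needs gradient updates. This is a minimal sketch of that idea, not the paper's code; the function and parameter names (`tikhonov_output_layer`, `lam`) are hypothetical.

```python
import numpy as np

def tikhonov_output_layer(H, y, lam=1.0):
    """Closed-form Tikhonov-regularized (ridge) output weights.

    Solves min_w ||H w - y||^2 + lam * ||w||^2 analytically:
        w = (H^T H + lam * I)^{-1} H^T y
    H   : (n, d) last-hidden-layer activations
    y   : (n,) regression targets
    lam : regularization strength (hypothetical parameter name)
    """
    d = H.shape[1]
    return np.linalg.solve(H.T @ H + lam * np.eye(d), H.T @ y)

# Toy check: on noiseless data with lam -> 0, the mapping recovers
# the true output weights exactly (ordinary least squares limit).
rng = np.random.default_rng(0)
H = rng.normal(size=(50, 3))
w_true = np.array([1.0, -2.0, 0.5])
w_hat = tikhonov_output_layer(H, H @ w_true, lam=1e-8)
```

Because the solution is closed-form, the output layer stays exactly optimal for the current representation at every training step, which is one plausible reason the abstract calls the optimization "stable and analytically tractable".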
Problem

Research questions and friction points this paper is trying to address.

Improves neural network performance on small tabular datasets
Addresses neural network fragility through adaptive contrastive regularization
Enhances residual models in small-sample regression scenarios
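The abstract also describes a meta-predictor trained on dataset characteristics (size, skewness, noise) to anticipate when AdaCap helps. A toy version of that idea is a classifier over per-dataset meta-features; the sketch below uses only a single meta-feature (log sample size) and invented labels, purely for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical meta-dataset: one row per benchmark dataset. The paper's
# meta-features include size, skewness, and noise; this sketch keeps only
# log(n_samples) for clarity.
log_n = np.array([[4.0], [5.0], [6.0], [9.0], [10.0], [11.0]])
# Invented labels: 1 = "AdaCap improved this dataset" (here, the small ones).
adacap_helped = np.array([1, 1, 1, 0, 0, 0])

meta_clf = LogisticRegression().fit(log_n, adacap_helped)

# Query: enable AdaCap for a new ~150-sample dataset (log n ~ 5)?
use_adacap = meta_clf.predict([[5.0]])[0]
```

The design choice is cheap by construction: the meta-predictor sees only a handful of summary statistics per dataset, so deciding whether to apply AdaCap costs almost nothing relative to training the network itself.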
Innovation

Methods, ideas, or system contributions that make the work stand out.

Permutation-based contrastive loss for small datasets
Tikhonov-based closed-form output mapping
Targeted regularization mechanism for neural networks
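One plausible reading of the permutation-based contrastive loss above, sketched for regression: fitting the true targets is the positive term, while a random permutation of the targets (which destroys the feature-target pairing) supplies a negative term the network should not fit. This is my interpretation under stated assumptions, not the paper's loss; `alpha` and the function name are hypothetical.

```python
import numpy as np

def permutation_contrastive_loss(preds, y, rng, alpha=0.1):
    """Sketch of a permutation-based contrastive loss for regression.

    Positive term: predictions should match the true targets.
    Negative term: predictions should NOT match a random permutation
    of the targets, since permuting breaks the feature-target pairing.
    alpha (hypothetical name) weights the contrastive term.
    """
    pos = np.mean((preds - y) ** 2)
    y_perm = rng.permutation(y)
    neg = np.mean((preds - y_perm) ** 2)
    return pos - alpha * neg

rng = np.random.default_rng(0)
y = np.linspace(-1.0, 1.0, 16)
# A perfect fit scores below a collapsed predictor that outputs the mean,
# because matching permuted targets is penalized.
perfect = permutation_contrastive_loss(y, y, rng)
collapsed = permutation_contrastive_loss(np.zeros_like(y), y, rng)
```

Intuitively, a constant (collapsed) predictor is equally close to the true and permuted targets, so the contrastive term gives it no credit; only predictions that track the actual feature-target pairing score well, which matches the page's framing of AdaCap as a targeted regularizer for low-data regimes.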