PENEX: AdaBoost-Inspired Neural Network Regularization

📅 2025-10-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the fact that the exponential loss function in multiclass AdaBoost is ill-suited for first-order optimization. To resolve this, we propose PENEX—a differentiable multiclass exponential loss amenable to standard gradient-based training. PENEX preserves core AdaBoost properties, including margin maximization and strong generalization, thus facilitating seamless integration of AdaBoost principles into deep neural networks. Theoretically, PENEX implicitly enforces strong regularization, and its gradient increments implicitly parameterize weak learners in the boosting framework. Empirically, it often outperforms established regularization methods on diverse computer vision and natural language processing benchmarks, at comparable computational cost. These results demonstrate PENEX’s practicality, scalability, and robust generalization across modalities.

📝 Abstract
AdaBoost sequentially fits so-called weak learners to minimize an exponential loss, which penalizes mislabeled data points more severely than other loss functions like cross-entropy. Paradoxically, AdaBoost generalizes well in practice as the number of weak learners grows. In the present work, we introduce Penalized Exponential Loss (PENEX), a new formulation of the multi-class exponential loss that is theoretically grounded and, in contrast to the existing formulation, amenable to optimization via first-order methods. We demonstrate both empirically and theoretically that PENEX implicitly maximizes margins of data points. Also, we show that gradient increments on PENEX implicitly parameterize weak learners in the boosting framework. Across computer vision and language tasks, we show that PENEX exhibits a regularizing effect often better than established methods with similar computational cost. Our results highlight PENEX's potential as an AdaBoost-inspired alternative for effective training and fine-tuning of deep neural networks.
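The abstract's starting observation—that the exponential loss penalizes mislabeled points more severely than cross-entropy—can be made concrete with a small comparison. The sketch below illustrates the classical binary exponential and cross-entropy losses as functions of the margin m = y·f(x); it is an illustration of the contrast the paper builds on, not the PENEX loss itself, whose multiclass form is defined in the paper.

```python
import numpy as np

# Margins m = y * f(x): negative means misclassified, positive means correct.
margins = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])

# AdaBoost's exponential loss grows exponentially for negative margins,
# while cross-entropy (logistic loss) grows only linearly there.
exp_loss = np.exp(-margins)
ce_loss = np.log1p(np.exp(-margins))

for m, e, c in zip(margins, exp_loss, ce_loss):
    print(f"margin={m:+.1f}  exponential={e:7.3f}  cross-entropy={c:7.3f}")
```

At a margin of -2, the exponential loss is roughly e² ≈ 7.39 versus about 2.13 for cross-entropy, which is why AdaBoost-style losses focus so aggressively on hard, mislabeled examples—and why, as the abstract notes, their good generalization is paradoxical.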
Problem

Research questions and friction points this paper is trying to address.

Introduces a theoretically grounded multi-class exponential loss function
Demonstrates implicit margin maximization and weak learner parameterization
Provides effective regularization for deep neural network training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Penalized Exponential Loss for neural network regularization
Implicitly maximizes margins of data points
Gradient increments parameterize weak learners implicitly
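The "margin maximization" highlighted above refers to the multiclass margin: the score of the true class minus the best competing score. A hedged sketch of that quantity, with a hypothetical scoring function and illustrative values not taken from the paper:

```python
import numpy as np

def multiclass_margin(scores: np.ndarray, label: int) -> float:
    """Score of the true class minus the best competing class score.

    Positive and large: confidently correct. Negative: misclassified.
    Margin-maximizing training pushes these values up across the data.
    """
    competitors = np.delete(scores, label)  # scores of all other classes
    return float(scores[label] - competitors.max())

scores = np.array([2.0, 0.5, -1.0])  # hypothetical class scores f(x)
print(multiclass_margin(scores, label=0))  # 1.5: correct, confident
print(multiclass_margin(scores, label=2))  # -3.0: misclassified
```

Implicitly maximizing this margin, rather than merely minimizing classification error, is what the paper credits for PENEX's regularizing effect.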