Kernel Learning with Adversarial Features: Numerical Efficiency and Adaptive Regularization

📅 2025-10-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing adversarial training suffers from computationally expensive min-max optimization. This paper proposes a novel framework that transfers adversarial perturbations from the input space into the feature space of a reproducing kernel Hilbert space (RKHS), enabling an analytical solution of the inner maximization and efficient overall optimization. The core contribution is an adaptive adversarial perturbation mechanism in feature space whose regularization strength automatically adapts to the data noise level and the smoothness of the model function, and which extends naturally to multiple kernel learning. The method integrates RKHS theory, adversarial feature modeling, and iterative kernel ridge regression, and comes with a rigorous generalization error bound. Experiments demonstrate significant improvements in both robustness under adversarial attacks and generalization on clean test data, while substantially reducing training overhead compared to standard adversarial training.
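To see why the inner maximization becomes analytic, consider squared loss (a sketch consistent with the summary; the paper's exact formulation may differ). Perturbing the feature map $\phi(x_i)$ by $\delta_i$ with $\|\delta_i\|_{\mathcal H}\le\varepsilon$ gives a closed form:

```latex
\max_{\|\delta_i\|_{\mathcal H}\le\varepsilon}
\bigl(y_i-\langle f,\,\phi(x_i)+\delta_i\rangle_{\mathcal H}\bigr)^2
=\bigl(|y_i-f(x_i)|+\varepsilon\,\|f\|_{\mathcal H}\bigr)^2,
```

since $\langle f,\delta_i\rangle_{\mathcal H}$ ranges over $[-\varepsilon\|f\|_{\mathcal H},\,\varepsilon\|f\|_{\mathcal H}]$. The min-max problem thus collapses to a single minimization with a norm-dependent penalty, which accounts for both the efficiency gain and the adaptive regularization described above.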

📝 Abstract
Adversarial training has emerged as a key technique to enhance model robustness against adversarial input perturbations. Many existing methods rely on computationally expensive min-max problems that limit their application in practice. We propose a novel formulation of adversarial training in reproducing kernel Hilbert spaces, shifting from input-space to feature-space perturbations. This reformulation enables the exact solution of the inner maximization and efficient optimization. It also yields a regularized estimator that naturally adapts to the noise level and the smoothness of the underlying function. We establish conditions under which the feature-perturbed formulation is a relaxation of the original problem and propose an efficient optimization algorithm based on iterative kernel ridge regression. We provide generalization bounds that help to understand the properties of the method. We also extend the formulation to multiple kernel learning. Empirical evaluation shows good performance in both clean and adversarial settings.
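The multiple kernel learning extension mentioned in the abstract is not detailed here; as a point of reference, a standard alternating scheme for simplex-weighted kernel combinations in ridge regression looks as follows. This is an illustrative sketch using the classical ℓ1-MKL weight update (weights proportional to per-kernel function norms), not the paper's algorithm; `mkl_krr` and its parameters are hypothetical names.

```python
import numpy as np

def rbf_kernel(X, Z, gamma):
    # Gaussian RBF kernel matrix between row-sets X and Z
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mkl_krr(X, y, gammas=(0.1, 1.0, 10.0), lam=0.1, n_iter=15):
    """Alternating multiple kernel learning for ridge regression (sketch).

    Alternates a closed-form KRR solve for the combined kernel
    K = sum_m beta_m * K_m with the classical l1-MKL update
    beta_m proportional to ||f_m||_H (illustrative, not the paper's rule).
    """
    n = len(y)
    Ks = [rbf_kernel(X, X, g) for g in gammas]
    beta = np.ones(len(Ks)) / len(Ks)          # start with uniform weights
    for _ in range(n_iter):
        K = sum(b * Km for b, Km in zip(beta, Ks))
        alpha = np.linalg.solve(K + lam * n * np.eye(n), y)
        # ||f_m||_H = beta_m * sqrt(alpha' K_m alpha) for the m-th component
        norms = np.array([b * np.sqrt(max(alpha @ Km @ alpha, 1e-12))
                          for b, Km in zip(beta, Ks)])
        beta = norms / norms.sum()             # renormalize onto the simplex
    return alpha, beta
```

Each iteration costs one linear solve, so the combined-kernel weights come essentially for free on top of a single kernel ridge regression.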
Problem

Research questions and friction points this paper is trying to address.

Efficiently enhancing model robustness against adversarial input perturbations
Reducing the cost of the min-max optimization underlying standard adversarial training
Providing regularization that adapts to the noise level and the smoothness of the underlying function
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adversarial training formulated in reproducing kernel Hilbert spaces (RKHS)
Feature-space perturbations replace input perturbations
Efficient optimization via iterative kernel ridge regression
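The "iterative kernel ridge regression" step can be pictured as a fixed-point scheme for the relaxed objective $\sum_i (|r_i| + \varepsilon\|f\|_{\mathcal H})^2$: expanding it gives $\sum_i r_i^2 + 2\varepsilon\|f\|\sum_i|r_i| + n\varepsilon^2\|f\|^2$, and matching the last two terms against a standard penalty $\lambda n\|f\|^2$ at the current iterate suggests $\lambda = \varepsilon^2 + 2\varepsilon\sum_i|r_i|/(n\|f\|_{\mathcal H})$, a regularizer that adapts to the residual (noise) level and the function norm. The sketch below implements this heuristic; the function name and the update rule are illustrative, not the paper's actual algorithm.

```python
import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    # Gaussian RBF kernel matrix between row-sets X and Z
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def adversarial_krr(X, y, eps=0.1, gamma=1.0, lam0=0.1, n_iter=20):
    """Iterative KRR with a residual-adaptive penalty (illustrative sketch).

    Each iteration: closed-form KRR solve, then update lambda via
      lam = eps^2 + 2*eps*sum|r_i| / (n*||f||_H),
    so regularization strengthens with the residual level and eps.
    """
    n = len(y)
    K = rbf_kernel(X, X, gamma)
    lam = lam0
    for _ in range(n_iter):
        alpha = np.linalg.solve(K + lam * n * np.eye(n), y)
        resid = np.abs(y - K @ alpha)
        norm_f = np.sqrt(max(alpha @ K @ alpha, 1e-12))  # ||f||_H
        lam = eps ** 2 + 2 * eps * resid.sum() / (n * norm_f)
    return alpha, lam
```

Note that each iteration is a single linear solve, so the whole loop is far cheaper than gradient-based min-max adversarial training, which needs an inner attack loop per step.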