Learning with Exact Invariances in Polynomial Time

šŸ“… 2025-02-27
šŸ“ˆ Citations: 0
✨ Influential: 0
šŸ“„ PDF
šŸ¤– AI Summary
This work investigates the statistical-computational trade-off in achieving **exact geometric invariance** in kernel regression. Existing approaches, such as data augmentation and group averaging, fail to simultaneously guarantee polynomial-time complexity and strict invariance in the kernel setting. We propose the first polynomial-time algorithm that, given oracle access to the geometric properties of the input space, achieves **exact invariance** while preserving the generalization performance of standard kernel regression. Our core innovation lies in reformulating invariant learning as a convex quadratic program with infinitely many linear constraints, solved via geometrically informed reconstruction in kernel space and constrained optimization, integrating tools from differential geometry, spectral theory, and convex optimization. We theoretically establish that the algorithm attains the same excess risk bound as conventional kernel regression, thereby ensuring both statistical optimality and computational tractability.
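To make the invariance requirement concrete: for a *finite* symmetry group, group averaging of the kernel does yield exact invariance (the paper's contribution is handling settings where such averaging is not tractable or not applicable). The sketch below is an illustrative baseline, not the paper's algorithm; it assumes a small sign-flip group and an RBF base kernel.

```python
import numpy as np

def rbf(X, Y, gamma=1.0):
    # Gaussian (RBF) kernel matrix between rows of X and rows of Y
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def averaged_kernel(X, Y, group, base=rbf):
    # Group-averaged kernel: K_G(x, y) = (1/|G|) * sum_{g in G} k(x, g.y)
    # Any function in the induced RKHS is exactly G-invariant.
    return sum(base(X, g(Y)) for g in group) / len(group)

# Example symmetry group G = {+I, -I} acting on R^d (sign flips)
group = [lambda Z: Z, lambda Z: -Z]

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 3))
y = np.sum(X ** 2, axis=1)  # target is itself sign-invariant: f(x) = f(-x)

# Kernel ridge regression with the averaged (invariant) kernel
lam = 1e-3
K = averaged_kernel(X, X, group)
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)

def predict(Xnew):
    return averaged_kernel(Xnew, X, group) @ alpha

x = rng.normal(size=(1, 3))
# The fitted regressor is invariant by construction: same output on x and -x
print(np.allclose(predict(x), predict(-x)))  # True
```

For infinite or exponentially large groups, this direct averaging is no longer polynomial-time, which is the regime the paper's oracle-based algorithm addresses.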

šŸ“ Abstract
We study the statistical-computational trade-offs for learning with exact invariances (or symmetries) using kernel regression. Traditional methods, such as data augmentation, group averaging, canonicalization, and frame-averaging, either fail to provide a polynomial-time solution or are not applicable in the kernel setting. However, with oracle access to the geometric properties of the input space, we propose a polynomial-time algorithm that learns a classifier with *exact* invariances. Moreover, our approach achieves the same excess population risk (or generalization error) as the original kernel regression problem. To the best of our knowledge, this is the first polynomial-time algorithm to achieve exact (not approximate) invariances in this context. Our proof leverages tools from differential geometry, spectral theory, and optimization. A key result in our development is a new reformulation of the problem of learning under invariances as optimizing an infinite number of linearly constrained convex quadratic programs, which may be of independent interest.
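The reformulation mentioned at the end of the abstract can be written schematically as follows (a hedged reading, not the paper's exact statement; here $\mathcal{H}$ is the RKHS, $G$ the symmetry group, and $\mathcal{X}$ the input space):

```latex
\min_{f \in \mathcal{H}} \;
\frac{1}{n} \sum_{i=1}^{n} \bigl( f(x_i) - y_i \bigr)^2
+ \lambda \, \| f \|_{\mathcal{H}}^2
\quad \text{s.t.} \quad
f(g \cdot x) = f(x) \;\; \forall\, g \in G, \; \forall\, x \in \mathcal{X}.
```

The objective is a convex quadratic in $f$, while the invariance requirement contributes infinitely many linear equality constraints, matching the "linearly constrained convex quadratic programs" structure described above.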
Problem

Research questions and friction points this paper is trying to address.

Polynomial-time learning with exact invariances
Kernel regression under exact symmetries
Matching the generalization error of unconstrained kernel regression
Innovation

Methods, ideas, or system contributions that make the work stand out.

Polynomial-time algorithm for exact (not approximate) invariances
Kernel regression with oracle access to geometric properties of the input space
Reformulation as infinitely many linearly constrained convex quadratic programs