Statistical Mechanics of Support Vector Regression

📅 2024-12-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates the relationship between neural representation geometry and performance in continuous decoding tasks, focusing on how neural variability affects linear decodability and regression accuracy. We develop a mean-field theory of ε-insensitive support vector regression (ε-SVR) grounded in statistical mechanics, deriving analytical learning curves that reveal a phase transition in the training error at a critical load. We further uncover a double-descent phenomenon in the generalization error and show that the tolerance parameter ε doubles as a tunable regularizer, suppressing and shifting the double-descent peaks and thereby allowing the bias–variance trade-off to be controlled. Our theoretical predictions are validated on both analytically tractable toy models and deep neural networks. This extends classical SVM theory beyond binary classification, combining capacity theory and phase-transition analysis for continuous regression tasks with intrinsic neural variability.
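For context, the standard ε-insensitive loss and the primal ε-SVR objective (textbook forms, given here for orientation rather than reproduced from the paper) make the role of the tolerance parameter explicit: residuals inside the ε-tube incur no penalty.

```latex
% epsilon-insensitive loss: residuals inside the epsilon-tube cost nothing
\ell_\varepsilon(y, \hat{y}) = \max\!\bigl(0,\; |y - \hat{y}| - \varepsilon\bigr)

% primal epsilon-SVR problem over weights w and bias b, with P training pairs
\min_{\mathbf{w},\, b}\;\; \tfrac{1}{2}\lVert \mathbf{w} \rVert^{2}
  + C \sum_{i=1}^{P} \ell_\varepsilon\!\bigl(y_i,\; \mathbf{w}\cdot\mathbf{x}_i + b\bigr)
```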

📝 Abstract
A key problem in deep learning and computational neuroscience is relating the geometrical properties of neural representations to task performance. Here, we consider this problem for continuous decoding tasks where neural variability may affect task precision. Using methods from statistical mechanics, we study the average-case learning curves for $\varepsilon$-insensitive Support Vector Regression ($\varepsilon$-SVR) and discuss its capacity as a measure of linear decodability. Our analysis reveals a phase transition in the training error at a critical load, capturing the interplay between the tolerance parameter $\varepsilon$ and neural variability. We uncover a double-descent phenomenon in the generalization error, showing that $\varepsilon$ acts as a regularizer, both suppressing and shifting these peaks. Theoretical predictions are validated both on toy models and deep neural networks, extending the theory of Support Vector Machines to continuous tasks with inherent neural variability.
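The learning-curve setup can be probed numerically. Below is a minimal sketch (not the paper's code) that fits a linear ε-SVR to a noisy linear teacher at several loads α = P/N; under these assumptions the ε-insensitive training error drops toward zero at low load, while the test error tends to peak near the interpolation threshold. The teacher construction, noise level, and large-C (weakly regularized) regime are all illustrative choices.

```python
# Sketch: empirical epsilon-SVR learning curve over the load alpha = P/N.
import numpy as np
from sklearn.svm import LinearSVR

rng = np.random.default_rng(0)
N = 100                                          # input dimension
eps = 0.1                                        # tolerance parameter
noise = 0.2                                      # stand-in for neural variability
w_star = rng.standard_normal(N) / np.sqrt(N)     # linear teacher weights

for alpha in [0.25, 0.5, 1.0, 2.0, 4.0]:         # load = samples / dimension
    P = int(alpha * N)
    X = rng.standard_normal((P, N))
    y = X @ w_star + noise * rng.standard_normal(P)
    X_test = rng.standard_normal((2000, N))
    y_test = X_test @ w_star + noise * rng.standard_normal(2000)

    # Large C approximates a weakly regularized regime (illustrative choice).
    svr = LinearSVR(epsilon=eps, C=1e3, max_iter=50_000)
    svr.fit(X, y)
    # epsilon-insensitive training error: zero when all residuals fit in the tube
    train_err = np.mean(np.maximum(np.abs(y - svr.predict(X)) - eps, 0.0))
    test_mse = np.mean((y_test - svr.predict(X_test)) ** 2)
    print(f"alpha={alpha:4.2f}  train err={train_err:.4f}  test MSE={test_mse:.4f}")
```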
Problem

Research questions and friction points this paper is trying to address.

Relates the geometry of neural representations to continuous decoding performance
Studies average-case learning curves for ε-insensitive Support Vector Regression
Examines the phase transition in training error and double descent in generalization error
Innovation

Methods, ideas, or system contributions that make the work stand out.

Statistical-mechanics derivation of analytical ε-SVR learning curves
Phase transition in the training error at a critical load
Double descent in the generalization error, with ε acting as a regularizer (see the sketch below)
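A companion sketch for the last point: fixing the load just above the interpolation threshold and sweeping ε illustrates its claimed regularizing effect on the generalization error. Again, this uses scikit-learn's LinearSVR as a stand-in rather than the paper's mean-field equations, and the specific constants are arbitrary.

```python
# Sketch: epsilon as a regularizer near the interpolation threshold.
# Larger epsilon widens the insensitive tube, which should suppress the
# double-descent peak in test error (qualitative illustration only).
import numpy as np
from sklearn.svm import LinearSVR

rng = np.random.default_rng(1)
N, P = 100, 110                                  # load alpha = P/N just above 1
w_star = rng.standard_normal(N) / np.sqrt(N)
X = rng.standard_normal((P, N))
y = X @ w_star + 0.2 * rng.standard_normal(P)    # noisy training labels
X_test = rng.standard_normal((2000, N))
y_test = X_test @ w_star                         # clean teacher output as target

for eps in [0.0, 0.05, 0.1, 0.2, 0.5]:
    svr = LinearSVR(epsilon=eps, C=1e3, max_iter=50_000).fit(X, y)
    mse = np.mean((y_test - svr.predict(X_test)) ** 2)
    print(f"eps={eps:4.2f}  test MSE={mse:.4f}")
```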