Geometric Properties of Neural Multivariate Regression

📅 2025-10-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work identifies the fundamental cause of “feature collapse” in neural multivariate regression: unlike classification, regression requires the intrinsic dimensionality of learned representations to match that of the target space; collapse induces over-compression, degrading structural fidelity to the regression targets. To address this, we propose a geometric analysis framework grounded in intrinsic dimension estimation, which quantifies the spatial structural alignment between final-layer features and regression targets. We identify two harmful regimes—over-compression and under-compression—and derive a dimension-matching criterion to guide feature compression. Experiments across control, financial, and synthetic datasets demonstrate substantial improvements in regression accuracy, with markedly enhanced robustness in low-sample and high-noise regimes. This work establishes the first interpretable geometric perspective on neural regression and introduces a principled, actionable paradigm for intrinsic dimension optimization.

📝 Abstract
Neural multivariate regression underpins a wide range of domains such as control, robotics, and finance, yet the geometry of its learned representations remains poorly characterized. While neural collapse has been shown to benefit generalization in classification, we find that analogous collapse in regression consistently degrades performance. To explain this contrast, we analyze models through the lens of intrinsic dimension. Across control tasks and synthetic datasets, we estimate the intrinsic dimension of last-layer features (ID_H) and compare it with that of the regression targets (ID_Y). Collapsed models exhibit ID_H < ID_Y. For the non-collapsed models, performance with respect to ID_H depends on the data quantity and noise levels. From these observations, we identify two regimes (over-compressed and under-compressed) that determine when expanding or reducing feature dimensionality improves performance. Our results provide new geometric insights into neural regression and suggest practical strategies for enhancing generalization.
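The abstract turns on estimating the intrinsic dimension of last-layer features (ID_H) and of the targets (ID_Y). The paper's specific estimator is not named in this card; as one standard choice, the TwoNN estimator (Facco et al., 2017), which uses only each point's two nearest-neighbour distances, could be sketched roughly as follows (a minimal illustration, not the authors' implementation):

```python
import numpy as np

def two_nn_id(X: np.ndarray) -> float:
    """Estimate intrinsic dimension with the TwoNN method:
    ID ~ 1 / mean(log(r2 / r1)), where r1 and r2 are each point's
    distances to its first and second nearest neighbours."""
    # Brute-force pairwise distances; for large X use a KD-tree instead.
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)   # exclude self-distances
    d.sort(axis=1)                # row-wise ascending distances
    r1, r2 = d[:, 0], d[:, 1]
    return 1.0 / np.mean(np.log(r2 / r1))

# Sanity check: a 2-D manifold linearly embedded in 10-D ambient space
# should yield an estimate near 2, regardless of the ambient dimension.
rng = np.random.default_rng(0)
Z = rng.normal(size=(300, 2))            # intrinsic 2-D coordinates
X = Z @ rng.normal(size=(2, 10))         # embed into 10 dimensions
print(round(two_nn_id(X), 2))
```

Applied to both the feature matrix H and the target matrix Y, the same estimator yields the ID_H and ID_Y quantities compared in the abstract.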
Problem

Research questions and friction points this paper is trying to address.

Analyzing geometric collapse in neural regression and its impact on generalization
Investigating intrinsic dimension mismatch between features and regression targets
Identifying over-compressed and under-compressed regimes for performance optimization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Intrinsic dimension analysis of neural regression geometry
Identification of the over-compressed regime where ID_H < ID_Y
Feature dimensionality adjustment to enhance generalization
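The regime distinction above reduces to comparing ID_H against ID_Y. A minimal decision rule can be sketched as below; the tolerance band `tol` is an assumption introduced here for illustration, not the paper's dimension-matching criterion:

```python
def compression_regime(id_h: float, id_y: float, tol: float = 0.5) -> str:
    """Classify the feature-compression regime by comparing estimated
    intrinsic dimensions of features (id_h) and targets (id_y).
    `tol` is a hypothetical tolerance band around exact matching."""
    if id_h < id_y - tol:
        return "over-compressed: expand feature dimensionality"
    if id_h > id_y + tol:
        return "under-compressed: compress features further"
    return "matched: ID_H ~ ID_Y"

print(compression_regime(2.0, 5.0))  # features collapsed below target dimension
print(compression_regime(9.0, 5.0))  # features retain excess dimensions
```

In practice one would feed this rule the estimates produced by an intrinsic-dimension estimator on the last-layer features and the regression targets.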