Geometrically Constrained Outlier Synthesis

πŸ“… 2026-03-09
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This work addresses the overconfidence of deep neural networks on out-of-distribution (OOD) samples, which undermines model reliability. The authors propose a training-time regularization framework that improves OOD discrimination by generating geometrically constrained synthetic anomalies outside the feature-manifold structure and coupling them with contrastive regularization. By combining manifold geometry with adaptive margin control, the method synthesizes informative boundary outliers; it further introduces a conformal inference mechanism built on the dominant-variance subspace, quantile shells of a nonconformity score, and Mahalanobis or energy-based scoring. The approach provides formal error guarantees while significantly outperforming existing methods on near-OOD benchmarks, yielding more reliable and theoretically verifiable OOD detection.

πŸ“ Abstract
Deep neural networks for image classification often exhibit overconfidence on out-of-distribution (OOD) samples. To address this, we introduce Geometrically Constrained Outlier Synthesis (GCOS), a training-time regularization framework aimed at improving OOD robustness during inference. GCOS addresses a limitation of prior synthesis methods by generating virtual outliers in the hidden feature space that respect the learned manifold structure of in-distribution (ID) data. The synthesis proceeds in two stages: (i) a dominant-variance subspace extracted from the training features identifies geometrically informed, off-manifold directions; (ii) a conformally inspired shell, defined by the empirical quantiles of a nonconformity score from a calibration set, adaptively controls the synthesis magnitude to produce boundary samples. The shell ensures that generated outliers are neither trivially detectable nor indistinguishable from in-distribution data, facilitating smoother learning of robust features. This is combined with a contrastive regularization objective that promotes separability of ID and OOD samples in a chosen score space, such as Mahalanobis or energy-based. Experiments demonstrate that GCOS outperforms state-of-the-art methods using standard energy-based inference on near-OOD benchmarks, defined as tasks where outliers share the same semantic domain as in-distribution data. As an exploratory extension, the framework naturally transitions to conformal OOD inference, which translates uncertainty scores into statistically valid p-values and enables thresholds with formal error guarantees, providing a pathway toward more predictable and reliable OOD detection.
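The two-stage synthesis described in the abstract can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' implementation: it assumes (hypothetically) an SVD-based dominant-variance subspace, a Mahalanobis nonconformity score, and a simple step-size search to land inside the quantile shell.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for in-distribution (ID) hidden features.
X = rng.normal(size=(700, 16)) @ rng.normal(size=(16, 16))
train, calib = X[:500], X[500:]          # calibration split for the shell

# Stage (i): dominant-variance subspace of the training features via SVD;
# the remaining low-variance directions serve as off-manifold moves.
mu = train.mean(axis=0)
_, _, Vt = np.linalg.svd(train - mu, full_matrices=False)
k = 4                                    # hypothetical subspace rank
off_manifold = Vt[k:]

# Nonconformity score: Mahalanobis distance under a Gaussian fit to train.
cov = np.cov(train - mu, rowvar=False) + 1e-6 * np.eye(16)
prec = np.linalg.inv(cov)

def score(Z):
    d = Z - mu
    return np.sqrt(np.einsum("ni,ij,nj->n", d, prec, d))

# Stage (ii): shell between two empirical quantiles of calibration scores,
# so synthesized points are neither trivial nor indistinguishable from ID.
lo, hi = np.quantile(score(calib), [0.90, 0.99])

def synthesize(x, n_dirs=50):
    """Push an ID point along a random off-manifold direction, growing the
    step until the nonconformity score lands inside the quantile shell."""
    for _ in range(n_dirs):
        d = off_manifold.T @ rng.normal(size=off_manifold.shape[0])
        d /= np.linalg.norm(d)
        for mag in np.linspace(0.1, 10.0, 100):
            cand = x + mag * d
            if lo <= score(cand[None])[0] <= hi:
                return cand
    return None  # no shell crossing found for this point

outliers = [v for v in (synthesize(x) for x in train[:50]) if v is not None]
```

In the paper, such virtual outliers would then feed the contrastive regularization objective that separates ID and synthetic OOD samples in the chosen score space; that training loop is omitted here.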
Problem

Research questions and friction points this paper is trying to address.

out-of-distribution detection
overconfidence
near-OOD
robustness
image classification
Innovation

Methods, ideas, or system contributions that make the work stand out.

Geometrically Constrained Outlier Synthesis
out-of-distribution detection
manifold-aware synthesis
conformal inference
contrastive regularization
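The conformal inference extension listed above turns a nonconformity score into a statistically valid p-value. A minimal sketch of the standard split-conformal construction, using toy calibration scores rather than the paper's pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)

# Nonconformity scores of a held-out ID calibration set (toy stand-in;
# in GCOS this could be a Mahalanobis or energy score).
calib_scores = rng.normal(size=200)

def conformal_p(test_score, calib):
    """Split-conformal p-value: proportion of calibration scores at least
    as large as the test score, with the usual +1 correction. For an
    exchangeable ID test point, P(p <= alpha) <= alpha holds in finite
    samples, which is the formal error guarantee the abstract refers to."""
    return (1 + np.sum(calib >= test_score)) / (len(calib) + 1)

alpha = 0.05                        # target false-alarm rate
p = conformal_p(3.5, calib_scores)
flag_as_ood = p <= alpha            # thresholding with a validity guarantee
```

Thresholding these p-values at a chosen alpha is what gives conformal OOD detection its predictable false-alarm behavior, in contrast to raw score thresholds tuned by hand.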
πŸ”Ž Similar Papers
No similar papers found.
Daniil Karzanov
EPFL
machine learning, deep learning, reinforcement learning
Marcin Detyniecki
AXA AI Research; TRAIL, Sorbonne UniversitΓ©, Paris, France; Polish Academy of Science, IBS PAN, Warsaw, Poland