🤖 AI Summary
This paper proposes a general-purpose framework for uncertainty-based rejection of out-of-distribution (OOD) inputs that requires neither OOD supervision nor modality-specific augmentation. Methodologically, it introduces three key innovations: (1) expansion-based data augmentation in a latent space that explicitly generates high-quality OOD samples; (2) a shared-MLP architecture with multiple expert heads that jointly models predictive diversity and uncertainty; and (3) an empirical-trial mechanism that filters the augmented points before pseudo-labeling to enhance rejection reliability. Evaluated on tabular data, the method consistently outperforms state-of-the-art approaches, achieving significant improvements in both OOD detection accuracy and prediction-confidence calibration. These results demonstrate strong generalization and practical applicability across diverse OOD scenarios while eliminating reliance on labeled OOD data or domain-specific augmentations.
📝 Abstract
Expansive Matching of Experts (EMOE) is a novel method that uses support-expanding, extrapolatory pseudo-labeling to improve prediction and uncertainty-based rejection on out-of-distribution (OOD) points. We propose an expansive data-augmentation technique that generates OOD instances in a latent space, together with an empirical-trial-based approach to filter the augmented expansive points before pseudo-labeling. EMOE uses a diverse set of base experts as pseudo-labelers on the augmented data, improving OOD performance through a shared MLP with multiple heads (one per expert). We demonstrate that EMOE achieves superior performance compared to state-of-the-art methods on tabular data.
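The two components the abstract names, latent-space expansive augmentation and a shared MLP with one head per expert, can be sketched in NumPy. Everything below is an illustrative assumption rather than the paper's actual implementation: the expand-away-from-centroid rule, the class and function names, the ReLU trunk, and the use of head disagreement as the uncertainty signal are all stand-ins for the real EMOE design.

```python
import numpy as np

rng = np.random.default_rng(0)

def expand_latents(z, alpha=2.0):
    """Hypothetical expansive augmentation: push latent codes away from
    their centroid by a factor alpha > 1, so the augmented points land
    outside the support of the training latents."""
    center = z.mean(axis=0, keepdims=True)
    return center + alpha * (z - center)

class SharedMLPMultiHead:
    """Shared trunk with one linear head per expert (sketch only)."""
    def __init__(self, d_in, d_hidden, n_heads, rng):
        self.W1 = rng.normal(0.0, 0.1, (d_in, d_hidden))
        self.heads = [rng.normal(0.0, 0.1, (d_hidden, 1)) for _ in range(n_heads)]

    def forward(self, x):
        h = np.maximum(x @ self.W1, 0.0)                   # shared ReLU trunk
        return np.hstack([h @ Wh for Wh in self.heads])    # (n, n_heads)

    def predict_with_uncertainty(self, x):
        # Mean over heads is the prediction; head disagreement (std) serves
        # as an uncertainty score for rejecting OOD points.
        preds = self.forward(x)
        return preds.mean(axis=1), preds.std(axis=1)

# In-distribution latents, and expansive OOD candidates for pseudo-labeling.
z = rng.normal(size=(32, 4))
z_ood = expand_latents(z, alpha=2.0)

model = SharedMLPMultiHead(d_in=4, d_hidden=16, n_heads=5, rng=rng)
mean, std = model.predict_with_uncertainty(z_ood)
```

In this sketch each expert would pseudo-label the expanded points and train its own head, so high disagreement among heads flags inputs to reject; the filtering step from the abstract would then discard expanded points on which pseudo-labels prove unreliable.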