Selective Prior Synchronization via SYNC Loss

📅 2026-02-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of unreliable selective prediction in deep neural networks under high uncertainty by introducing SYNC, a novel loss function that explicitly incorporates a post-hoc selection mechanism, such as softmax response, during training to generate selective priors. Unlike prior approaches, SYNC enables end-to-end joint optimization between front-end selective strategies (e.g., SelectiveNet) and back-end decision policies, aligning selection and prediction objectives from the outset. Evaluated on multiple benchmarks, including CIFAR-100, ImageNet-100, and Stanford Cars, the proposed method achieves new state-of-the-art performance in selective prediction while also improving model generalization.

📝 Abstract
Prediction under uncertainty is a critical requirement for deep neural networks to succeed responsibly. This paper focuses on selective prediction, which allows DNNs to make informed decisions about whether to predict or abstain based on the uncertainty of their predictions. Current methods are either ad-hoc, such as SelectiveNet, which modify the network architecture or objective function, or post-hoc, such as softmax response, which achieve selective prediction by analyzing the model's probabilistic outputs. We observe that post-hoc methods implicitly generate uncertainty information, termed the selective prior, which has traditionally been used only during inference. We argue that the selective prior provided by the selection mechanism is equally vital during the training stage. We therefore propose the SYNC loss, a novel integration of ad-hoc and post-hoc methods. Specifically, our approach incorporates the softmax response into the training process of SelectiveNet, enhancing its selective prediction capabilities by examining the selective prior. Evaluated across various datasets, including CIFAR-100, ImageNet-100, and Stanford Cars, our method not only improves the model's generalization but also surpasses previous work in selective prediction performance, setting new state-of-the-art benchmarks.
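To make the idea concrete, the following is a minimal, hypothetical sketch of how a SYNC-style objective could couple SelectiveNet's coverage-constrained selective risk with the softmax-response confidence used as a selective prior. The function name `sync_style_loss`, the MSE alignment term `align`, and all hyperparameter values are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def sync_style_loss(logits, g, labels, target_cov=0.8, lam=32.0, alpha=0.5):
    """Illustrative SYNC-style objective (assumed form, not the paper's exact loss).

    logits : (n, k) class scores from the prediction head
    g      : (n,) selection-head outputs in [0, 1] (SelectiveNet-style)
    labels : (n,) integer class labels
    """
    p = softmax(logits)
    n = logits.shape[0]
    # Per-sample cross-entropy of the prediction head.
    nll = -np.log(p[np.arange(n), labels] + 1e-12)
    # SelectiveNet components: selective risk under an empirical
    # coverage constraint enforced by a quadratic penalty.
    cov = g.mean()
    sel_risk = (g * nll).sum() / max(cov * n, 1e-12)
    cov_pen = lam * max(0.0, target_cov - cov) ** 2
    # Softmax response as the "selective prior": the max class probability.
    prior = p.max(axis=1)
    # Assumed coupling term aligning the selection head with the prior.
    align = alpha * np.mean((g - prior) ** 2)
    return sel_risk + cov_pen + align
```

During training, the alignment term pulls the learned selection scores toward the confidence signal that softmax response would otherwise only exploit at inference time, which is the integration of ad-hoc and post-hoc methods the abstract describes.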
Problem

Research questions and friction points this paper is trying to address.

selective prediction
uncertainty
selective prior
deep neural networks
abstention
Innovation

Methods, ideas, or system contributions that make the work stand out.

Selective Prediction
Selective Prior
SYNC Loss
Uncertainty Estimation
Deep Neural Networks