Boosting Open Set Recognition Performance through Modulated Representation Learning

📅 2025-05-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Open-set recognition (OSR) faces a fundamental challenge in jointly optimizing instance-level and semantic-level representation learning. Existing approaches rely on fixed-temperature logit scaling, which constrains comprehensive exploration of the feature spectrum. To address this, we propose a temperature-modulated representation learning framework, introducing the first negative cosine temperature scheduling strategy. This strategy dynamically adjusts neighborhood sensitivity without incurring additional computational cost, requiring neither auxiliary negative samples nor explicit regularization terms, and progressively refines decision boundaries. Our method jointly optimizes cross-entropy loss and contrastive learning loss, achieving significant improvements in both open-set and closed-set recognition performance across standard OSR benchmarks and challenging semantic-shift scenarios. Importantly, the proposed mechanism is modular and plug-and-play, seamlessly integrating with diverse mainstream OSR frameworks while preserving their architectural integrity.

📝 Abstract
The open set recognition (OSR) problem aims to identify test samples from novel semantic classes that are not part of the training classes, a task that is crucial in many practical scenarios. However, existing OSR methods apply a constant scaling factor (the temperature) to the logits before applying a loss function, which hinders the model from exploring both ends of the spectrum in representation learning -- from instance-level to semantic-level features. In this paper, we address this problem by enabling temperature-modulated representation learning using our novel negative cosine scheduling scheme. Our scheduling lets the model form a coarse decision boundary at the beginning of training by focusing on fewer neighbors, and gradually prioritizes more neighbors to smooth out rough edges. This gradual task switching leads to a richer and more generalizable representation space. While other OSR methods benefit from including regularization or auxiliary negative samples, such as with mix-up, thereby adding a significant computational overhead, our scheme can be folded into any existing OSR method with no overhead. We implement the proposed scheme on top of a number of baselines, using both cross-entropy and contrastive loss functions as well as a few other OSR methods, and find that our scheme boosts both the OSR performance and the closed set performance in most cases, especially on the tougher semantic shift benchmarks.
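The abstract does not give the exact functional form of the negative cosine schedule, but a minimal sketch is easy to write down. The endpoint temperatures (`tau_start`, `tau_end`) and the direction of the sweep below are assumptions for illustration: a low starting temperature sharpens the softmax so each sample attends to few neighbors (coarse boundary), and the temperature then rises along a negative cosine to smooth the boundary.

```python
import math

def cosine_temperature(step: int, total_steps: int,
                       tau_start: float = 0.1, tau_end: float = 1.0) -> float:
    """Negative-cosine temperature schedule (illustrative sketch only;
    the paper's exact endpoints and sweep direction are not given here).

    tau(t) = tau_start + (tau_end - tau_start) * (1 - cos(pi * t / T)) / 2

    The schedule starts at tau_start (sharp softmax, few effective
    neighbors) and rises smoothly to tau_end (soft softmax, many
    neighbors), with zero extra cost per training step.
    """
    progress = min(max(step / total_steps, 0.0), 1.0)
    return tau_start + (tau_end - tau_start) * (1.0 - math.cos(math.pi * progress)) / 2.0
```

Because the schedule is a closed-form function of the training step, it adds no parameters, no auxiliary samples, and no measurable overhead, which is consistent with the "folds into any existing OSR method" claim.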
Problem

Research questions and friction points this paper is trying to address.

Identifying novel class samples not in training data
Improving representation learning with modulated temperature scaling
Enhancing OSR performance without computational overhead
Innovation

Methods, ideas, or system contributions that make the work stand out.

Temperature-modulated representation learning via negative cosine scheduling
Gradual task switching for richer representation space
Folds into existing OSR methods with no computational overhead
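To make the "gradual task switching" concrete, the sketch below shows temperature-scaled cross-entropy, the place where a scheduled temperature would be dropped in. The function name and the numerically stable log-sum-exp formulation are my own illustration, not code from the paper; at low temperature the loss on a correctly ranked sample collapses toward zero (sharp, instance-focused), while higher temperatures spread probability mass across more classes.

```python
import math

def temperature_scaled_ce(logits: list[float], target: int, tau: float) -> float:
    """Cross-entropy on temperature-scaled logits (illustrative sketch).

    Divides the logits by tau before the softmax, using the
    log-sum-exp trick for numerical stability. A scheduled tau
    (e.g. a negative-cosine sweep) would be passed in per step.
    """
    scaled = [z / tau for z in logits]
    m = max(scaled)                                   # stabilizer
    log_sum = m + math.log(sum(math.exp(z - m) for z in scaled))
    return log_sum - scaled[target]                   # -log softmax[target]
```

Sweeping `tau` over training, rather than fixing it, is the whole point of the modulation: the same loss function interpolates between instance-level and semantic-level representation learning.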
Amit Kumar Kundu
Department of Electrical and Computer Engineering, University of Maryland, College Park, MD 20742
Machine Learning · Representation Learning
Vaishnavi Patil
Department of Computer Science, University of Maryland, College Park, MD 20742
Joseph Jaja
Department of Electrical and Computer Engineering, University of Maryland, College Park, MD 20742