A Contrastive Teacher-Student Framework for Novelty Detection under Style Shifts

📅 2025-01-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
Environmental style shifts degrade novelty detection performance, as models trained without out-of-distribution (OOD) samples conflate stylistic and semantic features. Method: We propose a style-aware auxiliary OOD data construction mechanism that synthesizes samples with style similarity but semantic dissimilarity; further, we design a core-feature-guided contrastive teacher-student distillation framework that explicitly disentangles semantic from style features via task-driven knowledge transfer. Contribution/Results: This is the first work to jointly integrate style-controllable augmentation, contrastive learning, and knowledge distillation for robust novelty detection. Evaluated on multiple synthetic and real-world benchmarks, our method consistently outperforms nine state-of-the-art approaches, achieving an average 12.6% AUROC improvement under style shift. It effectively mitigates reliance on style shortcuts and enhances cross-style generalization.

📝 Abstract
There have been several efforts to improve Novelty Detection (ND) performance. However, ND methods often suffer significant performance drops under minor distribution shifts caused by changes in the environment, known as style shifts. This challenge arises from the ND setup, where the absence of out-of-distribution (OOD) samples during training causes the detector to be biased toward the dominant style features in the in-distribution (ID) data. As a result, the model mistakenly learns to correlate style with core features and uses this shortcut for detection. Robust ND is crucial for real-world applications such as autonomous driving and medical imaging, where test samples may have styles different from the training data. Motivated by this, we propose a robust ND method that crafts an auxiliary OOD set with style features similar to the ID set but with different core features. A task-based knowledge distillation strategy is then used to distinguish core features from style features and to encourage the model to rely on core features when discriminating between the crafted OOD set and the ID set. We verified the effectiveness of our method through extensive experimental evaluations on several datasets, including synthetic and real-world benchmarks, against nine different ND methods.
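The core idea in the abstract (craft an auxiliary OOD set that shares style but not semantics with the ID data, then score novelty by teacher-student discrepancy) can be illustrated with a minimal toy sketch. This is an assumption-laden simplification, not the authors' pipeline: linear teacher and student, synthetic data where ID and crafted OOD share a style component and differ only in a core component, and a hinge loss that pulls the student toward the teacher on ID samples while pushing it away on crafted OOD samples.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data (illustrative assumption): all samples share the same "style"
# component; ID and crafted auxiliary-OOD samples differ only in their
# "core" (semantic) component.
d_style, d_core, n = 4, 4, 200
style_mean = rng.normal(size=d_style)          # shared style pattern
core_id = np.array([1.0, -1.0, 1.0, -1.0])    # ID core pattern
core_ood = -core_id                            # crafted OOD: same style, flipped core

def sample(core, n):
    style = style_mean + 0.1 * rng.normal(size=(n, d_style))
    core_part = core + 0.1 * rng.normal(size=(n, d_core))
    return np.hstack([style, core_part])

x_id, x_ood = sample(core_id, n), sample(core_ood, n)

# Linear teacher; the student starts as a near-copy of the teacher.
# Only the residual A = W_student - W_teacher enters the discrepancy
# score ||(W_s - W_t) x||^2, so we train A directly.
d = d_style + d_core
A = 0.01 * rng.normal(size=(d, d))
margin, lr = 1.0, 0.02

def scores(A, x):
    return np.sum((x @ A.T) ** 2, axis=1)      # per-sample ||A x||^2

for _ in range(400):
    # Pull the student toward the teacher on ID samples ...
    grad = 2.0 * A @ (x_id.T @ x_id) / n
    # ... and push it away on crafted OOD samples (hinge with a margin),
    # so the score keys on core, not style, features.
    active = scores(A, x_ood) < margin
    xa = x_ood[active]
    if len(xa):
        grad -= 2.0 * A @ (xa.T @ xa) / n
    A -= lr * grad

print("mean ID score :", scores(A, x_id).mean())
print("mean OOD score:", scores(A, x_ood).mean())
```

Because ID and crafted OOD share the style subspace, the only way the student can satisfy both loss terms is to make the discrepancy depend on the core directions, which is the disentanglement effect the paper attributes to its distillation objective.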
Problem

Research questions and friction points this paper is trying to address.

Novelty Detection
Environmental Variability
Style-dependent Bias
Innovation

Methods, ideas, or system contributions that make the work stand out.

Teacher-Student Contrastive Framework
Style-Invariant Novelty Detection
Adaptive Teaching Strategy
👥 Authors
Hossein Mirzaei
PhD student @ Mackenzie Mathis Lab
Machine Learning
Mojtaba Nafez
Master's Student, Department of Computer Engineering, Sharif University of Technology
Machine Learning
Moein Madadi
Sharif University of Technology, Iran
Arad Maleki
Sharif University of Technology, Iran
Mahdi Hajialilue
Sharif University of Technology, Iran
Z. Taghavi
Ludwig-Maximilians-Universität München (LMU), Germany
Sepehr Rezaee
Shahid Beheshti University, Iran
Ali Ansari
PhD student at Temple University
NLP, Data Mining, VLM
Bahar Dibaei Nia
Sharif University of Technology, Iran
Kian Shamsaie
Sharif University of Technology, Iran
Mohammadreza Salehi
Sharif University of Technology, Iran
Mackenzie W. Mathis
Swiss Federal Institute of Technology in Lausanne (EPFL)
Systems Neuroscience, Sensorimotor Control, Computer Vision, Machine Learning
M. Baghshah
Ludwig-Maximilians-Universität München (LMU), Germany
Mohammad Sabokrou
Okinawa Institute of Science and Technology
Machine Learning, Computer Vision, Trustworthy AI
M. H. Rohban
Sharif University of Technology, Iran