OASIS: Open-world Adaptive Self-supervised and Imbalanced-aware System

📅 2025-08-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address label shift, covariate shift, and emerging unknown classes under severe pre-training class imbalance in open-world settings, this paper proposes an imbalance-aware continual adaptation framework. Methodologically, it introduces a class-balanced contrastive pre-training strategy, coupled with dynamic pseudo-label generation and a selective parameter activation mechanism that updates only critical parameters during fine-tuning, substantially reducing computational overhead. Theoretical analysis and empirical evaluation demonstrate robust representation learning for both minority and unknown classes. On multiple open-world benchmarks, the approach surpasses state-of-the-art methods in both classification accuracy (including a +12.3% gain on minority classes) and inference efficiency (a 37% reduction in FLOPs). To the authors' knowledge, this is the first work to achieve efficient and robust continual learning under imbalanced pre-training conditions.
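To make the class-balanced contrastive idea concrete, here is a minimal sketch of a supervised contrastive loss whose anchors are reweighted by inverse class frequency, so minority-class samples contribute more to the objective. This is an illustrative reconstruction under stated assumptions, not the paper's actual loss; the function name, the inverse-frequency weighting scheme, and the temperature value are all assumptions.

```python
import numpy as np

def class_balanced_supcon_loss(embeddings, labels, temperature=0.1):
    """Supervised contrastive loss with inverse-frequency anchor weights.

    Hypothetical sketch: each anchor's loss term is scaled by the inverse
    frequency of its class, so minority-class anchors are up-weighted.
    """
    # L2-normalize embeddings and compute pairwise similarity logits.
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / temperature

    n = len(labels)
    counts = {c: int(np.sum(labels == c)) for c in np.unique(labels)}
    # Inverse-frequency weight for each anchor's class.
    weights = np.array([n / (len(counts) * counts[y]) for y in labels])

    loss = 0.0
    for i in range(n):
        positives = [j for j in range(n) if j != i and labels[j] == labels[i]]
        if not positives:
            continue  # anchor has no positive pair; skip it
        # Log-softmax of each positive's similarity over all other samples.
        logits = np.array([sim[i, j] for j in range(n) if j != i])
        log_denom = np.log(np.sum(np.exp(logits)))
        log_probs = [sim[i, j] - log_denom for j in positives]
        loss += -weights[i] * np.mean(log_probs)
    return loss / n
```

Because the weight grows as a class shrinks, the gradient signal for underrepresented classes is amplified during pre-training, which is the stated goal of the class-balanced strategy.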

📝 Abstract
The expansion of machine learning into dynamic environments presents challenges in handling open-world problems where label shift, covariate shift, and unknown classes emerge. Post-training methods have been explored to address these challenges, adapting models to newly emerging data. However, these methods struggle when the initial pre-training is performed on class-imbalanced datasets, limiting generalization to minority classes. To address this, we propose a method that effectively handles open-world problems even when pre-training is conducted on imbalanced data. Our contrastive-based pre-training approach enhances classification performance, particularly for underrepresented classes. Our post-training mechanism generates reliable pseudo-labels, improving model robustness against open-world problems. We also introduce selective activation criteria to optimize the post-training process, reducing unnecessary computation. Extensive experiments demonstrate that our method significantly outperforms state-of-the-art adaptation techniques in both accuracy and efficiency across diverse open-world scenarios.
Problem

Research questions and friction points this paper is trying to address.

Handles open-world problems with imbalanced pre-training data
Improves classification for underrepresented classes using contrastive learning
Generates reliable pseudo-labels to enhance model robustness
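The pseudo-labeling step above can be sketched as a confidence-thresholded assignment that also flags candidate unknown classes. This is a generic illustration, not the paper's mechanism; the threshold values and the use of -1 / None markers are assumptions.

```python
import numpy as np

def generate_pseudo_labels(probs, known_thresh=0.9, unknown_thresh=0.4):
    """Assign pseudo-labels from predicted class probabilities.

    Hypothetical sketch:
      - max prob >= known_thresh  -> confident pseudo-label (argmax class)
      - max prob <= unknown_thresh -> flagged as candidate unknown class (-1)
      - otherwise                  -> left unlabeled (None), excluded
    """
    labels = []
    for p in probs:
        conf = p.max()
        if conf >= known_thresh:
            labels.append(int(p.argmax()))   # reliable pseudo-label
        elif conf <= unknown_thresh:
            labels.append(-1)                # near-uniform: possible unknown
        else:
            labels.append(None)              # too ambiguous to trust
    return labels
```

Keeping only high-confidence pseudo-labels while routing near-uniform predictions to an unknown bucket is one common way to limit confirmation bias during post-training adaptation.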
Innovation

Methods, ideas, or system contributions that make the work stand out.

Contrastive pre-training for imbalanced data
Pseudo-label generation for model robustness
Selective activation to reduce computation
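The selective-activation idea, updating only the most critical parameters at each step, can be sketched as a gradient-magnitude mask. The top-fraction criterion and function signature here are assumptions for illustration; the paper's actual selection rule is not reproduced.

```python
import numpy as np

def selective_update(params, grads, lr=0.01, active_frac=0.2):
    """Gradient step that touches only the highest-magnitude gradients.

    Hypothetical sketch of selective activation: parameters outside the
    top `active_frac` fraction by |gradient| are frozen for this step,
    saving compute in the post-training phase.
    """
    flat = np.abs(grads).ravel()
    k = max(1, int(active_frac * flat.size))
    thresh = np.partition(flat, -k)[-k]   # k-th largest |gradient|
    mask = np.abs(grads) >= thresh        # 1 for active params, 0 otherwise
    return params - lr * grads * mask, mask
```

Only the masked coordinates move, so most of the model stays untouched each step, which is one plausible reading of how selective activation trims unnecessary computation.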
Authors
Miru Kim (Soongsil University, Seoul, South Korea)
Mugon Joe (Soongsil University, Seoul, South Korea)
Minhae Kwon (Associate Professor, Soongsil University; research interests: Reinforcement Learning, Computational Neuroscience, Autonomous Driving, Federated Learning)