🤖 AI Summary
In novel class discovery (NCD), the relationship between the number of known classes and recognition performance remains poorly understood. This work establishes a programmatically controllable experimental framework based on the dSprites dataset to systematically investigate how the number of known classes affects NCD performance. Leveraging contrastive representation learning and clustering analysis, it empirically demonstrates that new-class identification accuracy exhibits diminishing marginal returns as the number of known classes increases, with a pronounced saturation point beyond which further annotation yields negligible gains. This finding provides a quantifiable basis for balancing annotation cost against performance in NCD, moving beyond heuristic choices of known-class count. Results are consistent across backbone architectures and clustering configurations, supporting the robustness and generalizability of the conclusions.
📝 Abstract
Novel class discovery is essential for ML models to adapt to evolving real-world data, with applications ranging from scientific discovery to robotics. However, real-world datasets contain complex and entangled factors of variation, making a systematic study of class discovery difficult. As a result, many fundamental questions remain unanswered about why and when new class discoveries are more likely to succeed. To address this, we propose a simple controlled experimental framework using the dSprites dataset with procedurally generated modifying factors, which allows us to investigate what influences successful class discovery. In particular, we study the relationship between the number of known/unknown classes and discovery performance, as well as the impact of known-class 'coverage' on discovering new classes. Our empirical results indicate that the benefit of additional known classes reaches a saturation point beyond which discovery performance plateaus. This pattern of diminishing returns across settings offers practitioners a basis for cost-benefit analysis and a starting point for more rigorous future research on class discovery in complex real-world datasets.
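The discovery-performance measurement described above is typically computed as clustering accuracy on the unknown classes: cluster the learned embeddings, then find the best one-to-one mapping between clusters and ground-truth classes. The following is a minimal sketch of that evaluation step only; the synthetic Gaussian "embeddings", the 4-class setup, and the use of KMeans with Hungarian matching are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np
from sklearn.cluster import KMeans
from scipy.optimize import linear_sum_assignment

def clustering_accuracy(y_true, y_pred):
    """Accuracy under the best one-to-one cluster-to-class assignment
    (Hungarian matching), the standard NCD clustering metric."""
    n = max(y_true.max(), y_pred.max()) + 1
    # Contingency table: counts[i, j] = samples of class i placed in cluster j
    counts = np.zeros((n, n), dtype=int)
    for t, p in zip(y_true, y_pred):
        counts[t, p] += 1
    # Maximize matched counts by minimizing the negated table
    rows, cols = linear_sum_assignment(-counts)
    return counts[rows, cols].sum() / len(y_true)

# Toy stand-in for learned embeddings of unknown-class samples
# (well-separated Gaussian blobs; purely illustrative)
rng = np.random.default_rng(0)
centers = rng.normal(size=(4, 16)) * 10.0
y_true = rng.integers(0, 4, size=400)
X = centers[y_true] + rng.normal(size=(400, 16))

y_pred = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
acc = clustering_accuracy(y_true, y_pred)
```

Sweeping the number of known classes used during representation learning and plotting this accuracy is one way the saturation behavior reported above could be traced.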