Generalized Category Discovery via Reciprocal Learning and Class-Wise Distribution Regularization

📅 2025-06-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
Generalized Category Discovery (GCD) aims to jointly identify both known and novel classes in unlabeled data containing instances of both. Existing parametric approaches, however, suffer from degraded discriminability on base classes because they rely on unreliable self-supervised signals. This paper proposes a Reciprocal Learning Framework (RLF) that jointly refines features and pseudo-labels through mutual feedback between the main branch and an auxiliary base-classification branch. To mitigate base-class bias and raise prediction confidence on novel classes, it further introduces Class-wise Distribution Regularization (CDR), which enforces consistency of class probability distributions via KL divergence. The method integrates soft-label feedback and pseudo-label distillation with negligible additional computation. Evaluated on seven GCD benchmarks, the combined method, RLCD, achieves state-of-the-art performance: it substantially improves novel-class accuracy while preserving base-class discrimination, demonstrating effective joint learning without a performance trade-off.
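The KL-divergence regularization that CDR is described as using can be sketched in a minimal form. This is an illustrative assumption of the general technique, not the paper's exact formulation: here the batch-mean predicted class distribution is pulled toward a given target distribution.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two discrete probability distributions."""
    p = np.clip(p, eps, 1.0)
    q = np.clip(q, eps, 1.0)
    return float(np.sum(p * np.log(p / q)))

def distribution_regularizer(probs, target):
    """Illustrative class-distribution regularizer: penalize the
    divergence between the batch-mean predicted class distribution
    and a target distribution (e.g. uniform). The batch-mean
    aggregation and target choice are assumptions for illustration."""
    mean_pred = probs.mean(axis=0)  # average prediction over the batch
    return kl_divergence(mean_pred, target)
```

With a uniform target, the regularizer is zero when predictions are balanced across classes and grows as predictions collapse toward a subset of (e.g. base) classes, which matches the stated goal of countering base-class bias.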

📝 Abstract
Generalized Category Discovery (GCD) aims to identify unlabeled samples by leveraging the base knowledge from labeled ones, where the unlabeled set consists of both base and novel classes. Since clustering methods are time-consuming at inference, parametric-based approaches have become more popular. However, recent parametric-based methods suffer from inferior base discrimination due to unreliable self-supervision. To address this issue, we propose a Reciprocal Learning Framework (RLF) that introduces an auxiliary branch devoted to base classification. During training, the main branch filters the pseudo-base samples to the auxiliary branch. In response, the auxiliary branch provides more reliable soft labels for the main branch, leading to a virtuous cycle. Furthermore, we introduce Class-wise Distribution Regularization (CDR) to mitigate the learning bias towards base classes. CDR essentially increases the prediction confidence of the unlabeled data and boosts the novel class performance. Combined with both components, our proposed method, RLCD, achieves superior performance in all classes with negligible extra computation. Comprehensive experiments across seven GCD datasets validate its superiority. Our codes are available at https://github.com/APORduo/RLCD.
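The abstract's routing step, where the main branch filters pseudo-base samples to the auxiliary branch, can be sketched as a simple confidence-gated selection. The threshold value and the max-probability confidence rule are assumptions for illustration, not the paper's exact criterion.

```python
import numpy as np

def filter_pseudo_base(probs, num_base, threshold=0.7):
    """Select samples to route to the auxiliary base classifier.

    probs     : (N, C) softmax outputs of the main branch, where the
                first `num_base` columns correspond to base classes.
    num_base  : number of base (labeled) classes.
    threshold : illustrative confidence cutoff (assumed, not from paper).

    Returns a boolean mask over the N samples: True where the top
    prediction is a base class and its confidence clears the threshold.
    """
    pred = probs.argmax(axis=1)   # predicted class index
    conf = probs.max(axis=1)      # confidence of that prediction
    return (pred < num_base) & (conf >= threshold)
```

The auxiliary branch would then train on the masked samples and feed its softened predictions back to the main branch as more reliable soft labels, closing the reciprocal loop the abstract describes.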
Problem

Research questions and friction points this paper is trying to address.

Identify unlabeled samples using labeled data knowledge
Improve base discrimination in parametric-based GCD methods
Mitigate learning bias towards base classes in GCD
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reciprocal Learning Framework enhances base classification
Class-wise Distribution Regularization reduces learning bias
Combined RLCD method improves all class performances
Duo Liu
School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University
Zhiquan Tan
Department of Mathematical Sciences, Tsinghua University
Linglan Zhao
Shanghai Jiao Tong University
Deep learning · Few-shot learning · Meta-learning
Zhongqiang Zhang
School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University
Xiangzhong Fang
School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University
Weiran Huang
School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University; Shanghai Innovation Institute; State Key Laboratory of General Artificial Intelligence, BIGAI