Cooperative Pseudo Labeling for Unsupervised Federated Classification

📅 2025-10-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the challenge of effective classification in unsupervised federated learning (UFL), where neither labeled data nor raw data sharing is permitted, by leveraging vision-language models (e.g., CLIP). To this end, the authors propose FedCoPL, a framework with two key innovations: (1) a federated collaborative pseudo-labeling mechanism, wherein the server calibrates the distribution of client-uploaded pseudo-labels to mitigate global class imbalance; and (2) a partial prompt aggregation protocol that decouples visual and textual prompt optimization: visual prompts are globally aggregated to enhance cross-client collaboration, while textual prompts remain locally updated to preserve client-specific personalization. Extensive experiments on multiple benchmarks show that FedCoPL outperforms existing UFL methods in classification accuracy. The source code is publicly available.
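The summary describes the server's role as calibrating client-uploaded pseudo-label distributions to mitigate global class imbalance, but does not give the exact calibration rule. A minimal sketch of one plausible realization, assuming the server rebalances toward a uniform global distribution and returns per-class reweighting factors (the function name and the uniform-target rule are illustrative, not the paper's actual scheme):

```python
def server_calibrate(client_dists, num_classes):
    """Hypothetical calibration: aggregate client pseudo-label counts
    and compute per-class factors that push the global pseudo-label
    distribution toward uniform. The paper's exact rule is not
    specified in this summary."""
    # Global pseudo-label counts across all clients.
    global_counts = [0.0] * num_classes
    for dist in client_dists:
        for c in range(num_classes):
            global_counts[c] += dist[c]
    total = sum(global_counts) or 1.0
    global_dist = [n / total for n in global_counts]

    target = 1.0 / num_classes  # uniform target distribution
    eps = 1e-8
    # Under-represented classes get weight > 1, over-represented < 1;
    # the server would redistribute these factors to clients.
    return [target / (p + eps) for p in global_dist]

# Example: three clients whose pseudo-labels over-represent class 0.
client_dists = [
    [80, 10, 10],
    [60, 20, 20],
    [70, 15, 15],
]
weights = server_calibrate(client_dists, num_classes=3)
```

Clients could then use the returned factors, e.g., to reweight or resample their pseudo-labeled data during local training.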

📝 Abstract
Unsupervised Federated Learning (UFL) aims to collaboratively train a global model across distributed clients without sharing data or accessing label information. Previous UFL works have predominantly focused on representation learning and clustering tasks. Recently, vision-language models (e.g., CLIP) have gained significant attention for their powerful zero-shot prediction capabilities. Leveraging this advancement, classification problems that were previously infeasible under the UFL paradigm now present promising new opportunities, yet remain largely unexplored. In this paper, we extend UFL to the classification problem with CLIP for the first time and propose a novel method, **Fed**erated **Co**operative **P**seudo **L**abeling (**FedCoPL**). Specifically, clients estimate and upload their pseudo label distribution, and the server adjusts and redistributes them to avoid global imbalance among classes. Moreover, we introduce a partial prompt aggregation protocol for effective collaboration and personalization. In particular, visual prompts containing general image features are aggregated at the server, while text prompts encoding personalized knowledge are retained locally. Extensive experiments demonstrate the superior performance of our FedCoPL compared to baseline methods. Our code is available at https://github.com/krumpguo/FedCoPL.
Problem

Research questions and friction points this paper is trying to address.

Extends unsupervised federated learning to classification using CLIP
Addresses global class imbalance via pseudo label distribution adjustment
Introduces partial prompt aggregation for collaboration and personalization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Federated Cooperative Pseudo Labeling for classification
Distributing adjusted pseudo labels to avoid class imbalance
Partial prompt aggregation for collaboration and personalization
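The partial prompt aggregation protocol named above splits the learnable prompts: visual prompts are averaged at the server while text prompts stay on each client. A minimal sketch under that assumption, using plain lists in place of real prompt tensors and a FedAvg-style mean for the shared component (all names here are illustrative, not the paper's API):

```python
def partial_prompt_aggregate(client_states):
    """Hypothetical one-round aggregation: average only the visual
    prompt vectors across clients; each client retains its own
    personalized text prompt unchanged."""
    n = len(client_states)
    dim = len(client_states[0]["visual_prompt"])
    # FedAvg-style elementwise mean over the shared (visual) component.
    avg_visual = [
        sum(s["visual_prompt"][i] for s in client_states) / n
        for i in range(dim)
    ]
    # Broadcast: every client receives the shared visual prompt but
    # keeps its local text prompt, preserving personalization.
    return [
        {"visual_prompt": list(avg_visual),
         "text_prompt": s["text_prompt"]}
        for s in client_states
    ]

# Example: two clients with differing prompts before aggregation.
clients = [
    {"visual_prompt": [1.0, 2.0], "text_prompt": [0.1]},
    {"visual_prompt": [3.0, 4.0], "text_prompt": [0.9]},
]
new_states = partial_prompt_aggregate(clients)
```

After the round, both clients share the same visual prompt while their text prompts still differ, which is the collaboration/personalization split the summary describes.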