Local K-Similarity Constraint for Federated Learning with Label Noise

📅 2025-11-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
Federated learning (FL) suffers severe model contamination in settings dominated by clients with noisy labels, where existing robust methods either require a large number of clean clients or centralized pretraining, compromising both robustness and communication efficiency. To address this, we propose an architecture-agnostic local regularization framework that decouples pretrained representations from task-specific classifiers. Leveraging self-supervised learning, it constructs intra-client K-nearest-neighbor similarity constraints over local samples and jointly optimizes these with the downstream classification objective. Crucially, no model architecture or pretraining weights are shared across clients, drastically reducing communication overhead. Evaluated on multiple computer vision and medical imaging benchmarks, our method consistently outperforms state-of-the-art FL algorithms under high label noise. It effectively mitigates the adverse impact of noisy clients on the global model, simultaneously alleviating the dual bottlenecks of dependency on clean clients and excessive communication costs.
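The summary describes an intra-client regularizer: neighbors are found in a frozen self-supervised embedding space, and the classifier is penalized when its predictions for those neighbors diverge. The paper does not publish code here, so the following is a minimal NumPy sketch under assumed design choices (squared-distance neighbor search, mean-squared divergence between predicted distributions, a hypothetical weight `lam`); the actual method may differ in its distance metric and divergence measure.

```python
import numpy as np

def knn_similarity_loss(embeddings, probs, k=3):
    """K-similarity regularizer (sketch): for each local sample, find its k
    nearest neighbors in the pretrained (self-supervised) embedding space
    and penalize divergence between the classifier's predicted class
    distributions of a sample and its neighbors."""
    # pairwise squared Euclidean distances in the pretrained representation space
    d2 = ((embeddings[:, None, :] - embeddings[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)            # exclude each sample itself
    nbrs = np.argsort(d2, axis=1)[:, :k]    # indices of the k nearest neighbors
    # squared difference between each sample's predicted distribution
    # and those of its neighbors, averaged over samples and neighbors
    diffs = probs[:, None, :] - probs[nbrs]  # shape (n, k, num_classes)
    return float((diffs ** 2).sum(-1).mean())

def local_objective(task_loss, embeddings, probs, lam=0.1, k=3):
    # joint local objective: downstream task loss + lam * K-similarity constraint
    # (lam is a hypothetical trade-off weight, not taken from the paper)
    return task_loss + lam * knn_similarity_loss(embeddings, probs, k)
```

Because only the frozen embeddings are consulted for neighbor search, the classifier backbone never needs to match the pretrained model's architecture, which is the sense in which the constraint is architecture-agnostic.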

📝 Abstract
Federated learning on clients with noisy labels is a challenging problem, as such clients can infiltrate the global model, impacting the overall generalizability of the system. Existing methods proposed to handle noisy clients assume that a sufficient number of clients with clean labels are available, which can be leveraged to learn a robust global model while dampening the impact of noisy clients. This assumption fails when a high number of heterogeneous clients contain noisy labels, making the existing approaches ineffective. In such scenarios, it is important to locally regularize the clients before communication with the global model, to ensure the global model isn't corrupted by noisy clients. While pre-trained self-supervised models can be effective for local regularization, existing centralized approaches relying on pretrained initialization are impractical in a federated setting due to the potentially large size of these models, which increases communication costs. In that line, we propose a regularization objective for client models that decouples the pre-trained and classification models by enforcing similarity between close data points within the client. We leverage the representation space of a self-supervised pretrained model to evaluate the closeness among examples. This regularization, when applied with the standard objective function for the downstream task in standard noisy federated settings, significantly improves performance, outperforming existing state-of-the-art federated methods in multiple computer vision and medical image classification benchmarks. Unlike other techniques that rely on self-supervised pretrained initialization, our method does not require the pretrained model and classifier backbone to share the same architecture, making it architecture-agnostic.
Problem

Research questions and friction points this paper is trying to address.

Addressing federated learning challenges with widespread label noise across clients
Proposing local regularization without requiring shared pretrained model architecture
Improving global model robustness against noisy clients in heterogeneous settings
Innovation

Methods, ideas, or system contributions that make the work stand out.

Local regularization using K-similarity constraint
Decouples pretrained and classification model architectures
Leverages self-supervised representations without model sharing