RobustFSM: Submodular Maximization in Federated Setting with Malicious Clients

๐Ÿ“… 2025-11-03
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
Malicious clients in federated learning can inject false information, severely degrading the robustness of submodular maximizationโ€”a critical task for representative subset selection under privacy constraints. Method: This paper proposes the first robust optimization framework specifically designed for federated submodular maximization. It integrates a robust aggregation mechanism with a dynamic client behavior assessment strategy within an iterative federated architecture, enabling secure information fusion and high-fidelity subset selection. The framework employs theoretically grounded outlier detection and weighted aggregation to withstand practical attacks, including Byzantine attacks and gradient poisoning. Results: Extensive experiments on multiple real-world datasets demonstrate that, under strong adversarial conditions, the proposed method improves subset selection quality by up to 200% over baseline federated submodular algorithms and significantly outperforms existing robust aggregation schemes. This work establishes a new paradigm for privacy-preserving, distributed representative sampling.
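The summary above describes an iterative federated loop in which clients report local information and the server fuses it robustly. As a minimal sketch (not the paper's exact RobustFSM algorithm), one common way to realize this is a federated greedy loop where each client reports marginal gains for the candidate elements and the server aggregates them with a trimmed mean, so a minority of malicious reports cannot dominate the fusion. The function names and the trimmed-mean choice here are illustrative assumptions:

```python
import numpy as np

def robust_federated_greedy(client_gain_fns, candidates, k, trim_frac=0.2):
    """Illustrative sketch of robust aggregation for federated submodular
    maximization (an assumption, not the paper's exact method): each client
    reports the marginal gain of every remaining candidate, and the server
    fuses the reports with a per-candidate trimmed mean before the greedy
    selection step."""
    selected = []
    remaining = list(candidates)
    for _ in range(k):
        # Each row holds one client's reported marginal gains for `remaining`.
        reports = np.array([[fn(selected, e) for e in remaining]
                            for fn in client_gain_fns])
        # Trimmed mean per candidate: sort the reports column-wise and drop
        # the `t` most extreme values at each end, limiting outlier influence.
        t = int(trim_frac * len(client_gain_fns))
        sorted_reports = np.sort(reports, axis=0)
        fused = sorted_reports[t:len(client_gain_fns) - t].mean(axis=0)
        # Greedy step: pick the candidate with the best fused marginal gain.
        best = int(np.argmax(fused))
        selected.append(remaining.pop(best))
    return selected
```

With five honest clients and one client that wildly inflates the gain of a worthless element, the trimmed mean discards the inflated report and the greedy loop still picks the genuinely best elements.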

๐Ÿ“ Abstract
Submodular maximization is an optimization problem benefiting many machine learning applications, where we seek a small subset that best represents an extremely large dataset. We focus on the federated setting, where the data are locally owned by decentralized clients who have their own definitions of representability quality. This setting requires repeated aggregation of local information computed by the clients. While the main motivation is to respect the privacy and autonomy of the clients, the federated setting is vulnerable to client misbehavior: malicious clients may share fake information. An analogy is the backdoor attack in conventional federated learning, but our challenge differs fundamentally due to the unique characteristics of submodular maximization. We propose RobustFSM, a federated submodular maximization solution that is robust to various practical client attacks. Its performance is substantiated with an empirical evaluation study using real-world datasets. Numerical results show that the solution quality of RobustFSM substantially exceeds that of the conventional federated algorithm when attacks are severe. The degree of this improvement depends on the dataset and attack scenarios, and can be as high as 200%.
Problem

Research questions and friction points this paper is trying to address.

Federated submodular maximization is vulnerable to malicious clients sharing fake information
RobustFSM defends against practical client attacks in distributed optimization
The solution improves subset selection quality by up to 200% under severe attacks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Federated submodular maximization with malicious clients
RobustFSM solution against various client attacks
Empirical validation using real-world datasets
๐Ÿ”Ž Similar Papers
No similar papers found.
D
Duc-Anh Tran
Department of Computer Science, University of Massachusetts, Boston, USA
Dung Truong
Department of Computer Science, University of Massachusetts, Boston, USA
Duy Le
Post & Telecommunication Institute of Technology in Hanoi, Vietnam
Computer Science · Artificial Intelligence · Machine Learning