Aequa: Fair Model Rewards in Collaborative Learning via Slimmable Networks

📅 2025-02-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address unfair reward allocation in collaborative learning, this paper proposes a contribution-driven fair allocation framework based on slimmable neural networks (SNNs). Methodologically, it trains a shared global model with adjustable width whose performance degrades gracefully as the width decreases, then designs a post-training width-allocation algorithm that assigns each participant a personalized submodel whose width matches that participant's contribution. The key innovation is the first integration of SNNs into collaborative learning incentive mechanisms, establishing a principled mapping between model width and contribution and thereby enabling fine-grained fairness guarantees with provable convergence. Extensive experiments across multiple datasets and architectures confirm that the utility of the allocated submodels is consistent with the fairness of the allocation. Theoretical analysis establishes convergence, and an extension supports reward distribution during training.
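The post-training allocation step, mapping contribution scores to submodel widths, can be sketched as follows. The specific allocation rule, the width grid, and all names here are illustrative assumptions, not the paper's actual algorithm:

```python
# Hypothetical sketch of a contribution-to-width allocation rule.
# Slimmable networks are typically trained at a fixed grid of width
# multipliers; this grid is an assumption, not taken from the paper.
SUPPORTED_WIDTHS = [0.25, 0.5, 0.75, 1.0]

def allocate_widths(contributions):
    """Map each participant's contribution score to a submodel width.

    The top contributor receives the full-width model; every other
    participant gets the largest supported width not exceeding their
    contribution normalized by the top contribution.
    """
    top = max(contributions)
    widths = []
    for c in contributions:
        ratio = c / top if top > 0 else 1.0
        # snap down to the nearest supported width (at least the smallest)
        eligible = [w for w in SUPPORTED_WIDTHS if w <= ratio]
        widths.append(eligible[-1] if eligible else SUPPORTED_WIDTHS[0])
    return widths

print(allocate_widths([10.0, 7.0, 3.0, 1.0]))  # → [1.0, 0.5, 0.25, 0.25]
```

Because every submodel is a slice of the same shared parameters, this kind of rule distributes rewards without training separate models per participant.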

📝 Abstract
Collaborative learning enables multiple participants to learn a single global model by exchanging focused updates instead of sharing data. One of the core challenges in collaborative learning is ensuring that participants are rewarded fairly for their contributions, which entails two key sub-problems: contribution assessment and reward allocation. This work focuses on fair reward allocation, where the participants are incentivized through model rewards - differentiated final models whose performance is commensurate with the contribution. In this work, we leverage the concept of slimmable neural networks to collaboratively learn a shared global model whose performance degrades gracefully with a reduction in model width. We also propose a post-training fair allocation algorithm that determines the model width for each participant based on their contributions. We theoretically study the convergence of our proposed approach and empirically validate it using extensive experiments on different datasets and architectures. We also extend our approach to enable training-time model reward allocation.
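The "graceful degradation with reduced width" property rests on the standard slimmable-network mechanic of slicing a shared weight tensor. A minimal NumPy sketch of that mechanic (assumed from the slimmable-networks literature, not the paper's implementation):

```python
import numpy as np

class SlimmableLinear:
    """Toy linear layer whose narrower submodels share the full model's
    parameters, so a width-0.5 submodel is a literal subnetwork."""

    def __init__(self, in_features, out_features, rng=None):
        rng = rng or np.random.default_rng(0)
        self.weight = rng.standard_normal((out_features, in_features))
        self.bias = np.zeros(out_features)

    def forward(self, x, width=1.0):
        # keep only the first `width` fraction of output units
        k = max(1, int(round(self.weight.shape[0] * width)))
        return x @ self.weight[:k].T + self.bias[:k]

layer = SlimmableLinear(8, 4)
x = np.ones(8)
print(layer.forward(x, width=0.5).shape)  # → (2,)
print(layer.forward(x, width=1.0).shape)  # → (4,)
```

Training the shared weights jointly at several widths is what makes performance degrade smoothly rather than collapse when a participant is handed a narrow slice.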
Problem

Research questions and friction points this paper is trying to address.

Fair reward allocation in collaborative learning
Slimmable networks for model differentiation
Contribution-based model width determination
Innovation

Methods, ideas, or system contributions that make the work stand out.

Slimmable neural networks
Post-training fair allocation
Training-time model reward allocation