Evidential Trust-Aware Model Personalization in Decentralized Federated Learning for Wearable IoT

📅 2025-12-22
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the challenges of personalized modeling and trustworthy collaboration in decentralized federated learning (DFL) that arise from statistical heterogeneity among edge devices, this paper proposes Murmura, the first trust-aware personalization framework that uses epistemic uncertainty (derived from evidential deep learning) as a peer compatibility criterion. Murmura models class outputs with Dirichlet distributions, scores peer compatibility by cross-evaluating peer models on local validation data, and performs adaptive trust-weighted aggregation with dynamic thresholds over decentralized graph-structured communication, enabling automatic detection of distributional mismatch. Evaluated under non-IID settings on the UCI HAR, PAMAP2, and PPG-DaLiA datasets, Murmura limits performance degradation to 0.9%, versus 19.3% for the baseline, converges 7.4× faster, and remains stable across hyperparameter choices.
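As a concrete illustration of the Dirichlet-output modeling described above, the NumPy sketch below computes epistemic (vacuity) uncertainty under the standard evidential parameterization (alpha_k = evidence_k + 1, u = K / sum(alpha)); the function name and example evidence values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def dirichlet_epistemic_uncertainty(evidence):
    """Epistemic (vacuity) uncertainty for one sample under the standard
    Dirichlet evidential parameterization: alpha_k = evidence_k + 1,
    u = K / sum(alpha). u approaches 1 when the model has gathered almost
    no evidence for the input; it approaches 0 as evidence accumulates."""
    evidence = np.asarray(evidence, dtype=float)   # e_k >= 0, shape (K,)
    alpha = evidence + 1.0                         # Dirichlet parameters
    strength = alpha.sum()                         # S = sum_k alpha_k
    num_classes = alpha.shape[0]
    uncertainty = num_classes / strength           # vacuity u = K / S
    expected_probs = alpha / strength              # E[p_k] = alpha_k / S
    return uncertainty, expected_probs

# A peer model that produces almost no evidence on this node's local data
# (distributional mismatch) yields high epistemic uncertainty.
u_mismatch, _ = dirichlet_epistemic_uncertainty([0.1, 0.2, 0.1])  # u ≈ 0.88
u_match, _ = dirichlet_epistemic_uncertainty([40.0, 2.0, 1.0])    # u ≈ 0.07
print(f"mismatched peer: u={u_mismatch:.2f}, compatible peer: u={u_match:.2f}")
```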

📝 Abstract
Decentralized federated learning (DFL) enables collaborative model training across edge devices without centralized coordination, offering resilience against single points of failure. However, statistical heterogeneity arising from non-identically distributed local data creates a fundamental challenge: nodes must learn personalized models adapted to their local distributions while selectively collaborating with compatible peers. Existing approaches either enforce a single global model that fits no one well, or rely on heuristic peer selection mechanisms that cannot distinguish between peers with genuinely incompatible data distributions and those with valuable complementary knowledge. We present Murmura, a framework that leverages evidential deep learning to enable trust-aware model personalization in DFL. Our key insight is that epistemic uncertainty from Dirichlet-based evidential models directly indicates peer compatibility: high epistemic uncertainty when a peer's model evaluates local data reveals distributional mismatch, enabling nodes to exclude incompatible influence while maintaining personalized models through selective collaboration. Murmura introduces a trust-aware aggregation mechanism that computes peer compatibility scores through cross-evaluation on local validation samples and personalizes model aggregation based on evidential trust with adaptive thresholds. Evaluation on three wearable IoT datasets (UCI HAR, PAMAP2, PPG-DaLiA) demonstrates that Murmura reduces performance degradation from IID to non-IID conditions compared to baseline (0.9% vs. 19.3%), achieves 7.4× faster convergence, and maintains stable accuracy across hyperparameter choices. These results establish evidential uncertainty as a principled foundation for compatibility-aware personalization in decentralized heterogeneous environments.
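To make the cross-evaluation step in the abstract concrete: a node can score each peer by running the peer's evidential model on its own validation samples and averaging the resulting epistemic uncertainty. The sketch below assumes a simple score of one minus mean vacuity; `peer_model`, `local_val_x`, and the scoring rule itself are illustrative assumptions rather than the paper's exact specification.

```python
import numpy as np

def peer_compatibility_score(peer_model, local_val_x, num_classes):
    """Cross-evaluate a peer's evidential model on local validation samples
    and summarize compatibility as 1 - mean epistemic uncertainty.
    `peer_model(x)` is assumed to return non-negative evidence vectors of
    shape (batch, num_classes)."""
    evidence = np.maximum(peer_model(local_val_x), 0.0)
    alpha = evidence + 1.0
    strength = alpha.sum(axis=1)              # S_i per validation sample
    vacuity = num_classes / strength          # u_i = K / S_i
    return 1.0 - float(vacuity.mean())        # high score => compatible peer

def toy_peer(x):
    # Emits uniform low evidence regardless of input, i.e. a peer that has
    # learned nothing useful about this node's distribution.
    return np.full((x.shape[0], 3), 0.2)

x_val = np.random.randn(8, 6)  # hypothetical local validation batch
print(peer_compatibility_score(toy_peer, x_val, num_classes=3))  # ≈ 0.17, low trust
```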
Problem

Research questions and friction points this paper is trying to address.

Address statistical heterogeneity in decentralized federated learning across wearable IoT devices
Enable trust-aware model personalization by distinguishing compatible from incompatible peers
Replace heuristic peer selection with principled evidential uncertainty for compatibility assessment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses evidential deep learning for trust-aware model personalization
Computes peer compatibility via cross-evaluation on local validation data
Personalizes aggregation with adaptive thresholds based on evidential trust (see the sketch after this list)
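A minimal sketch of threshold-gated, trust-weighted aggregation over flattened parameter vectors, illustrating the last bullet above. The adaptive threshold used here (the mean of the peer scores) and the function names are assumptions made for illustration; the paper's actual rule may differ.

```python
import numpy as np

def trust_weighted_aggregate(local_params, peer_params, peer_scores):
    """Aggregate only peers whose compatibility score clears an adaptive
    threshold (here: the mean peer score, an illustrative choice), weighting
    each admitted peer by its score. The local model always keeps weight 1,
    so the result stays personalized to the node's own distribution."""
    scores = np.asarray(peer_scores, dtype=float)
    threshold = scores.mean() if scores.size else 0.0
    weights = [1.0]                                   # local model's own weight
    models = [np.asarray(local_params, dtype=float)]
    for params, score in zip(peer_params, scores):
        if score >= threshold:                        # drop incompatible peers
            weights.append(score)
            models.append(np.asarray(params, dtype=float))
    weights = np.asarray(weights) / np.sum(weights)
    return sum(w * m for w, m in zip(weights, models))

local = np.array([0.0, 0.0])
peers = [np.array([1.0, 1.0]), np.array([10.0, -10.0])]
# The low-scoring second peer falls below the adaptive threshold and is excluded.
print(trust_weighted_aggregate(local, peers, peer_scores=[0.9, 0.1]))
```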