🤖 AI Summary
This work addresses the vulnerability of federated learning in resource-constrained, heterogeneous industrial networks, where unreliable clients, sensing noise, and malicious updates undermine system integrity. Existing trust mechanisms often rely on fixed parameters or simplistic adaptive rules, rendering them ill-suited for dynamic environments. To overcome these limitations, we propose a lightweight, server-side agentic trust coordination framework that decouples observation, reasoning, and action. Without altering client-side procedures or increasing communication overhead, our approach establishes a closed-loop control mechanism that continuously monitors time series of system behaviour and trust signals, autonomously infers their evolving trends, and adjusts trust policies precisely when instability is detected. This significantly enhances the robustness and stability of federated learning in dynamic industrial settings, enabling sustainable and resilient distributed intelligent collaboration.
📝 Abstract
Distributed intelligence in industrial networks increasingly integrates sensing, communication, and computation across heterogeneous and resource-constrained devices. Federated learning (FL) enables collaborative model training in such environments, but its reliability is affected by inconsistent client behaviour, noisy sensing conditions, and the presence of faulty or adversarial updates. Trust-based mechanisms are commonly used to mitigate these effects, yet most remain statistical and heuristic, relying on fixed parameters or simple adaptive rules that struggle to accommodate changing operating conditions.
This paper presents a lightweight agentic trust coordination approach for FL in sustainable and resilient industrial networks. The proposed Agentic Trust Control Layer operates as a server-side control loop that observes trust-related and system-level signals, interprets their evolution over time, and applies targeted trust adjustments when instability is detected. The approach extends prior adaptive trust mechanisms by enabling context-aware intervention decisions, rather than relying on fixed or purely reactive parameter updates. By explicitly separating observation, reasoning, and action, the proposed framework supports stable FL operation without modifying client-side training or increasing communication overhead.
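The observe-reason-act separation described above can be illustrated with a minimal sketch. The class name, window size, variance-based instability test, and threshold-tightening rule below are all illustrative assumptions, not the paper's actual design; they show only how a server-side loop could monitor trust signals over time and intervene without touching client-side training.

```python
from collections import deque

class AgenticTrustControlLayer:
    """Hypothetical sketch of a server-side observe/reason/act trust loop."""

    def __init__(self, window=10, instability_limit=0.15):
        self.history = deque(maxlen=window)   # recent per-round mean trust scores
        self.instability_limit = instability_limit
        self.trust_threshold = 0.5            # trust policy parameter under control

    def observe(self, round_trust_scores):
        """Record the mean client trust score reported for this round."""
        self.history.append(sum(round_trust_scores) / len(round_trust_scores))

    def reason(self):
        """Flag instability when the trust signal's std-dev exceeds a limit.

        A variance check is an assumed stand-in for the paper's trend inference.
        """
        if len(self.history) < self.history.maxlen:
            return False  # not enough evidence yet
        mean = sum(self.history) / len(self.history)
        var = sum((x - mean) ** 2 for x in self.history) / len(self.history)
        return var ** 0.5 > self.instability_limit

    def act(self):
        """Tighten the trust threshold only when instability is detected."""
        if self.reason():
            self.trust_threshold = min(0.9, self.trust_threshold + 0.05)
            return True
        return False
```

Because the loop consumes only signals the server already has (per-round trust scores), it adds no client-side changes or communication overhead, matching the closed-loop design sketched in the abstract.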