🤖 AI Summary
This work addresses the challenges of collaborative AI in intelligent shipping, where unstable communications, limited backhaul bandwidth, and commercial sensitivity hinder conventional federated learning due to its reliance on a central server. The authors propose a serverless, gossip-based collaborative learning framework that decouples the control and data planes, enabling dynamic scheduling of participating vessels, communication links, compression ratios, and recovery mechanisms. For the first time, carbon emissions, communication costs, and long-term participation fairness are integrated into the design of the gossip protocol, yielding a resource-aware and sustainable collaborative learning system. Evaluated on a real-world predictive maintenance task for bulk carrier engines, the approach combines compressed gossip exchanges, adaptive node selection, link activation, and trajectory-driven maritime simulations to significantly reduce both carbon footprint and communication overhead relative to decentralized baselines while maintaining high model accuracy.
📝 Abstract
Smart shipping operations increasingly depend on collaborative AI, yet the underlying data are generated across vessels with uneven connectivity, limited backhaul, and clear commercial sensitivity. In such settings, server-coordinated federated learning (FL) rests on weak systems assumptions, depending on a reachable aggregation point and repeated wide-area synchronization, both of which are difficult to guarantee in maritime networks. A serverless gossip approach is therefore a more natural fit, but existing methods still treat communication mainly as an optimization bottleneck, rather than as a resource that must be managed jointly with carbon cost, reliability, and long-term participation balance. In this context, this paper presents CARGO, a carbon-aware gossip orchestration framework for smart shipping. CARGO separates learning into a control plane and a data plane. The data plane performs local optimization with compressed gossip exchange, while the control plane decides, at each round, which vessels should participate, which communication edges should be activated, how aggressively updates should be compressed, and when recovery actions should be triggered. We evaluate CARGO under a predictive-maintenance scenario using operational bulk-carrier engine data and a trace-driven maritime communication protocol that captures client dropout, partial participation, packet loss, and multiple connectivity regimes, derived from mobility-aware vessel interactions. Across the tested stress settings, CARGO consistently remains in the high-accuracy regime while reducing carbon footprint and communication overhead compared to accuracy-competitive decentralized baselines. Overall, the performance evaluation demonstrates that CARGO is a feasible and practical solution for reliable and resource-conscious maritime AI deployment.
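To make the control/data-plane split concrete, the sketch below shows one round of a carbon-aware gossip scheme in the spirit of the abstract. All names (`select_edges`, `gossip_round`, the carbon and participation tables, top-k sparsification as the compressor) are illustrative assumptions, not the paper's actual API: the control plane ranks candidate vessel-to-vessel links by carbon cost, breaking ties toward vessels that have participated least (long-term fairness), and the data plane performs pairwise averaging over compressed model updates.

```python
def top_k_sparsify(vec, k):
    """Compression: keep the k largest-magnitude entries, zero the rest."""
    keep = set(sorted(range(len(vec)), key=lambda i: abs(vec[i]), reverse=True)[:k])
    return [v if i in keep else 0.0 for i, v in enumerate(vec)]

def select_edges(edges, carbon, participation, budget):
    """Control plane: activate the cheapest `budget` links by carbon cost,
    preferring under-served vessels when carbon costs tie (fairness)."""
    def score(edge):
        u, v = edge
        return (carbon[edge], participation[u] + participation[v])
    return sorted(edges, key=score)[:budget]

def gossip_round(models, edges, carbon, participation, budget, k):
    """One round: control plane picks links, data plane gossips compressed updates."""
    active = select_edges(edges, carbon, participation, budget)
    for (u, v) in active:
        mu = top_k_sparsify(models[u], k)   # compressed update sent u -> v
        mv = top_k_sparsify(models[v], k)   # compressed update sent v -> u
        # Pairwise gossip averaging on the compressed exchanges.
        models[u] = [(a + b) / 2 for a, b in zip(models[u], mv)]
        models[v] = [(a + b) / 2 for a, b in zip(models[v], mu)]
        participation[u] += 1
        participation[v] += 1
    return active

# Toy example: three vessels, a carbon budget of one link per round.
models = {0: [1.0, 0.0], 1: [0.0, 1.0], 2: [2.0, 2.0]}
edges = [(0, 1), (1, 2), (0, 2)]
carbon = {(0, 1): 1.0, (1, 2): 5.0, (0, 2): 2.0}      # per-link carbon cost
participation = {0: 0, 1: 0, 2: 0}                     # fairness counters
active = gossip_round(models, edges, carbon, participation, budget=1, k=2)
```

With a budget of one, only the lowest-carbon link (0, 1) is activated, vessels 0 and 1 average their exchanged models to [0.5, 0.5], and their participation counters advance so that the tie-breaking rule favors vessel 2 in later rounds.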