🤖 AI Summary
In wide-area sensing scenarios, centralized scientific machine learning (SciML) faces significant challenges including high communication latency, excessive energy consumption, and insufficient physical consistency. To address these issues, this work proposes EPIC, a novel distributed SciML framework that integrates dual guidance from hardware constraints and physical laws within an edge–cloud collaborative architecture. In EPIC, edge devices perform lightweight encoding, while a central node executes physics-aware decoding; latent-space feature transmission and cross-attention mechanisms model wavefield coupling among receivers. Experiments on full-waveform inversion demonstrate that, on a platform with five edge nodes and one central node, EPIC reduces communication latency by 8.9× and energy consumption by 33.8× compared to baseline methods, while achieving higher reconstruction accuracy on 8 out of 10 OpenFWI datasets.
📝 Abstract
Scientific machine learning (SciML) is increasingly applied to in-field processing, control, and monitoring; however, wide-area sensing, real-time demands, and strict energy and reliability constraints make centralized SciML implementations impractical. Most SciML models assume raw data aggregation at a central node, incurring prohibitively high communication latency and energy costs; yet naively distributing models designed for general-purpose ML often breaks essential physical principles, degrading performance. To address these challenges, we introduce EPIC, a hardware- and physics-co-guided distributed SciML framework, using full-waveform inversion (FWI) as a representative task. EPIC performs lightweight local encoding on end devices and physics-aware decoding at a central node. By transmitting compact latent features rather than high-volume raw data and by using cross-attention to capture inter-receiver wavefield coupling, EPIC significantly reduces communication cost while preserving physical fidelity. Evaluated on a distributed testbed with five end devices and one central node, and across 10 datasets from OpenFWI, EPIC reduces latency by 8.9$\times$ and communication energy by 33.8$\times$, while also improving reconstruction fidelity on 8 out of 10 datasets.
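The fusion step described above — each receiver's compact latent feature attending over the latents of all other receivers to model wavefield coupling — can be illustrated with a minimal scaled dot-product cross-attention sketch. This is not EPIC's actual implementation; the dimensions, the `cross_attention` helper, and the use of plain NumPy are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax along the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys_values):
    """Scaled dot-product cross-attention (illustrative sketch):
    each query vector attends over all receivers' latent features."""
    d_k = keys_values.shape[-1]
    scores = queries @ keys_values.T / np.sqrt(d_k)   # (Nq, Nr)
    weights = softmax(scores, axis=-1)                # rows sum to 1
    return weights @ keys_values                      # (Nq, d_k)

# Toy setup (assumed sizes): 5 edge receivers, each transmitting a
# 16-dimensional latent vector to the central node.
rng = np.random.default_rng(0)
n_receivers, d = 5, 16
latents = rng.standard_normal((n_receivers, d))

# Fusion at the central node: every receiver's latent attends over
# all receivers' latents, mixing in inter-receiver coupling.
fused = cross_attention(latents, latents)
print(fused.shape)  # (5, 16)
```

In EPIC's setting, only the small `latents` array would cross the network, which is the source of the reported latency and energy savings relative to shipping raw waveform data.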