Two Teachers Better Than One: Hardware-Physics Co-Guided Distributed Scientific Machine Learning

📅 2026-03-09
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
In wide-area sensing scenarios, centralized scientific machine learning (SciML) faces significant challenges, including high communication latency, excessive energy consumption, and insufficient physical consistency. To address these issues, this work proposes EPIC, a distributed SciML framework that integrates dual guidance from hardware constraints and physical laws within an edge–cloud collaborative architecture. In EPIC, edge devices perform lightweight encoding while a central node executes physics-aware decoding; compact latent features are transmitted in place of raw data, and cross-attention models the wavefield coupling among receivers. Experiments on full-waveform inversion show that, on a testbed with five edge devices and one central node, EPIC reduces communication latency by 8.9× and communication energy by 33.8× compared to baseline methods, while achieving higher reconstruction accuracy on 8 of the 10 OpenFWI datasets.
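
The encoder–decoder split described above can be sketched in a few lines of PyTorch. The module names, layer sizes, and tensor shapes below are illustrative assumptions, not the authors' implementation; the sketch only shows the overall pattern: each edge node compresses its seismogram into a compact latent vector, and the central node fuses the per-receiver latents with cross-attention before decoding a velocity map.

```python
# Minimal sketch (assumed shapes and layer sizes, not the paper's code) of the
# edge-encoder / central-decoder split with cross-attention fusion.
import torch
import torch.nn as nn

class EdgeEncoder(nn.Module):
    """Lightweight per-receiver encoder run on an edge device."""
    def __init__(self, in_channels: int = 1, latent_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d((8, 8)),
            nn.Flatten(),
            nn.Linear(32 * 8 * 8, latent_dim),  # compact latent sent over the network
        )

    def forward(self, seismogram: torch.Tensor) -> torch.Tensor:
        return self.net(seismogram)

class CentralDecoder(nn.Module):
    """Decoder at the central node: cross-attention couples latent features
    from different receivers before decoding a velocity map."""
    def __init__(self, latent_dim: int = 64, num_heads: int = 4, map_size: int = 70):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(latent_dim, num_heads, batch_first=True)
        self.decode = nn.Sequential(
            nn.Linear(latent_dim, map_size * map_size),
            nn.Unflatten(1, (1, map_size, map_size)),
        )

    def forward(self, latents: torch.Tensor) -> torch.Tensor:
        # latents: (batch, num_receivers, latent_dim) gathered from the edge nodes
        fused, _ = self.cross_attn(latents, latents, latents)
        return self.decode(fused.mean(dim=1))  # (batch, 1, map_size, map_size)

# Usage: five edge nodes each encode one seismogram; the central node stacks
# the five latent vectors and reconstructs the velocity map.
encoders = [EdgeEncoder() for _ in range(5)]
decoder = CentralDecoder()
shots = [torch.randn(1, 1, 1000, 70) for _ in range(5)]
latents = torch.stack([enc(x) for enc, x in zip(encoders, shots)], dim=1)
velocity_map = decoder(latents)
```

Note that the paper's decoder is additionally physics-aware; this sketch omits any physics-based loss or constraint and only illustrates the hardware side of the co-design, namely what runs at the edge versus at the center.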

📝 Abstract
Scientific machine learning (SciML) is increasingly applied to in-field processing, controlling, and monitoring; however, wide-area sensing, real-time demands, and strict energy and reliability constraints make centralized SciML implementation impractical. Most SciML models assume raw data aggregation at a central node, incurring prohibitively high communication latency and energy costs; yet, distributing models developed for general-purpose ML often breaks essential physical principles, resulting in degraded performance. To address these challenges, we introduce EPIC, a hardware- and physics-co-guided distributed SciML framework, using full-waveform inversion (FWI) as a representative task. EPIC performs lightweight local encoding on end devices and physics-aware decoding at a central node. By transmitting compact latent features rather than high-volume raw data and by using cross-attention to capture inter-receiver wavefield coupling, EPIC significantly reduces communication cost while preserving physical fidelity. Evaluated on a distributed testbed with five end devices and one central node, and across 10 datasets from OpenFWI, EPIC reduces latency by 8.9× and communication energy by 33.8×, while even improving reconstruction fidelity on 8 out of 10 datasets.
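
A back-of-envelope calculation makes the communication saving concrete. The shapes below (an OpenFWI-style seismogram of 5 sources × 1000 time samples × 70 receivers in float32, and an assumed 64-dimensional latent per source) are illustrative assumptions; the paper's measured 8.9× latency and 33.8× energy reductions come from its hardware testbed, not from this arithmetic.

```python
# Back-of-envelope illustration (assumed shapes, not measurements from the paper)
# of the per-inference payload an edge node transmits: raw seismogram vs. latents.

# Assumed raw payload: 5 sources x 1000 time steps x 70 receivers, float32.
raw_bytes = 5 * 1000 * 70 * 4            # 1,400,000 bytes ≈ 1.34 MiB

# Assumed latent payload: one 64-dimensional float32 vector per source.
latent_bytes = 5 * 64 * 4                # 1,280 bytes = 1.25 KiB

print(f"raw:       {raw_bytes / 1024:.1f} KiB")
print(f"latent:    {latent_bytes / 1024:.2f} KiB")
print(f"reduction: {raw_bytes / latent_bytes:.0f}x")   # ~1094x smaller payload
```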
Problem

Research questions and friction points this paper is trying to address.

Scientific Machine Learning
Distributed Computing
Communication Efficiency
Physical Consistency
Edge Computing
Innovation

Methods, ideas, or system contributions that make the work stand out.

Distributed Scientific Machine Learning
Hardware-Physics Co-Design
Full-Waveform Inversion
Latent Feature Transmission
Physics-Aware Decoding