Can Data-Driven Dynamics Reveal Hidden Physics? There Is A Need for Interpretable Neural Operators

📅 2025-10-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the interpretability and physical consistency of neural operators—specifically, how they implicitly learn governing physical laws and how prior physical knowledge can be systematically integrated to improve generalization and support scientific discovery. We propose a unified dual-domain (functional/spatial) classification framework to characterize the intrinsic dynamical modeling pathways of neural operators. Methodologically, we develop a synergistic approach combining multi-scale dual-space modeling, functional basis expansion, and grid-aware learning, enabling joint data-driven prediction and physical structure identification. Experiments demonstrate that lightweight dual-space models achieve state-of-the-art performance, confirming that neural operators intrinsically capture PDE structure. Our core contribution is the first principled interpretability framework for neural operators tailored to physics discovery, providing both theoretical foundations and practical paradigms for trustworthy scientific machine learning.

📝 Abstract
Recently, neural operators have emerged as powerful tools for learning mappings between function spaces, enabling data-driven simulations of complex dynamics. Despite their successes, their learning mechanisms remain underexplored. In this work, we classify neural operators into two types: (1) spatial-domain models that learn on grids, and (2) functional-domain models that learn with function bases. We present several viewpoints based on this classification and focus on learning data-driven dynamics that adhere to physical principles. Specifically, we provide a way to explain the prediction-making process of neural operators and show that neural operators can learn hidden physical patterns from data. However, this explanation method is limited to specific situations, highlighting the urgent need for generalizable explanation methods. Next, we show that a simple dual-space multi-scale model can achieve SOTA performance, and we believe that dual-space, multi-spatial-scale models hold significant potential for learning complex physics and warrant further investigation. Lastly, we discuss the critical need for principled frameworks to incorporate known physics into neural operators, enabling better generalization and uncovering more hidden physical phenomena.
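To make the spatial/functional distinction concrete, the following is a minimal numpy sketch (not the paper's implementation; all function names, shapes, and parameters here are hypothetical): a spatial-domain layer acts locally on grid values via a convolution stencil, a functional-domain layer acts globally by reweighting a truncated Fourier basis (in the spirit of FNO-style spectral layers), and a dual-space block simply sums the two branches.

```python
import numpy as np

rng = np.random.default_rng(0)

def spatial_layer(u, kernel):
    # Spatial-domain operator: a local stencil (convolution) on the grid.
    return np.convolve(u, kernel, mode="same")

def functional_layer(u, weights, n_modes=8):
    # Functional-domain operator: transform to a Fourier basis, apply
    # learned complex weights to the lowest n_modes, transform back.
    u_hat = np.fft.rfft(u)
    out_hat = np.zeros_like(u_hat)
    out_hat[:n_modes] = weights * u_hat[:n_modes]
    return np.fft.irfft(out_hat, n=len(u))

def dual_space_block(u, kernel, weights, n_modes=8):
    # Dual-space block: local (grid) and global (spectral) branches in
    # parallel, loosely mirroring the dual-space multi-scale idea.
    return spatial_layer(u, kernel) + functional_layer(u, weights, n_modes)

# Toy input: one period of a sine wave on a 64-point grid.
u = np.sin(2 * np.pi * np.linspace(0, 1, 64, endpoint=False))
kernel = 0.1 * rng.standard_normal(5)
weights = rng.standard_normal(8) + 1j * rng.standard_normal(8)
v = dual_space_block(u, kernel, weights)
print(v.shape)  # (64,)
```

The point of the sketch is the inductive bias each branch carries: the convolution sees only neighboring grid points (suited to local differential structure), while the spectral branch couples all points at once through a few global modes, which is why combining both spaces is attractive for multi-scale dynamics.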
Problem

Research questions and friction points this paper is trying to address.

Interpreting neural operators' learning mechanisms for dynamics
Developing generalizable explanation methods for hidden physics
Incorporating physical principles into neural operator frameworks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Classifying neural operators into spatial and functional domains
Developing dual-space multi-scale models for complex physics
Integrating physics principles into neural operator frameworks