Do Two AI Scientists Agree?

šŸ“… 2025-04-03
šŸ“ˆ Citations: 0
✨ Influential: 0
šŸ¤– AI Summary
Whether multiple AI models converge to the same physical theory when solving identical scientific tasks remains an open question in AI-driven scientific discovery. Method: We train multiple Hamiltonian-Lagrangian Neural Networks (HLNNs) on classical mechanical systems, employing multi-random-seed aggregation and physics-informed modeling, and propose the interpretability framework MASS for systematic theoretical analysis. Results: (1) AI-discovered theories evolve much as theories have throughout the history of science, exhibiting spontaneous transitions from Hamiltonian to Lagrangian formulations; (2) increasing the model ensemble size yields multimodal theory distributions rather than unimodal convergence, revealing an intrinsic ā€œtheoretical diversityā€; (3) seed dependence is quantitatively shown to dominate the emergence and decay of theories, with validation extended to high-dimensional systems. This work establishes the first measurable, interpretable assessment of physical-theory consistency across AI models, enabling rigorous evaluation of theoretical robustness and reproducibility in AI-augmented physics.
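
Finding (1) refers to the two standard, equivalent descriptions of classical mechanics. For reference, their equations of motion are as follows (textbook notation, not taken from the paper itself):

```latex
% Hamiltonian formulation: first-order equations on phase space (q, p)
\dot{q} = \frac{\partial H}{\partial p},
\qquad
\dot{p} = -\frac{\partial H}{\partial q}

% Lagrangian formulation: second-order Euler-Lagrange equations in (q, \dot{q}),
% related to H by the Legendre transform L(q, \dot{q}) = p\,\dot{q} - H(q, p)
\frac{\mathrm{d}}{\mathrm{d}t}\,\frac{\partial L}{\partial \dot{q}}
  - \frac{\partial L}{\partial q} = 0
```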

šŸ“ Abstract
When two AI models are trained on the same scientific task, do they learn the same theory or two different theories? Throughout the history of science, we have witnessed the rise and fall of theories driven by experimental validation or falsification: many theories may co-exist when experimental data is lacking, but the space of surviving theories becomes more constrained as more experimental data becomes available. We show the same story holds for AI scientists. With increasingly many systems provided in the training data, AI scientists tend to converge in the theories they learn, although sometimes they form distinct groups corresponding to different theories. To mechanistically interpret what theories AI scientists learn and to quantify their agreement, we propose MASS: Hamiltonian-Lagrangian neural networks as AI Scientists, trained on standard problems in physics, with training results aggregated across many seeds to simulate different configurations of AI scientists. Our findings suggest that AI scientists switch from learning a Hamiltonian theory in simple setups to a Lagrangian formulation when more complex systems are introduced. We also observe strong seed dependence of the training dynamics and final learned weights, controlling the rise and fall of the relevant theories. We finally demonstrate that not only can our neural networks aid interpretability, they can also be applied to higher-dimensional problems.
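
To make the "neural network as AI scientist" setup concrete, here is a minimal sketch of the Hamiltonian side of such a model in PyTorch. The class name, network sizes, and loss are illustrative assumptions, not the paper's actual MASS implementation: a scalar MLP H(q, p) is trained so that Hamilton's equations reproduce observed time derivatives.

```python
# Minimal sketch (assumed PyTorch setup, not the paper's MASS code):
# a Hamiltonian neural network. A scalar MLP H_theta(q, p) is trained so
# that Hamilton's equations reproduce observed time derivatives.
import torch
import torch.nn as nn

class HNN(nn.Module):
    def __init__(self, dim=1, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, q, p):
        # Scalar Hamiltonian H(q, p) for each batch element.
        return self.net(torch.cat([q, p], dim=-1))

def hamiltonian_dynamics(model, q, p):
    """Predicted (dq/dt, dp/dt) via Hamilton's equations on the learned H."""
    q = q.clone().requires_grad_(True)
    p = p.clone().requires_grad_(True)
    H = model(q, p).sum()
    dHdq, dHdp = torch.autograd.grad(H, (q, p), create_graph=True)
    return dHdp, -dHdq  # dq/dt = dH/dp,  dp/dt = -dH/dq
```

A Lagrangian counterpart would instead parameterize L(q, qĢ‡) and recover accelerations from the Euler-Lagrange equations; exposing both descriptions of the same physics is what makes the Hamiltonian-to-Lagrangian transition reported in the abstract observable.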
Problem

Research questions and friction points this paper is trying to address.

Investigates whether AI models trained on the same task learn the same theory or different theories
Proposes MASS to interpret and quantify agreement among AI-learned theories
Examines how theory convergence among AI scientists changes as training data grows
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hamiltonian-Lagrangian neural networks as AI Scientists
Aggregating training results across many seeds (see the sketch after this list)
Interpretability analysis and application to higher-dimensional problems
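
To make "aggregating across seeds" concrete, here is a hedged sketch reusing the hypothetical HNN and hamiltonian_dynamics definitions from the snippet under the abstract; the seed count, training data, and spread metric are assumptions for illustration, not the paper's protocol. Each seed yields one independently initialized "AI scientist" trained on identical data, and the spread of outcomes across seeds is the kind of quantity the paper's seed-dependence analysis aggregates.

```python
# Hypothetical multi-seed aggregation sketch (not the paper's code).
# Assumes HNN and hamiltonian_dynamics from the earlier snippet are in scope.
import torch

# Fixed training set shared by all seeds: harmonic oscillator, H = (q^2 + p^2)/2,
# so the ground-truth dynamics are dq/dt = p and dp/dt = -q.
g = torch.Generator().manual_seed(0)
q = torch.randn(256, 1, generator=g)
p = torch.randn(256, 1, generator=g)
dq_true, dp_true = p, -q

def train_one_scientist(seed, steps=2000, lr=1e-3):
    torch.manual_seed(seed)  # the seed only affects weight initialization here
    model = HNN(dim=1)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        dq_pred, dp_pred = hamiltonian_dynamics(model, q, p)
        loss = ((dq_pred - dq_true) ** 2 + (dp_pred - dp_true) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return loss.item()

# One "AI scientist" per seed; the across-seed spread of final losses (or of
# the learned weights) is what a seed-dependence analysis would examine.
final_losses = [train_one_scientist(s) for s in range(32)]
print(f"mean={sum(final_losses)/len(final_losses):.4g}  "
      f"spread={max(final_losses)-min(final_losses):.4g}")
```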
Xinghong Fu
Department of Physics, Institute of Artificial Intelligence and Fundamental Interactions, Massachusetts Institute of Technology, Cambridge, USA
Ziming Liu
Department of Physics, Institute of Artificial Intelligence and Fundamental Interactions, Massachusetts Institute of Technology, Cambridge, USA
Max Tegmark
Professor of Physics, MIT