🤖 AI Summary
Traditional CPU-based physics simulators for interferometric gravitational-wave detector design incur prohibitive computational costs, severely limiting optimization efficiency. Method: This paper proposes a high-fidelity neural-network surrogate modeling framework that uses Finesse as the ground-truth simulator and combines automatic differentiation with GPU-accelerated parallelism to build a differentiable, efficient, and iteratively verifiable training loop enabling gradient-driven inverse design. Contribution/Results: The method finds superior designs in under two hours, surpassing designs that conventional optimization needs five days to reach, cutting computation time by over 95% and dramatically expanding the explorable design space. Its core innovation is unifying physical fidelity, differentiability, and hardware acceleration in a single surrogate modeling paradigm, establishing a new pathway for rapid co-optimization of complex precision optical systems.
📝 Abstract
Physics simulators are essential in science and engineering, enabling the analysis, control, and design of complex systems. In experimental sciences, they are increasingly used to automate experimental design, often via combinatorial search and optimization. However, as setups grow more complex, the computational cost of traditional, CPU-based simulators becomes a major limitation. Here, we show how neural surrogate models can significantly reduce reliance on such slow simulators while preserving accuracy. Taking the design of interferometric gravitational wave detectors as a representative example, we train a neural network as a surrogate for the gravitational wave physics simulator Finesse, which was developed by the LIGO community. Even though small changes in physical parameters can alter the output by orders of magnitude, the model rapidly predicts the quality and feasibility of candidate designs, allowing efficient exploration of large design spaces. Our algorithm loops between training the surrogate, inverse designing new experiments, and verifying their properties with the slow simulator for further training. Assisted by auto-differentiation and GPU parallelism, our method proposes high-quality experiments much faster than direct optimization. Solutions that our algorithm finds within hours outperform designs that take the optimizer five days to reach. Though demonstrated on gravitational wave detectors, our framework is broadly applicable to other domains where simulator bottlenecks hinder optimization and discovery.
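The loop the abstract describes (train the surrogate, inverse-design on it with gradients, verify candidates with the slow simulator, retrain) can be sketched in miniature. This is an illustrative toy, not the paper's implementation: a least-squares quadratic fit stands in for the neural surrogate, an analytic gradient stands in for auto-differentiation, and a cheap scalar function (`slow_simulator`) stands in for Finesse; all names and the objective are hypothetical.

```python
import numpy as np

# Hypothetical stand-in for the slow, high-fidelity simulator (e.g. Finesse).
# Returns a scalar "design quality" we wish to maximize; its true optimum
# is at x = 1.5 (purely illustrative).
def slow_simulator(x):
    return -(x - 1.5) ** 2

def fit_surrogate(xs, ys):
    # Least-squares fit of y ~ a*x^2 + b*x + c (stand-in for training a NN).
    A = np.stack([xs ** 2, xs, np.ones_like(xs)], axis=1)
    coeffs, *_ = np.linalg.lstsq(A, ys, rcond=None)
    return coeffs  # (a, b, c)

def surrogate_grad(coeffs, x):
    # Analytic gradient of the surrogate (autodiff in the real framework).
    a, b, _ = coeffs
    return 2 * a * x + b

# Outer loop: fit surrogate -> gradient-based inverse design on the cheap
# surrogate -> verify the candidate with the expensive simulator -> add the
# verified point to the training data and repeat.
xs = np.array([0.0, 1.0, 3.0])
ys = np.array([slow_simulator(v) for v in xs])
x = 0.0
for _ in range(5):
    coeffs = fit_surrogate(xs, ys)
    for _ in range(100):                     # cheap inner optimization
        x = x + 0.05 * surrogate_grad(coeffs, x)
    y = slow_simulator(x)                    # expensive verification call
    xs, ys = np.append(xs, x), np.append(ys, y)

print(round(x, 2))  # candidate design converges near the true optimum 1.5
```

The key economy mirrors the paper's: the expensive simulator is called only once per outer iteration (for verification and new training data), while the many gradient steps of the inverse design run entirely on the cheap, differentiable surrogate.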