🤖 AI Summary
Physics-informed neural networks (PINNs) often suffer from slow convergence and poor stability when solving partial differential equations on irregular domains, primarily due to normalization mismatch, ineffective boundary enforcement, and imbalanced loss terms. Existing coordinate-mapping approaches rely on manually generated meshes and simple geometries, limiting generalizability and integration into end-to-end learning frameworks. This paper proposes JacobiNet: an end-to-end differentiable coordinate transformation framework that employs a lightweight MLP to learn a smooth, invertible mapping from complex physical domains to a unit reference domain. Crucially, it leverages automatic differentiation to implicitly compute the Jacobian determinant—eliminating the need for mesh generation or explicit derivative construction. JacobiNet further supports hard boundary constraints and adaptive loss balancing. Experiments demonstrate significant improvements: relative L² errors reduce to 0.013–0.039 (average 18.3× reduction), vascular-domain accuracy increases by 3.65×, and computational time decreases by over 10× compared to baseline PINNs.
📝 Abstract
Physics-Informed Neural Networks (PINNs) are effective for solving PDEs by incorporating physical laws into the learning process. However, they struggle with irregular boundaries, where inconsistent normalization, inaccurate boundary enforcement, and imbalanced loss terms cause instability and slow convergence. A common remedy is to map the domain to a regular space, but traditional methods rely on case-specific meshes and simple geometries, limiting their compatibility with modern learning frameworks. To overcome these limitations, we introduce JacobiNet, a neural-network-based coordinate transformation method that learns continuous, differentiable mappings from supervised point pairs. Built on lightweight MLPs, JacobiNet allows direct Jacobian computation via autograd and integrates seamlessly with downstream PINNs, enabling end-to-end differentiable PDE solving without meshing or explicit Jacobian derivation. JacobiNet addresses normalization challenges, facilitates hard enforcement of boundary conditions, and mitigates the long-standing imbalance among loss terms. It reduces the relative L² error from 0.287–0.637 to 0.013–0.039, an average accuracy improvement of 18.3×. In vessel-like domains, it enables rapid mapping for unseen geometries, improving prediction accuracy by 3.65× with over a 10× speedup, demonstrating its generalization, accuracy, and efficiency.
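The core mechanism described above (a lightweight MLP mapping coordinates, with the Jacobian obtained through automatic differentiation rather than meshing) can be sketched roughly as follows. This is a minimal PyTorch illustration under our own assumptions, not the paper's actual implementation; the class and function names (`CoordMap`, `jacobian_det`) are hypothetical:

```python
import torch

# Hypothetical sketch: a small MLP maps physical coordinates (x, y) on an
# irregular domain to reference coordinates (xi, eta) on a unit domain.
# Autograd then yields the 2x2 Jacobian d(xi, eta)/d(x, y) per point,
# with no mesh generation or hand-derived derivatives.
class CoordMap(torch.nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(2, hidden), torch.nn.Tanh(),
            torch.nn.Linear(hidden, hidden), torch.nn.Tanh(),
            torch.nn.Linear(hidden, 2),
        )

    def forward(self, xy):
        return self.net(xy)

def jacobian_det(model, xy):
    """Per-point Jacobian of the learned map and its determinant via autograd."""
    xy = xy.requires_grad_(True)
    out = model(xy)  # (N, 2): reference coordinates (xi, eta)
    rows = [torch.autograd.grad(out[:, i].sum(), xy, create_graph=True)[0]
            for i in range(2)]      # row i = gradient of output component i
    J = torch.stack(rows, dim=1)    # (N, 2, 2) Jacobian per point
    return J, torch.linalg.det(J)   # (N,) determinant per point

model = CoordMap()
pts = torch.rand(5, 2)              # sample points in the physical domain
J, detJ = jacobian_det(model, pts)
print(J.shape, detJ.shape)
```

In a full pipeline, `detJ` (and the entries of `J`) would enter the transformed PDE residual on the reference domain, so the coordinate map and the downstream PINN remain jointly differentiable end to end.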