🤖 AI Summary
Standard attention mechanisms in Transformer-based neural operators incur quadratic computational complexity, hindering scalability for PDE solving. Method: This work identifies that the Physics-Attention in Transolver is a special case of linear attention and reveals that its performance gain stems from slicing and unslicing operations—not inter-slice interactions. Leveraging this insight, we propose a two-step transformation to unify it with standard linear attention, yielding LinearNO: a lightweight, efficient neural operator built upon a slicing-projection → linear-attention → unslicing-reconstruction paradigm within the Transformer framework. Contribution/Results: LinearNO achieves state-of-the-art performance on six canonical PDE benchmarks, reducing parameters by 40.0% and computational cost by 36.2% on average. It also significantly outperforms existing methods on industrial datasets—AirfRANS and Shape-Net Car—demonstrating strong generalization and practical efficacy.
📝 Abstract
Recent advances in Transformer-based Neural Operators have enabled significant progress in data-driven solvers for Partial Differential Equations (PDEs). Much current research focuses on reducing the quadratic complexity of attention, which otherwise limits training and inference efficiency. Among these works, Transolver stands out as a representative method that introduces Physics-Attention to reduce computational costs. Physics-Attention projects grid points into slices for slice attention, then maps them back through deslicing. However, we observe that Physics-Attention can be reformulated as a special case of linear attention, and that the slice attention may even hurt model performance. Based on these observations, we argue that its effectiveness primarily arises from the slice and deslice operations rather than from interactions between slices. Building on this insight, we propose a two-step transformation that redesigns Physics-Attention into a canonical linear attention, yielding the Linear Attention Neural Operator (LinearNO). Our method achieves state-of-the-art performance on six standard PDE benchmarks, while reducing the number of parameters by an average of 40.0% and computational cost by 36.2%. Additionally, it delivers superior performance on two challenging, industrial-level datasets: AirfRANS and Shape-Net Car.
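To make the slice → attention → deslice pipeline concrete, here is a minimal NumPy sketch of the general pattern the abstract describes: grid-point features are soft-assigned to a small number of slice tokens, a kernelized linear attention is applied over those tokens (avoiding any N×N matrix), and the results are scattered back to the points. All weight names, the elu+1 feature map, and the mean-pooled slice tokens are illustrative assumptions, not the paper's exact LinearNO formulation.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def slice_linear_attention(x, w_slice, w_q, w_k, w_v):
    """Sketch of slice -> linear attention -> deslice (illustrative, not the paper's code).

    x:        (N, d) features at N grid points
    w_slice:  (d, M) projection producing per-point slice-assignment logits
    w_q/k/v:  (d, d) query/key/value projections
    """
    # 1) Slicing: soft-assign each of N points to M slice tokens.
    w = softmax(x @ w_slice, axis=1)                      # (N, M), rows sum to 1
    slices = (w.T @ x) / (w.sum(axis=0)[:, None] + 1e-9)  # (M, d) weighted slice tokens

    # 2) Linear attention over slice tokens with positive feature map
    #    phi(t) = elu(t) + 1; the (d, d) summary phi(K)^T V replaces any
    #    quadratic attention matrix.
    q, k, v = slices @ w_q, slices @ w_k, slices @ w_v
    phi = lambda t: np.where(t > 0, t + 1.0, np.exp(t))
    kv = phi(k).T @ v                                     # (d, d)
    denom = phi(q) @ phi(k).sum(axis=0, keepdims=True).T  # (M, 1)
    z = (phi(q) @ kv) / (denom + 1e-9)                    # (M, d)

    # 3) Deslicing: scatter slice outputs back to the N grid points.
    return w @ z                                          # (N, d)
```

Because the point count N only enters through the slice-assignment matrix `w`, the overall cost is linear in N, which is the efficiency property the abstract attributes to this family of methods.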