🤖 AI Summary
Standard attention mechanisms in Transformers incur O(N²) computational complexity, limiting both efficiency and receptive field when processing large-scale physical point clouds. To address this, we propose Native Sparse Attention (NSA), a sparse attention mechanism adapted for unstructured 3D point clouds, integrated into the Erwin hierarchical architecture. NSA employs a learnable, hierarchical sparsification strategy that preserves local geometric structure while reducing attention complexity to near-linear time, thereby significantly expanding the effective receptive field. Experiments on three scientific datasets (cosmological simulations, molecular dynamics, and atmospheric pressure modeling) demonstrate that our method matches or surpasses the original Erwin's prediction accuracy while substantially reducing GPU memory consumption and training time; we additionally reproduce the original Erwin baseline results. Our core contribution is the first systematic extension of trainable sparse attention to unordered 3D point clouds, establishing an efficient and scalable paradigm for modeling high-dimensional physical systems in scientific AI.
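To make the complexity argument concrete, here is a minimal toy sketch (not the actual NSA or Erwin implementation) contrasting dense O(N²) attention with a local-window variant over points assumed pre-sorted by spatial locality, e.g. along a space-filling curve. The window size `w` and all function names are illustrative assumptions; real NSA uses learned, hierarchical sparsification rather than a fixed window.

```python
# Illustrative sketch only: restricting each query to a small spatial
# neighborhood drops attention cost from O(N^2) to roughly O(N * w).
# This is NOT the NSA algorithm, just the complexity idea behind it.
import numpy as np

def dense_attention(q, k, v):
    # Standard softmax attention: materializes an N x N score matrix.
    scores = q @ k.T / np.sqrt(q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

def local_sparse_attention(q, k, v, w=8):
    # Each query attends only to ~w neighbors around its index; points are
    # assumed pre-sorted so index proximity approximates spatial proximity.
    n, d = q.shape
    out = np.empty_like(v)
    for i in range(n):
        lo, hi = max(0, i - w // 2), min(n, i + w // 2 + 1)
        s = q[i] @ k[lo:hi].T / np.sqrt(d)
        e = np.exp(s - s.max())
        out[i] = (e / e.sum()) @ v[lo:hi]
    return out

rng = np.random.default_rng(0)
n, d = 64, 16
q, k, v = (rng.standard_normal((n, d)) for _ in range(3))
print(local_sparse_attention(q, k, v).shape)  # (64, 16)
```

With `w` covering all indices, the local variant reduces to dense attention, which is a convenient sanity check; the interesting regime is `w << N`, where compute and memory scale near-linearly in N.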
📝 Abstract
Unlocking the potential of transformers on large physical systems depends on overcoming the quadratic scaling of the attention mechanism. This work combines the Erwin architecture with the Native Sparse Attention (NSA) mechanism to improve the efficiency and receptive field of transformer models for large-scale physical systems. We adapt the NSA mechanism, originally designed for sequences, to non-sequential data, implement the resulting Erwin NSA model, and evaluate it on three datasets from the physical sciences -- cosmology simulations, molecular dynamics, and air pressure modeling -- achieving performance that matches or exceeds that of the original Erwin model. Additionally, we reproduce the experimental results from the Erwin paper to validate our baseline implementation.