🤖 AI Summary
To address the challenge of particle trajectory matching and classification in sparse, dual-view (XZ/YZ) images from the NOvA experiment, this work proposes the Heterogeneous Point-set Transformer—a novel architecture that introduces point-set transformers to particle physics detection for the first time, enabling joint cross-view modeling. Leveraging a sparse matrix-based design, the model employs attention mechanisms to jointly encode geometric and semantic features from both views, overcoming information fragmentation and excessive memory consumption inherent in conventional CNNs that process views independently. Evaluated on real NOvA data, the method achieves an AUC of 96.8%, outperforming independent-view CNNs by 11.4% while reducing memory usage by over 90%. The approach delivers high accuracy, low computational resource requirements, and strong generalization—establishing a new paradigm for segmentation of multi-view sparse detector data.
📝 Abstract
NOvA is a long-baseline neutrino oscillation experiment that detects neutrino particles from the NuMI beam at Fermilab. Before data from this experiment can be used in analyses, raw hits in the detector must be matched to their source particles, and the type of each particle must be identified. This task has commonly been done using a mix of traditional clustering approaches and convolutional neural networks (CNNs). Due to the construction of the detector, the data is presented as two sparse 2D images: an XZ and a YZ view of the detector, rather than a 3D representation. We propose a point set neural network that operates on the sparse matrices with an operation that mixes information from both views. Our model uses less than 10% of the memory required using previous methods while achieving a 96.8% AUC score, a higher score than obtained when both views are processed independently (85.4%).