🤖 AI Summary
Charged particle track reconstruction is a core computational bottleneck in high-energy physics experiments. Existing graph neural network (GNN)-based approaches suffer from costly graph construction and irregular memory access patterns, resulting in low throughput. Although the Hashing-based Efficient Point Transformer (HEPT) leverages locality-sensitive hashing (LSH) to achieve near-linear complexity, it relies on post-hoc clustering (e.g., DBSCAN), which blocks end-to-end differentiability and optimization. This paper introduces HEPTv2: a hardware-friendly, LSH-accelerated point cloud transformer with a learnable lightweight decoder that directly outputs track assignments, eliminating the post-processing step. Evaluated on the TrackML dataset, HEPTv2 achieves an inference latency of approximately 28 ms per event on a single NVIDIA A100 GPU while maintaining competitive tracking efficiency. To our knowledge, HEPTv2 is the first method to deliver track reconstruction that is simultaneously ultra-fast, end-to-end trainable, and high-fidelity.
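To make the decoder idea concrete, here is a minimal, hypothetical sketch of a direct-assignment head: learnable track queries score every hit embedding, and each hit is assigned to its highest-scoring query by a plain argmax, replacing a post-hoc DBSCAN pass. This is an illustrative assumption, not the paper's actual architecture; the class name, `max_tracks` parameter, and scoring scheme are all invented for the example.

```python
import torch
import torch.nn as nn

class DirectAssignmentDecoder(nn.Module):
    """Hypothetical sketch (not HEPTv2's exact decoder): learnable
    track queries score each hit embedding; a hit's track is the
    argmax over query scores, so no clustering step is needed."""
    def __init__(self, dim=32, max_tracks=50):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(max_tracks, dim))
        self.proj = nn.Linear(dim, dim)

    def forward(self, hit_emb):                           # (n_hits, dim)
        logits = self.proj(hit_emb) @ self.queries.t()    # (n_hits, max_tracks)
        return logits.argmax(dim=-1), logits              # hard assignments + logits

dec = DirectAssignmentDecoder(dim=32, max_tracks=50)
assign, logits = dec(torch.randn(200, 32))
print(assign.shape, logits.shape)   # torch.Size([200]) torch.Size([200, 50])
```

Because the logits are produced by ordinary differentiable layers, a training loss can be applied to them directly, which is what makes the pipeline end-to-end trainable, unlike a DBSCAN stage.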
📝 Abstract
Charged particle track reconstruction is a foundational task in collider experiments and the main computational bottleneck in particle reconstruction. Graph neural networks (GNNs) have shown strong performance for this problem, but costly graph construction, irregular computations, and random memory access patterns substantially limit their throughput. The recently proposed Hashing-based Efficient Point Transformer (HEPT) offers theoretically guaranteed near-linear complexity for large point cloud processing via locality-sensitive hashing (LSH) in attention computations; however, its evaluations have largely focused on embedding quality, and the object condensation pipeline on which HEPT relies requires a post-hoc clustering step (e.g., DBSCAN) that can dominate runtime. In this work, we make two contributions. First, we present a unified, fair evaluation of physics tracking performance for HEPT and a representative GNN-based pipeline under the same dataset and metrics. Second, we introduce HEPTv2 by extending HEPT with a lightweight decoder that eliminates the clustering stage and directly predicts track assignments. This modification preserves HEPT's regular, hardware-friendly computations while enabling ultra-fast end-to-end inference. On the TrackML dataset, optimized HEPTv2 achieves approximately 28 ms per event on an A100 while maintaining competitive tracking efficiency. These results position HEPTv2 as a practical, scalable alternative to GNN-based pipelines for fast tracking.
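The LSH-in-attention idea can be sketched in a few lines. In this toy version (an illustration of the general technique, not HEPT's actual kernel), hits are ordered by a random projection so that nearby points tend to land in the same fixed-size bucket, and attention is computed only within each bucket. This yields the regular, hardware-friendly computation pattern the abstract describes, with cost linear in the number of buckets; all names and sizes here are invented for the example.

```python
import torch

def lsh_bucketed_attention(x, bucket_size=64, seed=0):
    """Toy LSH-bucketed attention: sort points by a random projection,
    chunk into equal-size buckets, attend only within each bucket."""
    n, d = x.shape
    g = torch.Generator().manual_seed(seed)
    proj = torch.randn(d, 1, generator=g)        # random hyperplane hash
    order = (x @ proj).squeeze(-1).argsort()     # similar hashes become neighbors
    xs = x[order]
    pad = (-n) % bucket_size                     # pad so buckets divide evenly
    if pad:
        xs = torch.cat([xs, torch.zeros(pad, d)], dim=0)
    b = xs.view(-1, bucket_size, d)              # (num_buckets, bucket_size, d)
    attn = torch.softmax(b @ b.transpose(1, 2) / d ** 0.5, dim=-1)
    out = (attn @ b).reshape(-1, d)[:n]
    inv = torch.empty_like(order)                # undo the sort so outputs
    inv[order] = torch.arange(n)                 # align with the input order
    return out[inv]

hits = torch.randn(1000, 16)     # toy point cloud of detector hits
emb = lsh_bucketed_attention(hits)
print(emb.shape)                 # torch.Size([1000, 16])
```

With fixed-size buckets, every bucket does the same amount of dense work, which is what avoids the irregular computation and random memory access that limit GNN pipelines.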