🤖 AI Summary
To address resolution degradation and limited conditional-generation capability when modeling raw pixel-time sequences from the imaging Cherenkov detectors of the Electron-Ion Collider (EIC), this work introduces the first domain-specific foundation model tailored to nuclear physics experiments. It proposes a dual-vocabulary architecture, pairing discrete spatial tokens with continuous kinematic variates, and fuses the two streams with causal multi-head cross-attention (CMHCA). By combining VQ-VAE-style discrete tokenization with high-resolution continuous tokenization that avoids joint vocabulary inflation, the model supports high-fidelity sequence generation and context-aware conditional modeling. Closure-test evaluation on High Performance DIRC data validates generation fidelity, and when transferred to pion/kaon identification, fine-tuning yields strong reconstruction performance, demonstrating improved semantic representation and physics-informed reasoning from low-level detector data.
📝 Abstract
We present a (proto) Foundation Model for Nuclear Physics, capable of operating on low-level detector inputs from Imaging Cherenkov Detectors at the future Electron-Ion Collider. To address limitations in existing next-token-prediction approaches, namely resolution loss from VQ-VAE tokenization and the lack of conditional generation, we propose three key innovations: (i) separate vocabularies for discrete spatial features and continuous variates, combined via Causal Multi-Head Cross-Attention (CMHCA); (ii) continuous kinematic conditioning through prepended context embeddings; and (iii) simple, scalable, high-resolution continuous variate tokenization without joint vocabulary inflation. Our model enables fast, high-fidelity generation of pixel and time sequences for Cherenkov photons, validated through closure tests on the High Performance DIRC. We also show that the model generalizes to reconstruction tasks such as pion and kaon identification, where fine-tuning further improves performance.
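To make the fusion mechanism concrete, below is a minimal NumPy sketch of causal multi-head cross-attention with prepended context embeddings, in the spirit of innovations (i) and (ii): queries come from one stream (e.g. embedded discrete spatial tokens), keys/values come from a second stream (e.g. continuous variate embeddings) with a few kinematic context vectors prepended, and a causal mask lets step t see only the context slots and key/value positions up to t. This is an illustrative sketch under stated assumptions, not the paper's implementation; the random projection matrices stand in for learned weights, and all function and argument names here are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def causal_cross_attention(q_seq, kv_seq, n_heads=4, n_ctx=0, seed=0):
    """Illustrative causal multi-head cross-attention (not the paper's code).

    q_seq:  (T, d) query stream, e.g. embedded discrete spatial tokens.
    kv_seq: (T + n_ctx, d) key/value stream, e.g. continuous variate
            embeddings with n_ctx prepended kinematic context vectors.
    Query position t attends to the n_ctx context slots plus
    key/value positions <= t (causality across modalities).
    """
    T, d = q_seq.shape
    assert d % n_heads == 0
    dh = d // n_heads
    rng = np.random.default_rng(seed)
    # Random projections stand in for learned Wq/Wk/Wv/Wo parameters.
    Wq, Wk, Wv, Wo = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(4))
    Q = (q_seq @ Wq).reshape(T, n_heads, dh)
    K = (kv_seq @ Wk).reshape(-1, n_heads, dh)
    V = (kv_seq @ Wv).reshape(-1, n_heads, dh)
    S = K.shape[0]
    # Causal mask: context slots always visible, then positions <= t.
    mask = np.zeros((T, S), dtype=bool)
    mask[:, :n_ctx] = True
    for t in range(T):
        mask[t, n_ctx:n_ctx + t + 1] = True
    scores = np.einsum('thd,shd->hts', Q, K) / np.sqrt(dh)   # (h, T, S)
    scores = np.where(mask[None], scores, -1e9)
    attn = softmax(scores, axis=-1)
    out = np.einsum('hts,shd->thd', attn, V).reshape(T, d)
    return out @ Wo, attn
```

In a full model this block would sit inside each decoder layer, with the prepended context embeddings carrying the continuous kinematic conditioning so that generation is conditional without inflating the discrete vocabulary.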