🤖 AI Summary
This work addresses the escalating computational cost of traditional Monte Carlo detector simulation in high-energy physics experiments, driven by increasing luminosity, which necessitates efficient and generalizable alternatives. The authors propose a calorimeter foundation model based on the next-token prediction paradigm, leveraging a Transformer architecture augmented with Mixture-of-Experts (MoE) pretraining and parameter-efficient fine-tuning techniques such as LoRA. A modular vocabulary is introduced to support the modeling of multiple particle types. The approach enables incremental scaling across diverse materials, particle species, and detector configurations without catastrophic forgetting, generating high-fidelity electromagnetic showers across various absorber materials. Its computational efficiency matches that of existing generative models, demonstrating strong feasibility and generalization capability for simulation tasks in high-energy physics.
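The incremental-expansion idea can be illustrated with a toy Mixture-of-Experts routing layer. This is an illustrative assumption about the design described above, not the paper's actual implementation: each token embedding is routed to the highest-scoring expert, and a new absorber material is supported by appending a fresh expert module while the existing experts stay untouched.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

class MoELayer:
    """Toy top-1 Mixture-of-Experts layer (hypothetical sketch)."""

    def __init__(self, experts, gate_weights):
        self.experts = experts            # list of callables: token -> token
        self.gate_weights = gate_weights  # one gating score vector per expert

    def forward(self, token):
        # Top-1 gating: route the token to the expert with the highest score.
        scores = [dot(w, token) for w in self.gate_weights]
        best = max(range(len(scores)), key=scores.__getitem__)
        return self.experts[best](token)

    def add_expert(self, expert, gate_weight):
        # Incremental expansion: existing experts are not modified,
        # so previously learned behavior is preserved (no forgetting).
        self.experts.append(expert)
        self.gate_weights.append(gate_weight)

# Two "material" experts, each a trivial elementwise transform.
layer = MoELayer(
    experts=[lambda t: [x * 2.0 for x in t], lambda t: [x + 1.0 for x in t]],
    gate_weights=[[1.0, 0.0], [0.0, 1.0]],
)
print(layer.forward([3.0, 0.5]))  # routed to expert 0 -> [6.0, 1.0]

# A third expert for a "new material" is added without retraining the others.
layer.add_expert(lambda t: [-x for x in t], [5.0, 5.0])
print(layer.forward([3.0, 0.5]))  # now routed to the new expert -> [-3.0, -0.5]
```

In a real model the experts would be trained feed-forward sub-networks and the gate a learned router, but the structural point is the same: expansion is additive, so the frozen experts' outputs for previously seen inputs are unchanged.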
📝 Abstract
Modern particle physics experiments face an increasing demand for high-fidelity detector simulation as luminosities rise and computational requirements approach the limits of available resources. Deep generative models have emerged as promising surrogates for traditional Monte Carlo simulation, with recent advances drawing inspiration from large language models (LLMs) and next-token prediction paradigms. In this work, we introduce a generalizable foundation model for calorimetry built on next-token transformer backbones, designed to support modular adaptation across materials, particle species, and detector configurations. Our approach combines Mixture-of-Experts pre-training with parameter-efficient fine-tuning strategies to enable controlled, additive model expansion without catastrophic forgetting. A pre-trained backbone is trained to generate electromagnetic showers across multiple absorber materials, while new materials are incorporated through the addition and tuning of lightweight expert modules. Extensions to new particle types are achieved via parameter-efficient fine-tuning and modular vocabularies, preserving the integrity of the base model. This design enables efficient, incremental knowledge integration as new simulation datasets become available, a critical requirement in realistic detector-development workflows. In addition, we demonstrate that next-token calorimeter models are computationally competitive with standard generative approaches under established LLM optimization procedures. These results establish next-token architectures as a viable path toward extensible, physics-aware foundation models for calorimetry and future high-energy physics experiments.
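The parameter-efficient fine-tuning described above can be sketched with a minimal LoRA-style low-rank update. All names and shapes here are illustrative assumptions, not the paper's implementation: the frozen base weight `W` is augmented by a low-rank product `B @ A`, so adapting the model, e.g. to a new particle type, trains only the small adapter matrices while `W` is never modified.

```python
def matmul(X, Y):
    """Plain nested-list matrix multiply (X: m x k, Y: k x n)."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def lora_weight(W, A, B, alpha=1.0):
    """Effective weight W + alpha * (B @ A); the base W stays frozen."""
    delta = matmul(B, A)  # rank-r update, r = number of rows of A
    return [[W[i][j] + alpha * delta[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

W = [[1.0, 0.0], [0.0, 1.0]]  # frozen 2x2 base weight
A = [[1.0, 1.0]]              # rank-1 adapter, 1x2 (trainable)
B = [[0.5], [0.0]]            # 2x1 (trainable)

W_eff = lora_weight(W, A, B)
print(W_eff)  # [[1.5, 0.5], [0.0, 1.0]]
```

The appeal for incremental workflows is that each adaptation adds only `r * (m + n)` parameters per `m x n` weight matrix, and removing the adapter recovers the base model exactly, which is how fine-tuning can preserve the backbone's integrity.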