🤖 AI Summary
This study introduces vision-language models (VLMs) to neutrino event classification in high-energy physics, a first-of-its-kind application that addresses the limited semantic understanding and contextual modeling of conventional convolutional neural networks (CNNs). Methodologically, we develop an end-to-end differentiable VLM architecture based on LLaMA 3.2 that integrates pixelated detector images with structured semantic prompts, enabling cross-modal feature alignment and multi-step reasoning. Experimental results demonstrate that our model matches or surpasses state-of-the-art CNN baselines on accuracy, precision, recall, and AUC-ROC. Key contributions include: (1) the pioneering adaptation of VLMs to particle physics event identification; (2) empirical validation of multimodal representations for modeling complex physical processes; and (3) a novel analytical paradigm for high-energy physics data that offers enhanced interpretability and stronger generalization.
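To make the image-plus-prompt setup concrete, the following is a minimal sketch, not the authors' released pipeline, of querying a LLaMA 3.2 vision model with a detector event display and a structured semantic prompt through the Hugging Face `transformers` Mllama API. The file name, prompt wording, and three-way label set are illustrative assumptions, not taken from the paper.

```python
# A minimal sketch (not the authors' code): prompt a LLaMA 3.2 vision model
# with a pixelated event display plus a structured semantic prompt, using the
# Hugging Face `transformers` Mllama API (transformers >= 4.45).
import torch
from PIL import Image
from transformers import AutoProcessor, MllamaForConditionalGeneration

model_id = "meta-llama/Llama-3.2-11B-Vision-Instruct"
model = MllamaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

# Hypothetical file: a detector event display rendered as an image.
event_image = Image.open("event_display.png")

# Structured semantic prompt: detector context plus a constrained label set
# (wording and labels here are illustrative, not from the paper).
messages = [
    {"role": "user", "content": [
        {"type": "image"},
        {"type": "text", "text": (
            "This is a pixelated neutrino event display. Classify the "
            "interaction as one of: nu_e CC, nu_mu CC, NC. "
            "Answer with the label only."
        )},
    ]},
]
input_text = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(event_image, input_text,
                   add_special_tokens=False, return_tensors="pt").to(model.device)

# Greedy decoding keeps the answer deterministic; a fine-tuned checkpoint
# would be loaded through the same from_pretrained call.
output = model.generate(**inputs, max_new_tokens=8, do_sample=False)
print(processor.decode(output[0][inputs["input_ids"].shape[-1]:],
                       skip_special_tokens=True))
```

A fine-tuned, end-to-end trained model of the kind described above would reuse this same inference path; only the checkpoint changes.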
📝 Abstract
Recent progress in large language models (LLMs) has shown strong potential for multimodal reasoning beyond natural language. In this work, we explore the use of a fine-tuned Vision-Language Model (VLM), based on LLaMA 3.2, for classifying neutrino interactions from pixelated detector images in high-energy physics (HEP) experiments. We benchmark its performance against an established CNN baseline used in experiments such as NOvA and DUNE, evaluating classification accuracy, precision, recall, and AUC-ROC. Our results show that the VLM not only matches or exceeds CNN performance but also enables richer reasoning and better integration of auxiliary textual or semantic context. These findings suggest that VLMs offer a promising general-purpose backbone for event classification in HEP, paving the way for multimodal approaches in experimental neutrino physics.
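The benchmark metrics named in the abstract are standard classification measures; below is a brief sketch of how they can be computed with scikit-learn. The label and probability arrays are placeholders standing in for real model outputs, not results from the paper, and the three-class setup is an assumption for illustration.

```python
# Sketch of the benchmark metrics listed above, computed with scikit-learn.
# The arrays are placeholders, not data or results from the paper.
import numpy as np
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, roc_auc_score)

y_true = np.array([0, 1, 2, 1, 0])   # ground-truth class indices
y_pred = np.array([0, 1, 2, 1, 1])   # predicted class indices (argmax of scores)
y_score = np.array([                 # per-class probabilities (rows sum to 1)
    [0.8, 0.1, 0.1],
    [0.1, 0.7, 0.2],
    [0.2, 0.3, 0.5],
    [0.1, 0.8, 0.1],
    [0.3, 0.5, 0.2],
])

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred, average="macro"))
print("recall   :", recall_score(y_true, y_pred, average="macro"))
# Multi-class AUC-ROC via one-vs-rest averaging over classes.
print("auc-roc  :", roc_auc_score(y_true, y_score, multi_class="ovr"))
```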