AI Summary
Existing Lorentz-equivariant neural networks rely on custom-designed layers, hindering integration with general-purpose architectures. Method: We propose the Lorentz Local Canonicalization (LLoCa) framework, the first method enabling exact Lorentz equivariance for arbitrary backbone networks without structural modification. LLoCa establishes a generic equivariant paradigm grounded in locally calibrated reference frames, extends geometric message passing to the non-compact Lorentz group, and unifies data augmentation as a special case of reference-frame selection. Integrating Lorentz representation theory, spacetime-propagated tensor features, and equivariant graph networks, we design the LLoCa-Transformer architecture. Contribution/Results: On particle physics benchmarks, LLoCa achieves state-of-the-art accuracy while accelerating inference by 4× and reducing computational cost by 5-100× in FLOPs compared to prior Lorentz-equivariant models.
Abstract
Lorentz-equivariant neural networks are becoming the leading architectures for high-energy physics. Current implementations rely on specialized layers, limiting architectural choices. We introduce Lorentz Local Canonicalization (LLoCa), a general framework that renders any backbone network exactly Lorentz-equivariant. Using equivariantly predicted local reference frames, we construct LLoCa-transformers and graph networks. We adapt a recent approach to geometric message passing to the non-compact Lorentz group, allowing propagation of space-time tensorial features. Data augmentation emerges from LLoCa as a special choice of reference frame. Our models surpass state-of-the-art accuracy on relevant particle physics tasks, while being $4\times$ faster and using $5$-$100\times$ fewer FLOPs.
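To illustrate the canonicalization principle behind the abstract, here is a minimal numpy sketch. It is not the paper's method: instead of LLoCa's equivariantly *predicted* per-particle frames, it builds a single hand-crafted frame (boost to the rest frame of the total momentum, then a rotation fixed by the first two particles). Mapping inputs into such an equivariantly constructed frame makes them exactly invariant under global Lorentz transformations, so any backbone applied to the canonicalized inputs is automatically invariant; the helper names (`boost_to_rest`, `canonical_frame`, `canonicalize`) are my own.

```python
import numpy as np

def boost_to_rest(P):
    """Pure boost taking a timelike 4-vector P=(E,px,py,pz) to its rest frame."""
    E, p = P[0], P[1:]
    m = np.sqrt(E**2 - p @ p)            # invariant mass, assumed > 0
    beta, gamma = p / E, E / np.sqrt(E**2 - p @ p)
    L = np.eye(4)
    L[0, 0] = gamma
    L[0, 1:] = L[1:, 0] = -gamma * beta
    L[1:, 1:] += (gamma - 1.0) * np.outer(beta, beta) / (beta @ beta)
    return L

def canonical_frame(momenta):
    """Equivariant frame: rest-frame boost of the total momentum, then a
    rotation fixed by Gram-Schmidt on the first two particles. A hand-crafted
    stand-in for LLoCa's learned local reference frames."""
    B = boost_to_rest(momenta.sum(axis=0))
    q = momenta @ B.T                    # momenta in the rest frame
    e3 = q[0, 1:] / np.linalg.norm(q[0, 1:])
    v = q[1, 1:] - (q[1, 1:] @ e3) * e3
    e1 = v / np.linalg.norm(v)
    e2 = np.cross(e3, e1)
    R = np.eye(4)
    R[1:, 1:] = np.stack([e1, e2, e3])   # rotate frame axes onto x, y, z
    return R @ B

def canonicalize(momenta):
    """Invariant features: a global Lorentz transform of the inputs rotates
    the constructed frame along with them and cancels out exactly."""
    return momenta @ canonical_frame(momenta).T

# Check exact invariance under a random boost + rotation.
rng = np.random.default_rng(0)
p3 = rng.normal(size=(5, 3))
momenta = np.concatenate([np.sqrt(1.0 + (p3**2).sum(1, keepdims=True)), p3], axis=1)

phi = 0.7                                # rapidity of a boost along x
boost = np.eye(4)
boost[0, 0] = boost[1, 1] = np.cosh(phi)
boost[0, 1] = boost[1, 0] = np.sinh(phi)
theta = 1.1                              # rotation about z
rot = np.eye(4)
rot[1, 1] = rot[2, 2] = np.cos(theta)
rot[1, 2], rot[2, 1] = -np.sin(theta), np.sin(theta)
Lam = rot @ boost

assert np.allclose(canonicalize(momenta @ Lam.T), canonicalize(momenta), atol=1e-7)
```

With frames constructed (or, in LLoCa, predicted) equivariantly, applying a transform $\Lambda$ to the inputs changes the frame by the same $\Lambda$ up to a Wigner rotation that the Gram-Schmidt step absorbs, so the canonicalized features are unchanged. Sampling the residual frame freedom at random instead recovers data augmentation, matching the abstract's claim that augmentation is a special choice of reference frame.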