Lorentz Local Canonicalization: How to Make Any Network Lorentz-Equivariant

πŸ“… 2025-05-26
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Existing Lorentz-equivariant neural networks rely on custom-designed layers, hindering integration with general-purpose architectures. Method: We propose the Lorentz Local Canonicalization (LLoCa) framework, which makes any backbone network exactly Lorentz-equivariant without structural modification. LLoCa is built on equivariantly predicted local reference frames, extends geometric message passing to the non-compact Lorentz group, and recovers data augmentation as a special choice of reference frame. Combining Lorentz representation theory and spacetime tensorial features, the framework yields LLoCa-transformers and LLoCa graph networks. Contribution/Results: On particle physics benchmarks, LLoCa models achieve state-of-the-art accuracy while running 4× faster and using 5–100× fewer FLOPs than prior Lorentz-equivariant models.

πŸ“ Abstract
Lorentz-equivariant neural networks are becoming the leading architectures for high-energy physics. Current implementations rely on specialized layers, limiting architectural choices. We introduce Lorentz Local Canonicalization (LLoCa), a general framework that renders any backbone network exactly Lorentz-equivariant. Using equivariantly predicted local reference frames, we construct LLoCa-transformers and graph networks. We adapt a recent approach to geometric message passing to the non-compact Lorentz group, allowing propagation of space-time tensorial features. Data augmentation emerges from LLoCa as a special choice of reference frame. Our models surpass state-of-the-art accuracy on relevant particle physics tasks, while being $4\times$ faster and using $5$-$100\times$ fewer FLOPs.
Problem

Research questions and friction points this paper is trying to address.

Enabling exact Lorentz equivariance in arbitrary neural network architectures
Overcoming the limitations of specialized equivariant layers in current implementations
Improving efficiency and accuracy on particle physics tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

General framework that makes any backbone network Lorentz-equivariant
Equivariantly predicts local reference frames used to canonicalize the inputs
Adapts geometric message passing to the non-compact Lorentz group
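The core idea of local canonicalization can be illustrated with a minimal sketch: derive a reference frame equivariantly from the inputs, transform all particles into that frame, and feed the result to an arbitrary backbone, whose scalar outputs are then invariant by construction. The sketch below uses the simplest possible frame choice, a pure boost into the rest frame of the total four-momentum; the paper's frames are learned and also fix the rotational degrees of freedom, which this toy version omits. All function names are illustrative and not taken from the paper's code.

```python
import numpy as np

def boost_to_rest_frame(P):
    """Pure Lorentz boost matrix L such that L @ P = (M, 0, 0, 0),
    for a timelike four-momentum P = (E, px, py, pz) with M > 0.
    Metric convention: diag(+, -, -, -)."""
    E, p = P[0], P[1:]
    M = np.sqrt(E**2 - p @ p)          # invariant mass of P
    beta = p / E                       # boost velocity
    gamma = E / M
    b2 = beta @ beta
    L = np.eye(4)
    L[0, 0] = gamma
    L[0, 1:] = L[1:, 0] = -gamma * beta
    if b2 > 0:
        # spatial block: identity plus (gamma - 1) projector onto beta
        L[1:, 1:] += (gamma - 1.0) * np.outer(beta, beta) / b2
    return L

def canonicalize(momenta):
    """Transform all particle four-momenta (shape (N, 4)) into the rest
    frame of their sum. Since the frame is derived equivariantly from the
    inputs, a backbone applied to the result is boost-invariant."""
    L = boost_to_rest_frame(momenta.sum(axis=0))
    return momenta @ L.T
```

After `canonicalize`, the total spatial momentum vanishes, and each particle's energy equals the Lorentz invariant $P_\text{tot} \cdot p_i / M$, so boosting the raw inputs leaves these canonicalized energies unchanged; this is the invariance that a full LLoCa-style frame (which additionally fixes rotations) extends to all components.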
πŸ”Ž Similar Papers
No similar papers found.