Learning Equivariant Functions via Quadratic Forms

📅 2025-09-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the problem of automatically discovering and modeling equivariant functions with respect to latent symmetry groups—particularly orthogonal groups—directly from data. The proposed method introduces a quadratic-form-based learning framework, $x^\top A x$, which decomposes equivariant functions into norm-invariant and scale-invariant components; it further generalizes to multi-input settings, enabling explicit separation of angular (rotation-invariant) and Gram-matrix-dependent components. By integrating symmetric matrix diagonalization, group invariance theory, and structured neural network design, the approach guarantees strict orthogonal equivariance while supporting inference of unknown group structures. Experiments on polynomial regression, top-quark tagging, and moment-of-inertia prediction demonstrate significant improvements in symmetry identification accuracy and functional learning efficiency over state-of-the-art equivariant baselines.
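The invariance property the summary relies on can be checked numerically. A minimal sketch (not the paper's implementation): a group element $G$ preserves the quadratic form $x^\top A x$ exactly when $G^\top A G = A$; for $A = I$ this is the standard orthogonal group $O(n)$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
A = np.eye(n)  # quadratic form preserved by the standard orthogonal group O(n)

# Sample a random orthogonal matrix via QR decomposition of a Gaussian matrix.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))

x = rng.standard_normal(n)
form_before = x @ A @ x              # x^T A x
form_after = (Q @ x) @ A @ (Q @ x)   # (Qx)^T A (Qx)

# Invariance follows from Q^T A Q = A.
assert np.isclose(form_before, form_after)
```

Here the choice $A = I$ is illustrative; the paper's point is that $A$ itself can be learned from data, which reveals which orthogonal group the data respects.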

📝 Abstract
In this study, we introduce a method for learning group (known or unknown) equivariant functions by learning the associated quadratic form $x^T A x$ corresponding to the group from the data. Certain groups, known as orthogonal groups, preserve a specific quadratic form, and we leverage this property to uncover the underlying symmetry group under the assumption that it is orthogonal. By utilizing the corresponding unique symmetric matrix and its inherent diagonal form, we incorporate suitable inductive biases into the neural network architecture, leading to models that are both simplified and efficient. Our approach results in an invariant model that preserves norms, while the equivariant model is represented as a product of a norm-invariant model and a scale-invariant model, where the "product" refers to the group action. Moreover, we extend our framework to a more general setting where the function acts on tuples of input vectors via a diagonal (or product) group action. In this extension, the equivariant function is decomposed into an angular component extracted solely from the normalized first vector and a scale-invariant component that depends on the full Gram matrix of the tuple. This decomposition captures the inter-dependencies between multiple inputs while preserving the underlying group symmetry. We assess the effectiveness of our framework across multiple tasks, including polynomial regression, top quark tagging, and moment of inertia matrix prediction. Comparative analysis with baseline methods demonstrates that our model consistently excels in both discovering the underlying symmetry and efficiently learning the corresponding equivariant function.
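The decomposition described in the abstract can be illustrated in its simplest $O(n)$ instance: an equivariant map built as an angular direction $x/\lVert x\rVert$ scaled by a function of the invariant $\lVert x\rVert$. This is a hedged sketch, not the paper's learned model; `radial` below is a hypothetical placeholder for the norm-invariant component.

```python
import numpy as np

def radial(r):
    # Placeholder scalar function of the norm (the norm-invariant part).
    return np.tanh(r)

def equivariant_f(x):
    # Angular (scale-invariant) direction scaled by a norm-dependent factor.
    r = np.linalg.norm(x)
    return radial(r) * (x / r)

rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))  # random orthogonal matrix
x = rng.standard_normal(3)

# Equivariance check: f(Qx) = Q f(x), since ||Qx|| = ||x||.
assert np.allclose(equivariant_f(Q @ x), Q @ equivariant_f(x))
```

In the multi-input extension, the scalar factor would instead depend on the full Gram matrix of the input tuple, which is itself invariant under the diagonal orthogonal action.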
Problem

Research questions and friction points this paper is trying to address.

Learning group equivariant functions via quadratic forms
Uncovering orthogonal symmetry groups from data
Decomposing equivariant functions into invariant components
Innovation

Methods, ideas, or system contributions that make the work stand out.

Learning equivariant functions via quadratic forms
Incorporating inductive biases using symmetric matrices
Decomposing equivariant functions into invariant components
Pavan Karjol
Department of Electrical Communication Engineering, Indian Institute of Science, Bengaluru, Karnataka
Vivek V Kashyap
Department of Electrical Communication Engineering, Indian Institute of Science, Bengaluru, Karnataka
Rohan Kashyap
Computer Science, Carnegie Mellon University, Pittsburgh, Pennsylvania, USA
Prathosh A P
Assistant Professor, Indian Institute of Science, Bengaluru
Representational learning for Digital Health