To Augment or Not to Augment? Diagnosing Distributional Symmetry Breaking

📅 2025-10-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper challenges a foundational assumption of symmetry-aware methods such as data augmentation and equivariant architectures: that transformed samples remain highly probable under the test distribution. Method: the authors propose a two-sample neural-classifier symmetry test that quantifies dataset anisotropy, yielding a metric that reveals pronounced directional bias across multiple point cloud benchmarks. They also analyze invariant ridge regression in the infinite-feature limit, establishing that distributional asymmetry fundamentally constrains the optimal performance of invariance-based methods, even when the labels themselves are transformation-invariant. Results: the efficacy of equivariant methods depends critically on the intrinsic symmetry of the data; on strongly anisotropic data, augmentation can degrade generalization. The work qualifies the conventional wisdom that symmetry priors are universally beneficial, providing both a practical diagnostic tool and rigorous theoretical limits on the applicability of symmetry-aware learning.

📝 Abstract
Symmetry-aware methods for machine learning, such as data augmentation and equivariant architectures, encourage correct model behavior on all transformations (e.g. rotations or permutations) of the original dataset. These methods can improve generalization and sample efficiency, under the assumption that the transformed datapoints are highly probable, or "important", under the test distribution. In this work, we develop a method for critically evaluating this assumption. In particular, we propose a metric to quantify the amount of anisotropy, or symmetry-breaking, in a dataset, via a two-sample neural classifier test that distinguishes between the original dataset and its randomly augmented equivalent. We validate our metric on synthetic datasets, and then use it to uncover surprisingly high degrees of alignment in several benchmark point cloud datasets. We show theoretically that distributional symmetry-breaking can actually prevent invariant methods from performing optimally even when the underlying labels are truly invariant, as we show for invariant ridge regression in the infinite feature limit. Empirically, we find that the implication for symmetry-aware methods is dataset-dependent: equivariant methods still impart benefits on some anisotropic datasets, but not others. Overall, these findings suggest that understanding equivariance -- both when it works, and why -- may require rethinking symmetry biases in the data.
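The two-sample test described in the abstract is straightforward to prototype. Below is a minimal sketch, assuming 2D point clouds and rotation as the symmetry group; the function names (`random_rotation_2d`, `symmetry_test`), the small MLP classifier, and the toy anisotropic data are illustrative choices, not the paper's implementation. Held-out accuracy near chance (0.5) suggests the dataset is approximately rotation-symmetric; accuracy well above chance indicates anisotropy (symmetry breaking).

```python
# Minimal sketch of a two-sample classifier symmetry test (illustrative,
# not the paper's code): train a classifier to tell original point clouds
# apart from randomly rotated copies.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

def random_rotation_2d(rng):
    """Sample a uniformly random 2D rotation matrix."""
    theta = rng.uniform(0.0, 2.0 * np.pi)
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def symmetry_test(clouds, rng):
    """Return held-out accuracy of an original-vs-augmented classifier.
    Accuracy near 0.5 suggests approximate rotational symmetry; accuracy
    well above 0.5 indicates anisotropy."""
    augmented = np.stack([c @ random_rotation_2d(rng).T for c in clouds])
    X = np.concatenate([clouds, augmented]).reshape(2 * len(clouds), -1)
    y = np.concatenate([np.zeros(len(clouds)), np.ones(len(clouds))])
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.25, random_state=0)
    clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
    clf.fit(X_tr, y_tr)
    return clf.score(X_te, y_te)

rng = np.random.default_rng(0)
# Toy anisotropic data: clouds consistently elongated along the x-axis.
clouds = rng.normal(size=(400, 32, 2)) * np.array([3.0, 0.3])
print("symmetry-test accuracy:", symmetry_test(clouds, rng))
```

On symmetric data (e.g. clouds already rotated by random angles), the same test should hover near chance, since originals and augmented copies are then distributionally indistinguishable.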
Problem

Research questions and friction points this paper is trying to address.

Quantifying dataset anisotropy via a neural classifier test
Evaluating the impact of distributional symmetry breaking on equivariant methods
Rethinking symmetry biases in real-world benchmark datasets
Innovation

Methods, ideas, or system contributions that make the work stand out.

Proposes a metric to quantify dataset anisotropy via a neural classifier test
Validates the metric on synthetic data, then applies it to benchmark point cloud datasets
Shows symmetry breaking can keep invariant methods from performing optimally (see the toy experiment below)
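As a companion to the last point, the following toy experiment sketches how augmentation can hurt an invariant target on anisotropic data. The setup (random ReLU features, ridge regression, Gaussian inputs stretched along one axis, rotation augmentation) is my own illustration, not the paper's exact infinite-feature construction; with a limited feature budget, the augmented fit spends capacity on regions the test distribution rarely visits, so it is typically no better, and often worse, than the plain fit on the anisotropic test set.

```python
# Toy illustration (not the paper's construction): ridge regression with
# random ReLU features on a rotation-invariant target over anisotropic data,
# with and without rotation augmentation.
import numpy as np

rng = np.random.default_rng(1)
d, n_feat, n_train, n_test, lam = 2, 50, 100, 2000, 1e-2

def target(X):
    """Rotation-invariant label: the Euclidean norm of each input."""
    return np.linalg.norm(X, axis=1)

def features(X, W):
    """Fixed random ReLU features."""
    return np.maximum(X @ W, 0.0)

def ridge_fit(Phi, y, lam):
    """Closed-form ridge regression weights."""
    return np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T @ y)

def rotate(X, rng):
    """Rotate each row by an independent uniformly random angle."""
    theta = rng.uniform(0.0, 2.0 * np.pi, size=len(X))
    c, s = np.cos(theta), np.sin(theta)
    return np.stack([c * X[:, 0] - s * X[:, 1],
                     s * X[:, 0] + c * X[:, 1]], axis=1)

scale = np.array([5.0, 0.2])          # strongly anisotropic distribution
X_tr = rng.normal(size=(n_train, d)) * scale
X_te = rng.normal(size=(n_test, d)) * scale
W = rng.normal(size=(d, n_feat))

# Plain ridge on the original (anisotropic) training data.
w_plain = ridge_fit(features(X_tr, W), target(X_tr), lam)

# Augmented ridge: replicate the training set under random rotations.
X_aug = np.concatenate([X_tr] + [rotate(X_tr, rng) for _ in range(8)])
w_aug = ridge_fit(features(X_aug, W), target(X_aug), lam)

for name, w in [("plain", w_plain), ("augmented", w_aug)]:
    err = np.mean((features(X_te, W) @ w - target(X_te)) ** 2)
    print(f"{name:9s} test MSE on anisotropic test set: {err:.4f}")
```

The comparison matters because the test distribution shares the training data's anisotropy: the plain fit is tuned to exactly the region where it is evaluated, while the augmented fit must also cover rotated inputs that the test set almost never contains.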