🤖 AI Summary
This work addresses complex-valued scaling, a symmetry unique to the complex domain: existing deep complex networks (DCNs) lack explicit scaling invariance, while SurReal achieves invariance only by discarding critical complex information such as phase. To resolve this trade-off, the paper formulates complex scaling as a **co-domain symmetry** rather than an input-domain transformation and proposes the **Co-domain Symmetry (CDS)** architecture. CDS introduces complex-valued neural layers that are strictly equivariant to complex scaling while preserving full phase information. The paper also designs a novel complex-valued RGB representation in which scaling acquires semantic meaning, e.g., hue shifts or coordinated changes across color channels. Evaluated on MSTAR, CIFAR-10/100, and SVHN, CDS consistently outperforms DCNs and SurReal: it achieves higher accuracy, better generalization, greater co-domain robustness, lower bias and variance, and a significantly smaller parameter count.
📝 Abstract
We study complex-valued scaling as a type of symmetry natural and unique to complex-valued measurements and representations. Deep Complex Networks (DCN) extend real-valued algebra to the complex domain without addressing complex-valued scaling. SurReal extends manifold learning to the complex plane, achieving scaling invariance with manifold distances that discard phase information. Treating complex-valued scaling as a co-domain transformation, we design novel equivariant/invariant layer functions and architectures that exploit co-domain symmetry. We also propose novel complex-valued representations of RGB images, where complex-valued scaling indicates hue shift or correlated changes across color channels. Benchmarked on MSTAR, CIFAR10, CIFAR100, and SVHN, our co-domain symmetric (CDS) classifiers deliver higher accuracy, better generalization, greater robustness to co-domain transformations, and lower model bias and variance than DCN and SurReal, with far fewer parameters.
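The core symmetry discussed above can be made concrete with a minimal NumPy sketch (not the paper's actual architecture): a co-domain transformation multiplies every complex feature by a scalar s = r·e^{iθ}. Any complex-linear layer is automatically equivariant to this scaling, whereas a split ReLU applied separately to real and imaginary parts, as commonly used in DCN-style networks, is not.

```python
import numpy as np

rng = np.random.default_rng(0)

# A complex-valued feature vector and a generic complex linear layer.
z = rng.standard_normal(8) + 1j * rng.standard_normal(8)
W = rng.standard_normal((8, 8)) + 1j * rng.standard_normal((8, 8))

# A co-domain transformation: complex scaling s = r * e^{i*theta}.
s = 1.7 * np.exp(1j * 0.9)

# Complex-linear maps commute with complex scaling (equivariance):
# W @ (s * z) == s * (W @ z).
assert np.allclose(W @ (s * z), s * (W @ z))

def split_relu(x):
    """ReLU on real and imaginary parts separately (DCN-style activation)."""
    return np.maximum(x.real, 0) + 1j * np.maximum(x.imag, 0)

# The split activation does NOT commute with complex scaling in general,
# so a network built from it is not equivariant to co-domain symmetry.
print(np.allclose(split_relu(s * z), s * split_relu(z)))
```

This is only an illustration of the symmetry itself; the names `split_relu`, `W`, and `z` are hypothetical and do not come from the paper's code.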