Co-domain Symmetry for Complex-Valued Deep Learning

📅 2021-12-02
🏛️ Computer Vision and Pattern Recognition
📈 Citations: 11
Influential: 3
🤖 AI Summary
This work addresses complex-valued scaling, a symmetry unique to the complex domain. Existing deep complex networks (DCNs) lack explicit scaling invariance, while SurReal achieves scaling invariance only by discarding critical complex information such as phase. To resolve this trade-off, the paper formulates complex scaling as a **co-domain symmetry** rather than an input-domain transformation and proposes the **Co-domain Symmetry (CDS)** architecture, whose complex-valued neural layers are strictly equivariant to complex scaling while preserving full phase information. It also designs a novel complex-valued RGB representation in which complex scaling acquires semantic meaning, e.g., hue shifts or correlated changes across color channels. Evaluated on MSTAR, CIFAR-10/100, and SVHN, CDS consistently outperforms DCN and SurReal: higher accuracy, better generalization, stronger robustness to co-domain transformations, lower model bias and variance, and a significantly smaller parameter count.
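The equivariance property at the heart of this line of work can be sketched numerically: a layer f is equivariant to complex scaling if f(s·z) = s·f(z) for any complex scalar s. The snippet below is a minimal illustration of that identity with a bias-free complex linear map, an assumption for exposition and not the paper's actual CDS layer design:

```python
import numpy as np

rng = np.random.default_rng(0)

# A bias-free complex linear map commutes with complex scaling,
# since W(s z) = s (W z) for any complex scalar s.
W = rng.normal(size=(4, 3)) + 1j * rng.normal(size=(4, 3))
z = rng.normal(size=3) + 1j * rng.normal(size=3)
s = 2.0 * np.exp(1j * 0.7)  # complex scale: magnitude 2, phase 0.7 rad

lhs = W @ (s * z)   # scale the input, then apply the layer
rhs = s * (W @ z)   # apply the layer, then scale the output
assert np.allclose(lhs, rhs)  # equivariance holds

# A scaling-invariant feature: ratios of output entries cancel s entirely.
feat = (W @ z)[0] / (W @ z)[1]
feat_scaled = (W @ (s * z))[0] / (W @ (s * z))[1]
assert np.allclose(feat, feat_scaled)  # invariance holds
```

Note that adding a real-valued bias term would break this identity, which is one reason scaling equivariance requires purpose-built layers rather than a direct port of real-valued architectures.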
📝 Abstract
We study complex-valued scaling as a type of symmetry natural and unique to complex-valued measurements and representations. Deep Complex Networks (DCN) extend real-valued algebra to the complex domain without addressing complex-valued scaling. SurReal extends manifold learning to the complex plane, achieving scaling invariance with manifold distances that discard phase information. Treating complex-valued scaling as a co-domain transformation, we design novel equivariant/invariant layer functions and architectures that exploit co-domain symmetry. We also propose novel complex-valued representations of RGB images, where complex-valued scaling indicates hue shift or correlated changes across color channels. Benchmarked on MSTAR, CIFAR10, CIFAR100, and SVHN, our co-domain symmetric (CDS) classifiers deliver higher accuracy, better generalization, more robustness to co-domain transformations, and lower model bias and variance than DCN and SurReal with far fewer parameters.
Problem

Research questions and friction points this paper is trying to address.

Analyzing complex-valued scaling as co-domain transformation
Designing equivariant and invariant neural network layers
Proposing complex-valued RGB representations for hue shifts
Innovation

Methods, ideas, or system contributions that make the work stand out.

Equivariant neural layers for complex scaling
Novel complex-valued RGB image representations
Co-domain symmetric classifiers enhance accuracy
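The idea that complex scaling of an RGB encoding can mean a hue shift can be sketched with a generic opponent-color encoding. The specific mapping below is an illustrative assumption, not the paper's proposed representation:

```python
import numpy as np

def rgb_to_complex_chroma(rgb):
    # Encode the chromatic part of a pixel as one complex number
    # (illustrative opponent-color encoding, assumed for this sketch).
    r, g, b = rgb
    return (r - 0.5 * (g + b)) + 1j * (np.sqrt(3) / 2) * (g - b)

pixel = np.array([0.9, 0.2, 0.1])  # a reddish pixel
z = rgb_to_complex_chroma(pixel)

# Multiplying by a unit complex number rotates the chroma vector,
# i.e. shifts hue; a real scalar instead changes saturation.
z_shifted = z * np.exp(1j * np.deg2rad(120))  # hue rotated by 120 degrees
assert np.isclose(abs(z_shifted), abs(z))     # saturation unchanged

# A full 360-degree rotation returns the original chroma.
z_full = z * np.exp(1j * np.deg2rad(360))
assert np.allclose(z_full, z)
```

Under such an encoding, a classifier that is invariant to complex scaling is automatically robust to hue shifts and correlated channel changes, which is the semantic payoff the Innovation bullets describe.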