Complex-Valued Unitary Representations as Classification Heads for Improved Uncertainty Quantification in Deep Neural Networks

📅 2026-02-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the poor calibration of deep neural networks, where predicted confidence often misaligns with actual accuracy. The authors propose a quantum-inspired complex-valued classification head that maps backbone features into a Hilbert space and evolves them via Cayley-parameterized unitary transformations—introducing, for the first time, complex-valued unitary representations into classifier design. This approach effectively models uncertainty and aligns with the fuzzy structure inherent in human perception. Theoretical analysis reveals a geometric connection between unitary dynamics and calibration performance. Empirical results demonstrate significant improvements: on CIFAR-10, the expected calibration error (ECE) is reduced to 0.0146, a 2.4× improvement over standard softmax; on CIFAR-10H, the method achieves a KL divergence of 0.336, substantially outperforming existing approaches.

📝 Abstract
Modern deep neural networks achieve high predictive accuracy but remain poorly calibrated: their confidence scores do not reliably reflect the true probability of correctness. We propose a quantum-inspired classification head architecture that projects backbone features into a complex-valued Hilbert space and evolves them under a learned unitary transformation parameterised via the Cayley map. Through a controlled hybrid experimental design - training a single shared backbone and comparing lightweight interchangeable heads - we isolate the effect of complex-valued unitary representations on calibration. Our ablation study on CIFAR-10 reveals that the unitary magnitude head (complex features evolved under a Cayley unitary, read out via magnitude and softmax) achieves an Expected Calibration Error (ECE) of 0.0146, representing a 2.4x improvement over a standard softmax head (0.0355) and a 3.5x improvement over temperature scaling (0.0510). Surprisingly, replacing the softmax readout with a Born rule measurement layer - the quantum-mechanically motivated approach - degrades calibration to an ECE of 0.0819. On the CIFAR-10H human-uncertainty benchmark, the wave function head achieves the lowest KL-divergence (0.336) to human soft labels among all compared methods, indicating that complex-valued representations better capture the structure of human perceptual ambiguity. We provide theoretical analysis connecting norm-preserving unitary dynamics to calibration through feature-space geometry, report negative results on out-of-distribution detection and sentiment analysis to delineate the method's scope, and discuss practical implications for safety-critical applications. Code is publicly available.
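The mechanism described above can be illustrated in a few lines. The sketch below is not the authors' implementation; it is a minimal NumPy illustration, under the assumption that the head (i) builds a skew-Hermitian generator A, (ii) maps it to a unitary via the Cayley transform U = (I − A)(I + A)⁻¹, (iii) evolves a complex feature vector under U (which preserves its norm), and (iv) reads out class probabilities either by softmax over magnitudes (the "unitary magnitude head") or by the Born rule (normalized squared magnitudes). The feature dimension and random generator are arbitrary toy choices.

```python
import numpy as np

def cayley_unitary(A: np.ndarray) -> np.ndarray:
    """Map a skew-Hermitian matrix A (A^H = -A) to a unitary matrix
    via the Cayley transform U = (I - A)(I + A)^{-1}.
    I + A is always invertible because A's eigenvalues are purely imaginary."""
    I = np.eye(A.shape[0], dtype=complex)
    return (I - A) @ np.linalg.inv(I + A)

rng = np.random.default_rng(0)
n = 8  # toy "class count" / feature dimension

# Build a skew-Hermitian generator from an arbitrary complex matrix,
# standing in for the learned parameters of the head.
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = M - M.conj().T          # A^H = -A by construction

U = cayley_unitary(A)
assert np.allclose(U @ U.conj().T, np.eye(n))   # unitarity: U U^H = I

# "Evolve" a complex feature vector; unitarity preserves its norm,
# which is the geometric property the paper links to calibration.
psi = rng.standard_normal(n) + 1j * rng.standard_normal(n)
phi = U @ psi
assert np.isclose(np.linalg.norm(phi), np.linalg.norm(psi))

def softmax(x: np.ndarray) -> np.ndarray:
    e = np.exp(x - x.max())
    return e / e.sum()

# Readout 1: magnitude + softmax (the best-calibrated variant in the ablation).
p_softmax = softmax(np.abs(phi))

# Readout 2: Born rule -- normalized squared magnitudes
# (the quantum-motivated variant reported to degrade calibration).
p_born = np.abs(phi) ** 2 / np.sum(np.abs(phi) ** 2)

assert np.isclose(p_softmax.sum(), 1.0) and np.isclose(p_born.sum(), 1.0)
```

Both readouts yield valid probability vectors; the abstract's ablation suggests that the choice between them, not unitarity alone, drives the calibration difference.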
Problem

Research questions and friction points this paper is trying to address.

uncertainty quantification
model calibration
deep neural networks
confidence reliability
expected calibration error
Innovation

Methods, ideas, or system contributions that make the work stand out.

complex-valued representations
unitary transformation
uncertainty calibration
Cayley map
quantum-inspired neural networks
Akbar Anbar Jafari
University of Tartu
Cagri Ozcinar
Machine Learning Scientist
Multimedia, Image Processing, Video Streaming, Visual Attention, Machine Learning
Gholamreza Anbarjafari
3S Holding OÜ