🤖 AI Summary
To address insufficient model generalization in open-set recognition, this paper proposes a first-layer convolutional kernel adaptive near-orthogonalization regularization method. Unlike conventional hard orthogonality constraints, our approach employs a learnable, gradient-driven mechanism that enables the network to autonomously identify and enforce near-orthogonality only among semantically dissimilar kernel pairs—thereby alleviating optimization difficulties and enhancing exploration of the solution space. The proposed regularizer is architecture-agnostic: it requires no modification to backbone networks and supports end-to-end training with diverse architectures including ResNet-50, DenseNet-121, and ViT-B/16. Evaluated on two real-world open-set tasks—iris presentation attack detection and chest X-ray abnormality detection—our method consistently outperforms standard orthogonalization and saliency-based regularization baselines, achieving substantial gains in generalization performance. Results empirically validate that decoupling first-layer features via adaptive near-orthogonalization is critical for open-set robustness.
📝 Abstract
An ongoing research challenge across several domains in computer vision is how to increase model generalization capabilities. Many attempts to improve generalization performance are inspired by human perceptual intelligence, which is remarkable in both its performance and its efficiency in generalizing to unknown samples. Several of these methods force portions of the network to be orthogonal, following observations in neuroscience related to early vision processes. In this paper, we propose a loss component that regularizes the filtering kernels in the first convolutional layer of a network to make them nearly orthogonal. Deviating from previous works, we give the network flexibility in choosing which pairs of kernels to make orthogonal, allowing it to navigate to a better region of the solution space without imposing harsh penalties. Without architectural modifications, we report substantial gains in generalization performance using the proposed loss against previous works (including orthogonalization- and saliency-based regularization methods) across three different architectures (ResNet-50, DenseNet-121, ViT-B/16) and two difficult open-set recognition tasks: presentation attack detection in iris biometrics, and anomaly detection in chest X-ray images.
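The core idea of the abstract can be sketched in a few lines of PyTorch: flatten the first-layer kernels, compute their pairwise cosine similarities, and penalize deviation from orthogonality only for selected pairs. This is a minimal illustration, not the paper's method — the threshold `tau` below is a hypothetical stand-in for the paper's learnable, gradient-driven pair-selection mechanism, and the function name is invented for illustration.

```python
import torch
import torch.nn.functional as F

def near_orthogonality_loss(conv_weight: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    """Sketch of a selective near-orthogonality penalty on first-layer kernels.

    conv_weight: tensor of shape (out_channels, in_channels, kH, kW).
    Pairs whose cosine similarity is already below `tau` (treated here as
    "dissimilar") are pushed toward exact orthogonality; more similar pairs
    are left unconstrained. NOTE: `tau` is an assumed hyperparameter -- the
    paper instead lets the network learn which pairs to decouple.
    """
    k = conv_weight.flatten(1)                  # (C, in_channels*kH*kW)
    k = F.normalize(k, dim=1)                   # unit-norm kernel vectors
    gram = k @ k.t()                            # pairwise cosine similarities
    off_diag = gram - torch.eye(gram.size(0), device=gram.device)
    mask = (off_diag.abs() < tau).float()       # select "dissimilar" pairs only
    denom = mask.sum().clamp(min=1.0)           # avoid division by zero
    return (mask * off_diag.pow(2)).sum() / denom
```

In training, such a term would simply be added to the task loss (e.g. `loss = ce_loss + lam * near_orthogonality_loss(model.conv1.weight)`), which is consistent with the abstract's claim that the regularizer requires no architectural modification.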