Conservative&Aggressive NaNs Accelerate U-Nets for Neuroimaging

📅 2026-01-23
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the inefficiency of deep learning models for neuroimaging, where redundant convolutions are often performed on voxels dominated by numerical noise. To mitigate this, the authors identify unstable voxels via numerical uncertainty analysis and mark them as NaN. By introducing conservative and aggressive NaN-aware pooling and unpooling operations, the method dynamically skips subsequent convolution computations over NaN-containing regions without altering the network architecture. Experiments demonstrate that this approach maintains model performance while reducing convolution operations by 30% on average (up to 64.64% in a single layer). Notably, when inputs contain more than two-thirds NaN values, inference speed increases by 1.67×, marking the first effective use of numerical uncertainty to improve computational efficiency in neuroimaging models.

๐Ÿ“ Abstract
Deep learning models for neuroimaging increasingly rely on large architectures, making efficiency a persistent concern despite advances in hardware. Through an analysis of the numerical uncertainty of convolutional neural networks (CNNs), we observe that many operations are applied to values dominated by numerical noise and have negligible influence on model outputs. In some models, up to two-thirds of convolution operations appear redundant. We introduce Conservative&Aggressive NaNs, two novel variants of max pooling and unpooling that identify numerically unstable voxels and replace them with NaNs, allowing subsequent layers to skip computations on irrelevant data. Both methods are implemented within PyTorch and require no architectural changes. We evaluate these approaches on four CNN models spanning neuroimaging and image classification tasks. For inputs containing at least 50% NaNs, we observe consistent runtime improvements; for data with more than two-thirds NaNs (common in several neuroimaging settings) we achieve an average inference speedup of 1.67×. Conservative NaNs reduces convolution operations by an average of 30% across models and datasets, with no measurable performance degradation, and can skip up to 64.64% of convolutions in specific layers. Aggressive NaNs can skip up to 69.30% of convolutions but may occasionally affect performance. Overall, these methods demonstrate that numerical uncertainty can be exploited to reduce redundant computation and improve inference efficiency in CNNs.
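To make the two pooling variants concrete, here is a minimal NumPy sketch of one plausible reading of the abstract: in "conservative" mode a 2×2 window produces NaN only if all of its voxels are NaN, while in "aggressive" mode any NaN voxel poisons the whole window, so more downstream computation can be skipped. The function name `nan_max_pool_2x2` and the exact NaN-propagation rules are illustrative assumptions, not the paper's PyTorch implementation.

```python
import numpy as np


def nan_max_pool_2x2(x, mode="conservative"):
    """Illustrative 2x2 max pooling with NaN-aware variants.

    conservative: a window is NaN only if ALL its voxels are NaN
                  (np.nanmax ignores NaNs otherwise).
    aggressive:   a window is NaN if ANY voxel is NaN
                  (plain np.max propagates NaNs).
    """
    h, w = x.shape
    # Group the array into non-overlapping 2x2 windows, one per output voxel.
    windows = (
        x[: h - h % 2, : w - w % 2]
        .reshape(h // 2, 2, w // 2, 2)
        .transpose(0, 2, 1, 3)
        .reshape(h // 2, w // 2, 4)
    )
    if mode == "conservative":
        return np.nanmax(windows, axis=-1)
    return np.max(windows, axis=-1)


# One unstable (NaN-marked) voxel in a single 2x2 window:
x = np.array([[1.0, np.nan],
              [3.0, 4.0]])
conservative = nan_max_pool_2x2(x, "conservative")  # -> [[4.0]]
aggressive = nan_max_pool_2x2(x, "aggressive")      # -> [[nan]]
```

A later layer can then test its input tile for NaNs (e.g. `np.isnan(tile).all()`) and skip the convolution entirely when the tile carries no numerically meaningful signal, which is the source of the reported speedups.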
Problem

Research questions and friction points this paper is trying to address.

neuroimaging
computational redundancy
numerical uncertainty
convolutional neural networks
inference efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

NaN-based pruning
numerical uncertainty
efficient CNN inference
max pooling optimization
neuroimaging acceleration
🔎 Similar Papers
No similar papers found.
Inés Gonzalez-Pepe
Department of Computer Science and Software Engineering, Concordia University, Montreal, Canada
Vinuyan Sivakolunthu
Department of Computer Science and Software Engineering, Concordia University, Montreal, Canada
Jacob Fortin
Department of Computer Science and Software Engineering, Concordia University, Montreal, Canada
Yohan Chatelain
Centre for Addiction and Mental Health (CAMH)
Computer Sciences
Tristan Glatard
Centre for Addiction and Mental Health (CAMH)
Neuroinformatics