🤖 AI Summary
Large-kernel convolutions in 3D medical imaging suffer from training instability and performance degradation due to excessively enlarged receptive fields.
Method: This paper proposes a structured reparameterization framework that incorporates learnable spatial priors. Specifically: (1) it establishes the first theoretical connection between element-wise convolutional kernel gradients and first-order optimization, showing that re-parameterized convolution blocks induce spatially varying learning rates; (2) it designs a lightweight two-stage modulation network that generates a receptive-biased scaling mask, enabling local-to-global co-optimization; (3) it adopts a plain encoder architecture with large-kernel depthwise convolutions (eliminating multi-branch designs) and integrates low-rank receptive-field modeling with optimization-aware training.
Results: The method achieves state-of-the-art performance across five 3D medical segmentation benchmarks, consistently outperforming both Transformer-based and existing reparameterization approaches, while significantly improving segmentation accuracy and training stability.
📝 Abstract
In contrast to vision transformers, which model long-range dependencies through global self-attention, large-kernel convolutions provide a more efficient and scalable alternative, particularly in high-resolution 3D volumetric settings. However, naively increasing kernel size often leads to optimization instability and performance degradation. Motivated by the spatial bias observed in effective receptive fields (ERFs), we hypothesize that different kernel elements converge at variable rates during training. To support this, we derive a theoretical connection between element-wise gradients and first-order optimization, showing that structurally re-parameterized convolution blocks inherently induce spatially varying learning rates. Building on this insight, we introduce Rep3D, a 3D convolutional framework that incorporates a learnable spatial prior into large-kernel training. A lightweight two-stage modulation network generates a receptive-biased scaling mask, adaptively re-weighting kernel updates and enabling local-to-global convergence behavior. Rep3D adopts a plain encoder design with large depthwise convolutions, avoiding the architectural complexity of multi-branch compositions. We evaluate Rep3D on five challenging 3D segmentation benchmarks and demonstrate consistent improvements over state-of-the-art baselines, including transformer-based and fixed-prior re-parameterization methods. By unifying spatial inductive bias with optimization-aware learning, Rep3D offers an interpretable and scalable solution for 3D medical image analysis. The source code is publicly available at https://github.com/leeh43/Rep3D.
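The core mechanism described above, a receptive-biased scaling mask that re-weights element-wise kernel updates so central kernel elements converge faster than peripheral ones, can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the mask generator below is a hypothetical stand-in for the two-stage modulation network (here a fixed radial prior), and all function and variable names are illustrative.

```python
import numpy as np

def radial_prior_mask(kernel_size, sigma=0.5):
    """Hypothetical receptive-biased scaling mask: elements near the
    3D kernel center get values near 1 (larger effective learning
    rates), while peripheral elements get values near 0, mimicking
    the local-to-global convergence behavior the paper describes.
    In Rep3D this mask would be produced by a learnable two-stage
    modulation network rather than a fixed Gaussian prior."""
    ax = np.linspace(-1.0, 1.0, kernel_size)
    zz, yy, xx = np.meshgrid(ax, ax, ax, indexing="ij")
    r2 = xx**2 + yy**2 + zz**2          # squared distance from center
    return np.exp(-r2 / (2.0 * sigma**2))  # in (0, 1], peak at center

def modulated_update(weight, grad, mask, lr=0.1):
    """W <- W - lr * (M * grad): an element-wise, spatially varying
    step size, equivalent to giving each kernel element its own
    learning rate lr * M[i, j, k]."""
    return weight - lr * mask * grad

# Toy example: one 5x5x5 depthwise kernel with a uniform gradient.
k = 5
w = np.zeros((k, k, k))
g = np.ones((k, k, k))   # pretend gradient from backprop
m = radial_prior_mask(k)
w_new = modulated_update(w, g, m)

# The center element takes a full-size step; corners barely move.
print(w_new[k // 2, k // 2, k // 2])  # close to -lr = -0.1
print(w_new[0, 0, 0])                 # close to 0
```

Under this view, structural re-parameterization does not need separate small-kernel branches: a single large depthwise kernel with a spatially modulated update schedule plays the same role.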