🤖 AI Summary
This work proposes SegReg, a novel framework that addresses the limited generalization of existing medical image segmentation models, which typically optimize only in the output space and thus yield under-constrained latent feature representations. SegReg introduces explicit latent-space regularization into medical image segmentation for the first time, imposing structural constraints directly on U-Net feature maps to encourage more discriminative embeddings. The approach integrates seamlessly with standard segmentation losses and introduces no additional parameters or memory overhead. Integrated into nnU-Net, SegReg demonstrates consistent improvements in domain generalization across prostate, cardiac, and hippocampus segmentation tasks. It also mitigates task drift in continual learning scenarios, enhancing both forward transfer and model stability.
📝 Abstract
Medical image segmentation models are typically optimised with voxel-wise losses that constrain predictions only in the output space. This leaves latent feature representations largely unconstrained, potentially limiting generalisation. We propose SegReg, a latent-space regularisation framework that operates on the feature maps of U-Net models to encourage structured embeddings while remaining fully compatible with standard segmentation losses. We integrate SegReg into the nnU-Net framework and evaluate it on prostate, cardiac, and hippocampus segmentation, demonstrating consistent improvements in domain generalisation. Furthermore, we show that explicit latent regularisation improves continual learning by reducing task drift and enhancing forward transfer across sequential tasks, without adding memory overhead or extra parameters. These results highlight latent-space regularisation as a practical approach for building more generalisable and continual-learning-ready models.
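The abstract describes combining a standard voxel-wise loss with a structural constraint on latent features, but does not specify the exact form of the regulariser. As a minimal NumPy sketch under stated assumptions, the example below pairs a voxel-wise cross-entropy with a hypothetical class-centroid compactness term on flattened feature vectors; the function names, the compactness form, and the weight `lam` are illustrative assumptions, not the paper's actual method:

```python
import numpy as np

def voxel_cross_entropy(probs, labels):
    """Standard voxel-wise loss. probs: (N, C) softmax outputs, labels: (N,)."""
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-8))

def latent_compactness(features, labels):
    """Hypothetical latent regulariser (assumed form, not from the paper):
    pull each voxel's feature vector toward its class centroid, encouraging
    structured, discriminative embeddings. features: (N, D), labels: (N,)."""
    classes = np.unique(labels)
    loss = 0.0
    for c in classes:
        fc = features[labels == c]
        centroid = fc.mean(axis=0)
        loss += np.mean(np.sum((fc - centroid) ** 2, axis=1))
    return loss / len(classes)

def segreg_style_loss(probs, features, labels, lam=0.1):
    """Output-space loss plus a weighted latent-space term; adds no
    trainable parameters, matching the paper's stated property."""
    return voxel_cross_entropy(probs, labels) + lam * latent_compactness(features, labels)
```

Note the design property this illustrates: the regulariser only reads existing feature maps, so it adds no parameters or memory beyond the loss computation itself, which is what makes such a term compatible with an unmodified nnU-Net training loop.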