🤖 AI Summary
Existing cardiac segmentation methods suffer from anatomical distortions under low-visibility conditions and rely on auxiliary modules, leading to high computational complexity, cumbersome multi-stage training, and poor robustness. To address these limitations, we propose a novel Transformer-based architecture centered on an input-agnostic Dense Associative Network (DAN) that memorizes a compact set of canonical cardiac anatomical patterns and adaptively fuses them via learned weights, enabling end-to-end, anatomically consistent segmentation without auxiliary components or additional training stages. Crucially, this work introduces an explicit pattern-memory mechanism into cardiac segmentation for the first time. Evaluated on CAMUS and CardiacNet, our method consistently outperforms state-of-the-art baselines: Dice score improves by 1.2–2.4%, average surface distance (ASD) decreases by 18.7–26.3%, and 95th-percentile Hausdorff distance (HD95) drops by 15.5–22.1%, demonstrating superior anatomical fidelity and generalization robustness, particularly on low-quality echocardiographic images.
📝 Abstract
Deep learning-based cardiac segmentation has advanced significantly in recent years. Many studies tackle anatomically incorrect predictions by introducing auxiliary modules that either post-process segmentation outputs or enforce consistency between specific landmark points. However, such approaches often increase network complexity, require separate training for these modules, and may lack robustness in scenarios with poor visibility. To address these limitations, we propose a novel transformer-based architecture that leverages dense associative networks to learn and retain patterns inherent to cardiac inputs. Unlike traditional methods, our approach restricts the network to memorizing a limited set of patterns; during forward propagation, a weighted sum of these patterns enforces anatomical correctness in the output. Because these patterns are input-independent, the model remains robust even in cases with poor visibility. The proposed pipeline was evaluated on two publicly available datasets, CAMUS and CardiacNet. Experimental results show that our model consistently outperforms baseline approaches across all metrics, highlighting its effectiveness and reliability for cardiac segmentation tasks.
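The core idea above, reading out a weighted sum over a fixed, input-independent bank of memorized patterns, can be sketched as follows. This is a minimal numpy illustration, not the paper's implementation: the function name, the dot-product similarity, and the softmax weighting with inverse temperature `beta` are all assumptions, since the abstract does not specify how the fusion weights are computed.

```python
import numpy as np

def pattern_readout(query, patterns, beta=1.0):
    """Softmax-weighted sum over a fixed bank of stored patterns.

    query:    (d,) feature vector derived from the current input
    patterns: (K, d) input-independent pattern bank (learned, then frozen)
    beta:     inverse temperature controlling how sharply one pattern dominates
    """
    scores = patterns @ query          # (K,) similarity of input to each pattern
    scores = scores - scores.max()     # subtract max for numerical stability
    weights = np.exp(beta * scores)
    weights /= weights.sum()           # convex weights, sum to 1
    return weights @ patterns          # (d,) weighted combination of patterns

# Toy usage: K=8 memorized patterns of dimension d=16.
rng = np.random.default_rng(0)
bank = rng.standard_normal((8, 16))
q = rng.standard_normal(16)
out = pattern_readout(q, bank, beta=2.0)
print(out.shape)  # (16,)
```

Because the output is always a convex combination of the stored patterns, it stays inside the span of the memorized anatomy regardless of how degraded the input-derived query is, which is one way to read the robustness claim for low-visibility images.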