🤖 AI Summary
To address domain shift in multicenter quantitative computed tomography (QCT) arising from variations in scanning hardware, reconstruction parameters, and population demographics, this work proposes a dual-mechanism domain-adaptive 3D TransUNet framework. It jointly integrates adversarial alignment, implemented via gradient reversal layers, and statistical alignment, based on maximum mean discrepancy, to achieve scanner-invariant feature learning while preserving critical anatomical detail. Evaluated on 1,408 proximal femur QCT scans from two clinical centers, the method significantly improves cross-site segmentation stability (a 5.2% increase in Dice score) and enhances the consistency of quantitative biomarkers, including bone mineral density (BMD) and finite-element-derived stiffness. These advances strengthen reproducibility in multicenter osteoporosis research and in radiomics and structural biomechanics analyses. The proposed framework establishes a paradigm for medical image domain adaptation that ensures both discriminability and anatomical fidelity.
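The gradient reversal idea behind the adversarial branch can be sketched in a few lines: the layer is the identity on the forward pass and flips the sign of the gradient on the backward pass, so the feature extractor is trained to *worsen* the domain classifier. The toy linear "extractor", function names, and numbers below are purely illustrative, not the paper's 3D TransUNet:

```python
class GradientReversal:
    """Identity in the forward pass; multiplies the gradient by -lam in the backward pass."""
    def __init__(self, lam=1.0):
        self.lam = lam

    def forward(self, x):
        return x  # features pass through unchanged

    def backward(self, grad_out):
        return -self.lam * grad_out  # sign-flipped gradient for the feature extractor

# Toy setup (illustrative): a linear "feature extractor" f = w * x
# feeding a domain-classifier loss L = (f - d)^2.
def grl_demo(w=1.0, x=2.0, d=1.0, lam=1.0):
    grl = GradientReversal(lam)
    f = grl.forward(w * x)              # forward: identity
    loss = (f - d) ** 2                 # domain-classifier loss
    dloss_df = 2.0 * (f - d)            # gradient at the classifier input
    dloss_df = grl.backward(dloss_df)   # GRL flips the sign on the way back
    dloss_dw = dloss_df * x             # chain rule back to the extractor weight
    return loss, dloss_dw

loss, grad_w = grl_demo()
# With the sign flipped, gradient descent on w moves to *increase* the domain
# loss, pushing the extractor toward domain-indistinguishable features.
```

In a deep-learning framework this is typically implemented as a custom autograd function placed between the shared encoder and the domain classifier; the sketch above only makes the sign-flip mechanics explicit.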
📝 Abstract
Quantitative computed tomography (QCT) plays a crucial role in assessing bone strength and fracture risk by enabling volumetric analysis of bone density distribution in the proximal femur. However, deploying automated segmentation models in practice remains difficult because deep networks trained on one dataset often fail when applied to another. This failure stems from domain shift: scanners, reconstruction settings, and patient demographics vary across institutions, leading to unstable predictions and unreliable quantitative metrics. Overcoming this barrier is essential for multicenter osteoporosis research and for ensuring that radiomics and structural finite element analysis results remain reproducible across sites. In this work, we developed a domain-adaptive transformer segmentation framework tailored for multi-institutional QCT. Our model is trained and validated on one of the largest hip-fracture-related research cohorts to date, comprising 1,024 QCT scans from Tulane University and 384 scans from Rochester, Minnesota, for proximal femur segmentation. To address domain shift, we integrate two complementary strategies within a 3D TransUNet backbone: adversarial alignment via a Gradient Reversal Layer (GRL), which discourages the network from encoding site-specific cues, and statistical alignment via Maximum Mean Discrepancy (MMD), which explicitly reduces distributional mismatches between institutions. This dual mechanism balances invariance with fine-grained alignment, enabling scanner-agnostic feature learning while preserving anatomical detail.
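The statistical-alignment term can be illustrated with a minimal pure-Python sketch of the (biased) squared-MMD estimator under an RBF kernel. The synthetic 1-D "features", variable names, and bandwidth below are illustrative assumptions; in the paper this quantity would be computed on deep encoder features and minimized as a loss term:

```python
import math
import random

def rbf(a, b, sigma=1.0):
    """Gaussian (RBF) kernel between two feature vectors."""
    d2 = sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return math.exp(-d2 / (2.0 * sigma ** 2))

def mmd2(X, Y, sigma=1.0):
    """Biased estimate of squared MMD between sample sets X and Y."""
    kxx = sum(rbf(x1, x2, sigma) for x1 in X for x2 in X) / len(X) ** 2
    kyy = sum(rbf(y1, y2, sigma) for y1 in Y for y2 in Y) / len(Y) ** 2
    kxy = sum(rbf(x, y, sigma) for x in X for y in Y) / (len(X) * len(Y))
    return kxx + kyy - 2.0 * kxy

random.seed(0)
# Stand-ins for pooled encoder features from different sites (1-D for clarity).
site_a = [(random.gauss(0.0, 1.0),) for _ in range(100)]
site_b = [(random.gauss(0.0, 1.0),) for _ in range(100)]  # same distribution
site_c = [(random.gauss(3.0, 1.0),) for _ in range(100)]  # shifted distribution
# Matched sites give a near-zero MMD; the shifted site gives a much larger one.
# The MMD loss term drives this discrepancy toward zero during training.
print(mmd2(site_a, site_b), mmd2(site_a, site_c))
```

Minimizing this term alongside the segmentation and adversarial losses explicitly pulls the two institutions' feature distributions together, complementing the GRL's implicit invariance pressure.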