🤖 AI Summary
Low-dose CT (LDCT) image reconstruction faces challenges in cross-center and cross-protocol generalization under privacy constraints. To address this, we propose a dual physics-driven personalized federated learning framework that operates at both the scan and anatomy levels. Our method introduces a physics-guided hypernetwork that jointly incorporates scan-protocol and anatomical-structure prompts; a protocol vector-quantization strategy (PVQS) for robust adaptation to unseen acquisition protocols; and anatomical reports generated by a medical large language model (MLLM) as personalized anatomical priors. Evaluated on multiple public LDCT datasets, the approach outperforms existing federated methods, maintaining high reconstruction fidelity under unseen protocols while improving noise suppression and anatomical structure preservation. The framework ensures strict privacy compliance, broad generalizability across heterogeneous clinical sites and protocols, and improved clinical utility.
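To make the prompt-conditioned hypernetwork idea concrete, the PyTorch sketch below shows one plausible reading: a module that maps a scan-protocol code and an anatomy embedding (e.g., pooled features of an MLLM-generated report) to the weights of a small convolution that personalizes shared imaging features. All class and parameter names here are hypothetical; the paper's actual architecture may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PromptHyperNet(nn.Module):
    """Hypothetical prompt-conditioned hypernetwork (sketch only).

    Maps a scan-protocol code plus an anatomy embedding to the weights
    of a 3x3 convolution applied to shared imaging features, mirroring
    the dual-level (scan + anatomy) conditioning described above.
    """

    def __init__(self, prompt_dim: int = 128, channels: int = 64):
        super().__init__()
        self.channels = channels
        # Linear layer that emits all weights of one 3x3 conv.
        self.weight_gen = nn.Linear(prompt_dim, channels * channels * 3 * 3)

    def forward(self, feats, scan_code, anat_code):
        # feats: (1, C, H, W); scan_code / anat_code: (1, prompt_dim // 2).
        # Batch size 1 is assumed, since the generated weights are
        # specific to one scan / client at a time.
        prompt = torch.cat([scan_code, anat_code], dim=1)
        w = self.weight_gen(prompt).view(self.channels, self.channels, 3, 3)
        # Convolve shared features with the freshly generated weights.
        return F.conv2d(feats, w, padding=1)
```

A client-specific decoder could then project the personalized features back into the image domain, as the framework describes.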
📝 Abstract
Reducing radiation dose benefits patients; however, the resulting low-dose computed tomography (LDCT) images often suffer from clinically unacceptable noise and artifacts. While deep learning (DL) shows promise in LDCT reconstruction, it requires large-scale data collection from multiple clients, raising privacy concerns. Federated learning (FL) has been introduced to address these concerns; however, current methods are typically tailored to specific scanning protocols, which limits their generalizability and makes them less effective on unseen protocols. To address these issues, we propose SCAN-PhysFed, a novel SCanning- and ANatomy-level personalized Physics-Driven Federated learning paradigm for LDCT reconstruction. Because the noise distribution in LDCT data is closely tied to both the scanning protocol and the anatomical structures being scanned, we design a dual-level physics-informed approach to these challenges. Specifically, we incorporate physical and anatomical prompts into physics-informed hypernetworks to capture scanning- and anatomy-specific information, enabling dual-level physics-driven personalization of imaging features. These prompts are derived from the scanning protocol and from a radiology report generated by a medical large language model (MLLM), respectively. Client-specific decoders then project the dual-level personalized imaging features back into the image domain. In addition, to handle unseen data, we introduce a novel protocol vector-quantization strategy (PVQS), which ensures consistent performance on new clients by quantizing an unseen scanning code to one of the codes in the scanning codebook. Extensive experiments demonstrate the superior performance of SCAN-PhysFed on public datasets.
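As a rough illustration of the PVQS step described above, the sketch below snaps an unseen protocol embedding to its nearest entry in a learned scanning codebook, VQ-VAE style, with a straight-through gradient. The names, dimensions, and straight-through estimator are assumptions for illustration, not the paper's implementation.

```python
import torch
import torch.nn as nn

class ProtocolCodebook(nn.Module):
    """Sketch of a protocol vector-quantization step (PVQS-style).

    Hypothetical reading: each scanning protocol seen during training
    owns a learned code vector; an unseen protocol embedding is snapped
    to its nearest codebook entry, so downstream modules only ever see
    codes from the scanning codebook.
    """

    def __init__(self, num_protocols: int = 8, code_dim: int = 64):
        super().__init__()
        # One learned code per scanning protocol seen during training.
        self.codes = nn.Parameter(torch.randn(num_protocols, code_dim))

    def forward(self, protocol_embedding: torch.Tensor) -> torch.Tensor:
        # protocol_embedding: (batch, code_dim), possibly from an
        # unseen client. Compute L2 distance to every stored code.
        dists = torch.cdist(protocol_embedding, self.codes)  # (batch, num_protocols)
        nearest = dists.argmin(dim=1)                        # index of closest code
        quantized = self.codes[nearest]                      # (batch, code_dim)
        # Straight-through estimator so gradients still reach the
        # protocol encoder, as is standard in VQ-style layers
        # (an assumption here, not confirmed by the abstract).
        return protocol_embedding + (quantized - protocol_embedding).detach()
```

Under this reading, quantizing to the nearest known code is what lets a client with a new acquisition protocol reuse the personalization learned for the closest training-time protocol.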