🤖 AI Summary
This work addresses the challenge of modeling highly heterogeneous pulmonary and extrapulmonary diseases in non-contrast chest CT scans, where conventional hard parameter-sharing multitask learning approaches fall short. To overcome this, the authors propose a dynamic multitask learning framework that integrates hypernetworks with low-rank adaptation (LoRA). The approach introduces, for the first time, a low-rank hypernetwork into chest CT analysis: it dynamically generates task-specific parameters that flexibly modulate a Vision Transformer backbone, enabling efficient, unified joint modeling across multiple diseases. Extensive experiments on large-scale radiology and cardiology datasets show that the proposed method outperforms strong baselines in both accuracy and generalization while remaining computationally efficient.
📝 Abstract
Non-contrast chest CTs offer a rich opportunity for both conventional pulmonary and opportunistic extra-pulmonary screening. While Multi-Task Learning (MTL) can unify these diverse tasks, standard hard-parameter-sharing approaches are often suboptimal for modeling distinct pathologies. We propose HyperCT, a framework that dynamically adapts a Vision Transformer backbone via a hypernetwork. To ensure computational efficiency, we integrate Low-Rank Adaptation (LoRA), allowing the model to regress task-specific low-rank weight updates rather than full parameters. Validated on a large-scale dataset of radiological and cardiological tasks, HyperCT outperforms various strong baselines, offering a unified, parameter-efficient solution for holistic patient assessment. Our code is available at https://github.com/lfb-1/HyperCT.