🤖 AI Summary
Existing unified ultrasound foundation models often suffer performance degradation in multi-task joint training due to suboptimal task aggregation strategies, a problem exacerbated in limited-data regimes. This work systematically investigates the feasibility of jointly learning heterogeneous tasks in ultrasound imaging and reveals, for the first time, that the efficacy of task aggregation critically depends on the interplay between training data scale and task type. Building on the DINOv3 architecture, we propose M2DINO, a unified framework incorporating a task-conditioned Mixture-of-Experts module that supports 27 diverse ultrasound tasks spanning segmentation, classification, detection, and regression. Empirical results demonstrate that training all tasks jointly yields more stable performance than clinically motivated groupings, with segmentation tasks exhibiting the highest susceptibility to negative transfer, while classification and regression tasks show greater robustness.
📝 Abstract
Foundation models promise to unify multiple clinical tasks within a single framework, but recent ultrasound studies report that unified models can underperform task-specific baselines. We hypothesize that this degradation arises not from model capacity limitations, but from task aggregation strategies that ignore interactions between task heterogeneity and available training data scale. In this work, we systematically analyze when heterogeneous ultrasound tasks can be jointly learned without performance loss, establishing practical criteria for task aggregation in unified clinical imaging models. We introduce M2DINO, a multi-organ, multi-task framework built on DINOv3 with task-conditioned Mixture-of-Experts blocks for adaptive capacity allocation. We evaluate 27 ultrasound tasks spanning segmentation, classification, detection, and regression under three paradigms: task-specific, clinically grouped, and all-task unified training. Our results show that aggregation effectiveness depends strongly on training data scale. While clinically grouped training can improve performance in data-rich settings, it may induce substantial negative transfer in low-data settings. In contrast, all-task unified training exhibits more consistent performance across clinical groups. We further observe that task sensitivity varies by task type: segmentation shows the largest performance drops compared with regression and classification. These findings provide practical guidance for ultrasound foundation models, emphasizing that aggregation strategies should jointly consider training data availability and task characteristics rather than relying on clinical taxonomy alone.
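To make the core architectural idea concrete, here is a minimal sketch of task-conditioned Mixture-of-Experts routing: a per-task gating distribution decides how expert outputs are mixed for a given input. This is an illustrative toy in pure Python, not the paper's implementation; the class name, the fixed gating logits, and the scalar "experts" are all hypothetical simplifications (real experts would be learned sub-networks inside DINOv3 blocks).

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of logits."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

class TaskConditionedMoE:
    """Toy task-conditioned Mixture-of-Experts layer (illustrative only).

    Each "expert" is a scalar scaling of the input features; the gate
    weights over experts are conditioned on the task id via fixed
    per-task logits (stand-ins for learned task embeddings).
    """
    def __init__(self, num_experts, num_tasks):
        # Hypothetical parameters: per-task gating logits and per-expert scales.
        self.task_gate_logits = [
            [(t + 1) * (e + 1) * 0.1 for e in range(num_experts)]
            for t in range(num_tasks)
        ]
        self.expert_scales = [0.5 + e for e in range(num_experts)]

    def forward(self, features, task_id):
        # Task-conditioned routing: each task gets its own expert mixture.
        gate = softmax(self.task_gate_logits[task_id])
        mix = sum(g * s for g, s in zip(gate, self.expert_scales))
        # Apply the gated mixture of expert transforms to every feature.
        return [mix * f for f in features]

moe = TaskConditionedMoE(num_experts=4, num_tasks=3)
seg_out = moe.forward([1.0, 2.0], task_id=0)   # e.g. a segmentation task
cls_out = moe.forward([1.0, 2.0], task_id=1)   # e.g. a classification task
```

Different task ids produce different gating distributions, so the same shared backbone features are transformed differently per task; this is the mechanism by which capacity is allocated adaptively across heterogeneous tasks.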