AI Summary
This work addresses a key performance bottleneck in multiple instance learning (MIL): the reliance on a single linear transformation to map generic patch features to task-specific ones. To overcome this limitation, the authors propose the MAMMOTH module, which introduces, for the first time, a phenotype-aware low-rank mixture-of-experts mechanism that dynamically generates lightweight, task-specific transformations for each image patch. By combining low-rank matrix decomposition with dynamic gating, MAMMOTH enables fine-grained feature modulation with negligible parameter overhead and can be seamlessly plugged into any existing MIL architecture. Extensive evaluation across 8 MIL methods and 19 classification tasks (152 configurations in total) demonstrates consistent improvements: 130 configurations achieve higher performance, with an average accuracy gain of 3.8%. Notably, even simple aggregation strategies such as mean pooling surpass current state-of-the-art approaches when augmented with MAMMOTH.
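As a rough illustration of the mechanism described above, the PyTorch sketch below implements a per-patch low-rank mixture-of-experts transformation with a gating network. The class name `LowRankMoE`, the default number of experts, and the rank are illustrative assumptions rather than the authors' released implementation (which is also described as multi-head; this sketch shows a single-head view).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LowRankMoE(nn.Module):
    """Illustrative low-rank mixture-of-experts patch transformation (not the official code).

    Each expert is a rank-r factorized linear map (d -> r -> d); a gating
    network predicts per-patch mixture weights from the patch feature itself,
    so the effective transformation is tailored to each patch.
    """

    def __init__(self, dim: int, num_experts: int = 4, rank: int = 16):
        super().__init__()
        # Low-rank factors: expert e maps x -> U_e (V_e x)
        self.V = nn.Parameter(torch.randn(num_experts, rank, dim) * 0.02)
        self.U = nn.Parameter(torch.randn(num_experts, dim, rank) * 0.02)
        self.gate = nn.Linear(dim, num_experts)  # per-patch gating scores

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_patches, dim) generic patch features from a frozen encoder
        w = F.softmax(self.gate(x), dim=-1)            # (N, E) mixture weights
        h = torch.einsum("erd,nd->ner", self.V, x)     # (N, E, r) down-project per expert
        y = torch.einsum("edr,ner->ned", self.U, h)    # (N, E, d) up-project per expert
        y = (w.unsqueeze(-1) * y).sum(dim=1)           # mix experts per patch
        return x + y                                   # residual task-specific features
```

Under these illustrative sizes (dim 512, 4 experts, rank 16), the added parameters (two rank-16 factors per expert plus the gate) come to roughly 68K, versus roughly 262K for a full 512x512 linear layer, which is the sense in which the low-rank factorization keeps the overhead small.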
Abstract
Multiple Instance Learning (MIL) is the predominant framework for classifying gigapixel whole-slide images in computational pathology. MIL follows a sequence of 1) extracting patch features, 2) applying a linear layer to obtain task-specific patch features, and 3) aggregating the patches into a slide feature for classification. While substantial efforts have been devoted to optimizing patch feature extraction and aggregation, none have yet addressed the second step: the critical layer that transforms general-purpose features into task-specific ones. We hypothesize that this layer constitutes an overlooked performance bottleneck and that stronger representations can be achieved with a low-rank transformation tailored to each patch's phenotype, yielding synergistic effects with any of the existing MIL approaches. To this end, we introduce MAMMOTH, a parameter-efficient, multi-head mixture-of-experts module designed to improve the performance of any MIL model with minimal change to the total number of parameters. Across eight MIL methods and 19 different classification tasks, we find that this task-specific transformation has a larger effect on performance than the choice of aggregation method. For instance, when equipped with MAMMOTH, even simple methods such as max or mean pooling attain higher average performance than any method with the standard linear layer. Overall, MAMMOTH improves performance in 130 of the 152 examined configurations, with an average $+3.8\%$ change in performance. Code is available at https://github.com/mahmoodlab/mammoth.
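To make the three-step pipeline concrete, the minimal sketch below shows a mean-pooling MIL model in which the per-patch transformation (step 2) is a swappable module: the standard setup passes an `nn.Linear`, while a MAMMOTH-style module such as the `LowRankMoE` sketch above could be passed in its place. `MeanPoolMIL` and its arguments are hypothetical names for illustration, not the released code.

```python
import torch
import torch.nn as nn

class MeanPoolMIL(nn.Module):
    """Mean-pooling MIL baseline with a swappable per-patch transformation (illustrative only).

    patch_transform is the layer that turns generic encoder features into
    task-specific ones: nn.Linear in the standard setup, or a module such as
    the LowRankMoE sketch above as a drop-in replacement.
    """

    def __init__(self, dim: int, num_classes: int, patch_transform: nn.Module):
        super().__init__()
        self.patch_transform = patch_transform
        self.classifier = nn.Linear(dim, num_classes)

    def forward(self, patches: torch.Tensor) -> torch.Tensor:
        # patches: (num_patches, dim) features of one slide
        h = self.patch_transform(patches)   # step 2: task-specific patch features
        slide = h.mean(dim=0)               # step 3: aggregate (here: mean pooling)
        return self.classifier(slide)       # slide-level prediction

# Standard baseline; the MoE variant would swap in the sketch above, e.g.:
baseline = MeanPoolMIL(dim=512, num_classes=2, patch_transform=nn.Linear(512, 512))
# upgraded = MeanPoolMIL(dim=512, num_classes=2, patch_transform=LowRankMoE(dim=512))
```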