Sparse Mixture-of-Experts for Multi-Channel Imaging: Are All Channel Interactions Required?

📅 2025-11-21
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
To address the high FLOPs and training cost of redundant cross-channel attention in multi-channel images (e.g., cellular staining, satellite imagery), this paper proposes MoE-ViT, a vision Transformer framework built on sparse Mixture-of-Experts (MoE). It treats *channels* as experts and employs a lightweight routing network to dynamically select the most relevant subset of channels for each image patch, enabling channel-wise sparse interaction. This design challenges the implicit assumption that all channel pairs must be fully connected, substantially mitigating the quadratic growth of attention complexity while preserving, and in some cases improving, accuracy. Proof-of-concept experiments on JUMP-CP and So2Sat demonstrate that MoE-ViT significantly reduces FLOPs and training overhead without sacrificing performance, making it a practical and efficient backbone architecture for multi-channel imaging.

πŸ“ Abstract
Vision Transformers (ViTs) have become the backbone of vision foundation models, yet their optimization for multi-channel domains, such as cell painting or satellite imagery, remains underexplored. A key challenge in these domains is capturing interactions between channels, as each channel carries different information. While existing works have shown efficacy by treating each channel independently during tokenization, this approach naturally introduces a major computational bottleneck in the attention block: channel-wise comparisons lead to quadratic growth in attention, resulting in excessive FLOPs and high training cost. In this work, we shift focus from efficacy to the overlooked efficiency challenge in cross-channel attention and ask: "Is it necessary to model all channel interactions?" Inspired by the philosophy of Sparse Mixture-of-Experts (MoE), we propose MoE-ViT, a Mixture-of-Experts architecture for multi-channel images in ViTs, which treats each channel as an expert and employs a lightweight router to select only the most relevant experts per patch for attention. Proof-of-concept experiments on real-world datasets, JUMP-CP and So2Sat, demonstrate that MoE-ViT achieves substantial efficiency gains without sacrificing, and in some cases enhancing, performance, making it a practical and attractive backbone for multi-channel imaging.
Problem

Research questions and friction points this paper is trying to address.

Optimizing Vision Transformers for multi-channel imaging domains
Addressing computational bottlenecks in cross-channel attention mechanisms
Reducing excessive FLOPs and training costs in channel interactions
Innovation

Methods, ideas, or system contributions that make the work stand out.

MoE-ViT treats each channel as an expert
A lightweight router selects relevant experts per patch
It reduces computational cost while maintaining performance
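The channel-as-expert routing described above can be sketched as follows. This is a minimal PyTorch illustration, assuming a single linear gate and top-k selection; the class and parameter names (`ChannelExpertRouter`, `gate`, `k`) are hypothetical and this is not the authors' released implementation.

```python
import torch
import torch.nn as nn


class ChannelExpertRouter(nn.Module):
    """Sketch: score each channel token of a patch and keep only the
    top-k channels ("experts") for the subsequent attention block."""

    def __init__(self, dim: int, k: int):
        super().__init__()
        self.k = k
        # Lightweight router: one relevance score per channel token.
        self.gate = nn.Linear(dim, 1)

    def forward(self, tokens: torch.Tensor):
        # tokens: (batch, num_channels, dim), one token per channel for a patch.
        scores = self.gate(tokens).squeeze(-1)             # (batch, num_channels)
        top_scores, top_idx = scores.topk(self.k, dim=-1)  # keep k most relevant channels
        weights = torch.softmax(top_scores, dim=-1)        # renormalize over the selection
        idx = top_idx.unsqueeze(-1).expand(-1, -1, tokens.size(-1))
        selected = tokens.gather(1, idx)                   # (batch, k, dim)
        # Attention now runs over k channels instead of all num_channels,
        # shrinking the quadratic cost of channel-pair comparisons.
        return selected, weights, top_idx
```

Because the gate is a single linear layer, routing adds negligible overhead relative to the attention it prunes.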
Sukwon Yun
PhD Student, UNC Chapel Hill
Machine Learning, Computational Biology

Heming Yao
Genentech

Burkhard Hoeckendorf
Biology Research - AI Development (BRAID), Genentech

David Richmond
AI and Machine Learning Scientist
Computer vision for biomedical images

Aviv Regev
Research and Early Development (gRED), Genentech

Russell Littman
Research and Early Development (gRED), Genentech