Group Contrastive Learning for Weakly Paired Multimodal Data

📅 2026-02-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of representation learning under weakly paired multimodal data—where modalities share only perturbation-level labels without sample-wise correspondence—by proposing GROOVE, a novel framework that constructs a cross-modally entangled yet group-consistent shared latent space. GROOVE introduces GroupCLIP, a pioneering group-wise contrastive loss, and an online back-translation autoencoder to align modalities effectively. The study fills a critical gap in contrastive learning for weakly paired settings, proposes a compositional evaluation framework for systematic assessment of multimodal alignment, and integrates an optimal transport-based aligner to enhance cross-modal consistency. Evaluated on both simulated and real single-cell perturbation datasets, GROOVE matches or outperforms existing methods in cross-modal matching and imputation tasks, with ablation studies confirming GroupCLIP as a key driver of performance gains.
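The summary mentions an optimal transport-based aligner for cross-modal consistency, but does not specify it. As a hedged illustration only (function and parameter names are ours, not the paper's), a minimal entropic-OT (Sinkhorn) matcher between two modalities' embeddings could look like this:

```python
import numpy as np

def sinkhorn_plan(cost, reg=0.05, n_iters=500):
    """Entropic optimal transport between two uniform empirical measures.

    cost[i, j] is a dissimilarity between sample i of modality A and
    sample j of modality B; the returned plan P[i, j] is a soft matching
    weight. Illustrative stand-in for an OT-based cross-modal aligner,
    not the paper's implementation.
    """
    n, m = cost.shape
    a = np.full(n, 1.0 / n)          # uniform marginal over modality A
    b = np.full(m, 1.0 / m)          # uniform marginal over modality B
    K = np.exp(-cost / reg)          # Gibbs kernel from the cost matrix
    v = np.ones(m)
    for _ in range(n_iters):         # alternating marginal projections
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]
```

Rows of the plan sum to the modality-A marginal and columns to the modality-B marginal; hard sample-level matches can be read off as the argmax of each row.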

📝 Abstract
We present GROOVE, a semi-supervised multi-modal representation learning approach for high-content perturbation data where samples across modalities are weakly paired through shared perturbation labels but lack direct correspondence. Our primary contribution is GroupCLIP, a novel group-level contrastive loss that bridges the gap between CLIP for paired cross-modal data and SupCon for uni-modal supervised contrastive learning, addressing a fundamental gap in contrastive learning for weakly-paired settings. We integrate GroupCLIP with an on-the-fly backtranslating autoencoder framework to encourage cross-modally entangled representations while maintaining group-level coherence within a shared latent space. Critically, we introduce a comprehensive combinatorial evaluation framework that systematically assesses representation learners across multiple optimal transport aligners, addressing key limitations in existing evaluation strategies. This framework includes novel simulations that systematically vary shared versus modality-specific perturbation effects, enabling principled assessment of method robustness. Our combinatorial benchmarking reveals that there is not yet an aligner that uniformly dominates across settings or modality pairs. Across simulations and two real single-cell genetic perturbation datasets, GROOVE performs on par with or outperforms existing approaches for downstream cross-modal matching and imputation tasks. Our ablation studies demonstrate that GroupCLIP is the key component driving performance gains. These results highlight the importance of leveraging group-level constraints for effective multi-modal representation learning in scenarios where only weak pairing is available.
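The paper's exact GroupCLIP objective is not reproduced on this page. The abstract's description (a group-level bridge between CLIP and SupCon, where positives are cross-modal samples sharing a perturbation label) suggests a loss of roughly this shape, sketched in NumPy with illustrative names:

```python
import numpy as np

def group_contrastive_loss(z_a, z_b, labels_a, labels_b, tau=0.1):
    """Sketch of a group-level cross-modal contrastive loss.

    For each anchor embedding in modality A, positives are ALL modality-B
    embeddings with the same perturbation label (SupCon-style positive
    sets, applied across modalities as in CLIP); all other B samples act
    as negatives. Assumes every label in labels_a also occurs in labels_b.
    This is an illustration, not the paper's exact GroupCLIP objective.
    """
    # L2-normalize so dot products are cosine similarities
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    sim = (z_a @ z_b.T) / tau                     # (n_a, n_b) logits

    # Numerically stable row-wise log-softmax over modality-B candidates
    m = sim.max(axis=1, keepdims=True)
    log_prob = sim - (m + np.log(np.exp(sim - m).sum(axis=1, keepdims=True)))

    # Positive mask: cross-modal pairs sharing a perturbation label
    pos_mask = labels_a[:, None] == labels_b[None, :]

    # Average negative log-probability over each anchor's positive set
    per_anchor = -(log_prob * pos_mask).sum(axis=1) / pos_mask.sum(axis=1)
    return per_anchor.mean()
```

With label-consistent embeddings the loss is small, and it grows when the group structure across modalities is broken, which is the behavior a group-level contrastive objective should exhibit.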
Problem

Research questions and friction points this paper is trying to address.

weakly paired multimodal data
contrastive learning
representation learning
cross-modal alignment
perturbation data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Group Contrastive Learning
Weakly Paired Multimodal Data
GroupCLIP
Cross-modal Representation Learning
Combinatorial Evaluation Framework
Authors

Aditya Gorla
UCLA
Machine Learning, AI/ML in Healthcare, Computational Biology, Statistical Genomics
Hugues van Assel
Research and Early Development (gRED), Genentech; Biology Research | AI Development (BRAID), Genentech
Jan-Christian Huetter
Biology Research | AI Development (BRAID), Genentech
Heming Yao
Genentech
Kyunghyun Cho
New York University, Genentech
Machine Learning, Deep Learning
Aviv Regev
Research and Early Development (gRED), Genentech
Russell Littman
Research and Early Development (gRED), Genentech