🤖 AI Summary
Predicting differential expression, the direction of expression change, and gene set enrichment for unseen perturbations remains an open challenge in computational biology. This paper introduces PerturbQA, a benchmark that frames these perturbation-modeling problems as structured reasoning tasks for large language models (LLMs), rather than interrogating existing knowledge as prior benchmarks do. Evaluations of state-of-the-art machine learning and statistical perturbation models, together with standard LLM reasoning strategies, show that current methods perform poorly on PerturbQA. As a proof of feasibility, the authors propose Summer (SUMMarize, retrievE, and answeR), a simple, domain-informed LLM framework that summarizes relevant biological knowledge, retrieves supporting context, and answers the benchmark questions, matching or exceeding the current state of the art across all three tasks. Code and data are publicly released.
📝 Abstract
High-content perturbation experiments allow scientists to probe biomolecular systems at unprecedented resolution, but experimental and analysis costs pose significant barriers to widespread adoption. Machine learning has the potential to guide efficient exploration of the perturbation space and extract novel insights from these data. However, current approaches neglect the semantic richness of the relevant biology, and their objectives are misaligned with downstream biological analyses. In this paper, we hypothesize that large language models (LLMs) present a natural medium for representing complex biological relationships and rationalizing experimental outcomes. We propose PerturbQA, a benchmark for structured reasoning over perturbation experiments. Unlike current benchmarks that primarily interrogate existing knowledge, PerturbQA is inspired by open problems in perturbation modeling: prediction of differential expression and change of direction for unseen perturbations, and gene set enrichment. We evaluate state-of-the-art machine learning and statistical approaches for modeling perturbations, as well as standard LLM reasoning strategies, and we find that current methods perform poorly on PerturbQA. As a proof of feasibility, we introduce Summer (SUMMarize, retrievE, and answeR), a simple, domain-informed LLM framework that matches or exceeds the current state-of-the-art. Our code and data are publicly available at https://github.com/genentech/PerturbQA.
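As background for one of the benchmark tasks, gene set enrichment is conventionally scored with a one-sided hypergeometric test: given a list of differentially expressed genes, how surprising is its overlap with a pathway's gene set? The sketch below is a generic, stdlib-only illustration of that statistic with made-up toy numbers; it does not reflect PerturbQA's actual data format or evaluation code.

```python
from math import comb

def enrichment_pvalue(n_universe: int, n_set: int, n_hits: int, n_overlap: int) -> float:
    """One-sided hypergeometric p-value: the probability of seeing at
    least n_overlap pathway genes among n_hits differentially expressed
    genes, drawn from a universe of n_universe genes of which n_set
    belong to the pathway."""
    total = comb(n_universe, n_hits)
    # Sum the upper tail: exactly k pathway genes in the hit list.
    tail = sum(
        comb(n_set, k) * comb(n_universe - n_set, n_hits - k)
        for k in range(n_overlap, min(n_set, n_hits) + 1)
    )
    return tail / total

# Toy numbers: 20,000-gene universe, a 100-gene pathway, 500 DE genes,
# 12 of which fall in the pathway (expected overlap is only 2.5).
p = enrichment_pvalue(20_000, 100, 500, 12)
```

With the toy numbers above, the overlap far exceeds its expectation, so the p-value is very small and the pathway would be called enriched.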