🤖 AI Summary
Existing language models struggle to quantitatively and faithfully compare multiple candidate answers in terms of their supporting evidence from the scientific literature. This capability is critical in high-stakes applications such as drug target identification.
Method: We propose R2E (Retrieve to Explain), a retrieval-based, explainable prediction framework for this task. R2E masks each candidate answer and represents it solely through evidence retrieved from a document corpus, enabling Shapley value–driven attribution of answer scores to individual pieces of evidence at inference time. The architecture also incorporates new evidence without retraining, including non-textual data modalities templated into natural language.
Results: When predicting whether drug targets will subsequently prove efficacious in clinical trials, R2E matches the performance of non-explainable literature-based models and surpasses a genetics-based target identification approach widely used in the pharmaceutical industry. Crucially, it achieves evidence-level interpretability without sacrificing accuracy, supporting evidence-based scientific decision-making.
📝 Abstract
Language models hold incredible promise for enabling scientific discovery by synthesizing massive research corpora. Many complex scientific research questions have multiple plausible answers, each supported by evidence of varying strength. However, existing language models lack the capability to quantitatively and faithfully compare answer plausibility in terms of supporting evidence. To address this, we introduce Retrieve to Explain (R2E), a retrieval-based model that scores and ranks all possible answers to a research question based on evidence retrieved from a document corpus. The architecture represents each answer only in terms of its supporting evidence, with the answer itself masked. This allows us to extend feature attribution methods, such as Shapley values, to transparently attribute answer scores to supporting evidence at inference time. The architecture also allows incorporation of new evidence without retraining, including non-textual data modalities templated into natural language. We developed R2E for the challenging scientific discovery task of drug target identification, a human-in-the-loop process where failures are extremely costly and explainability is paramount. When predicting whether drug targets will subsequently be confirmed as efficacious in clinical trials, R2E not only matches non-explainable literature-based models but also surpasses a genetics-based target identification approach used throughout the pharmaceutical industry.
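To make the attribution idea concrete, the following is a minimal sketch of exact Shapley value computation over a set of evidence items. The scoring function and evidence weights here are hypothetical stand-ins, not R2E's actual learned scorer; the sketch only illustrates how an answer's score can be decomposed into per-evidence contributions.

```python
from itertools import combinations
from math import factorial

def shapley_values(evidence, score_fn):
    """Exact Shapley attribution: each evidence item's weighted average
    marginal contribution to the answer score over all subsets."""
    n = len(evidence)
    values = {}
    for i, item in enumerate(evidence):
        others = [x for j, x in enumerate(evidence) if j != i]
        phi = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                # Weight of a subset of size k in the Shapley formula.
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi += weight * (score_fn(set(subset) | {item}) - score_fn(set(subset)))
        values[item] = phi
    return values

# Hypothetical per-evidence relevance weights; an additive toy score
# stands in for the model's score of a masked answer given its evidence.
weights = {"genetic_association": 0.5, "pathway_evidence": 0.3, "expression_data": 0.2}
score = lambda subset: sum(weights[e] for e in subset)

attributions = shapley_values(list(weights), score)
```

For an additive score like this toy one, each item's Shapley value equals its weight, and the attributions sum to the full score; a real scorer with evidence interactions would redistribute credit accordingly. Exact computation enumerates all 2^n subsets, so practical systems approximate it by sampling when n is large.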