Retrieve to Explain: Evidence-driven Predictions for Explainable Drug Target Identification

📅 2024-02-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing language models struggle to perform quantitative, faithful comparative reasoning over multiple candidate answers grounded in scientific literature evidence, a capability that is particularly critical in high-stakes applications such as drug target identification.

Method: The authors propose R2E, a retrieval-driven, explainable prediction framework tailored to this task. R2E introduces a novel "answer masking + evidence representation" architecture that enables Shapley value–driven, evidence-level attribution, and it integrates new multimodal evidence without retraining. The method combines retrieval over a document corpus, evidence-centric feature modeling, and natural language–templated multimodal fusion.

Results: Experiments show that R2E matches the clinical efficacy prediction performance of non-explainable literature-based models while outperforming a genetics-based approach widely used in the pharmaceutical industry. Crucially, it achieves evidence-level interpretability without sacrificing accuracy, supporting evidence-based scientific decision-making.
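The summary's "Shapley value–driven, evidence-level attribution" can be illustrated with a toy example. The `score` function below is a stand-in for R2E's trained evidence-aggregation model, and the evidence labels, weights, and interaction bonus are invented for illustration; only the exact Shapley computation itself is standard.

```python
from itertools import combinations
from math import factorial

def score(evidence_subset):
    # Hypothetical stand-in for the trained evidence-aggregation model:
    # additive relevance weights plus one pairwise interaction.
    weights = {"genetics": 0.6, "pathway": 0.3, "trial": 0.9}
    s = sum(weights[e] for e in evidence_subset)
    if "genetics" in evidence_subset and "trial" in evidence_subset:
        s += 0.2  # bonus when genetic and clinical evidence co-occur
    return s

def shapley_values(evidence):
    # Exact Shapley values by enumerating all evidence subsets;
    # feasible here because the retrieved evidence set is small.
    n = len(evidence)
    phi = {e: 0.0 for e in evidence}
    for e in evidence:
        others = [x for x in evidence if x != e]
        for k in range(n):
            for subset in combinations(others, k):
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[e] += w * (score(set(subset) | {e}) - score(set(subset)))
    return phi

print(shapley_values(["genetics", "pathway", "trial"]))
```

By the efficiency property, the attributions sum to the full-evidence score, so each retrieved evidence item receives a share of the answer's score that can be shown to a human reviewer.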

📝 Abstract
Language models hold incredible promise for enabling scientific discovery by synthesizing massive research corpora. Many complex scientific research questions have multiple plausible answers, each supported by evidence of varying strength. However, existing language models lack the capability to quantitatively and faithfully compare answer plausibility in terms of supporting evidence. To address this, we introduce Retrieve to Explain (R2E), a retrieval-based model that scores and ranks all possible answers to a research question based on evidence retrieved from a document corpus. The architecture represents each answer only in terms of its supporting evidence, with the answer itself masked. This allows us to extend feature attribution methods, such as Shapley values, to transparently attribute answer scores to supporting evidence at inference time. The architecture also allows incorporation of new evidence without retraining, including non-textual data modalities templated into natural language. We developed R2E for the challenging scientific discovery task of drug target identification, a human-in-the-loop process where failures are extremely costly and explainability is paramount. When predicting whether drug targets will subsequently be confirmed as efficacious in clinical trials, R2E not only matches non-explainable literature-based models but also surpasses a genetics-based target identification approach used throughout the pharmaceutical industry.
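The abstract mentions "non-textual data modalities templated into natural language": structured evidence can be rendered as sentences so it enters the same retrieval index as literature. The field names and template wording below are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch: render a structured genetic-association record as a
# natural-language sentence. The schema (gene, disease, pvalue) and the
# template text are hypothetical examples of this idea.

def template_genetic_association(record):
    return (f"A genome-wide association study linked variants in {record['gene']} "
            f"to {record['disease']} with p-value {record['pvalue']:.1e}.")

rec = {"gene": "PCSK9", "disease": "hypercholesterolemia", "pvalue": 3e-12}
print(template_genetic_association(rec))
```

Because the templated sentence is ordinary text, the same evidence encoder and attribution machinery apply to it unchanged, which is what lets new modalities be added without retraining.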
Problem

Research questions and friction points this paper is trying to address.

Quantitatively comparing answer plausibility using supporting evidence
Enhancing explainability in drug target identification predictions
Incorporating new evidence without retraining the model
Innovation

Methods, ideas, or system contributions that make the work stand out.

Retrieval-based model ranks answers by evidence
Masks answers to attribute scores transparently
Incorporates new evidence without retraining
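The three innovations above can be sketched end to end: retrieve evidence sentences mentioning each candidate answer, mask the candidate so each answer is represented only by its evidence context, score and aggregate, then rank. The toy corpus, query, and word-overlap scorer below are invented stand-ins for R2E's corpus and trained relevance model.

```python
# Hypothetical sketch of masked-evidence answer ranking, assuming a toy
# corpus and a bag-of-words scorer in place of the trained model.
from collections import Counter

CORPUS = [
    "IL6 signalling drives inflammation in rheumatoid arthritis",
    "Inhibition of IL6 reduced joint inflammation in trials",
    "TNF blockade is an established arthritis therapy",
]

def mask(sentence, answer):
    # Represent evidence only via its context: the candidate answer is masked.
    return sentence.replace(answer, "[MASK]")

def relevance(query, evidence):
    # Toy scorer: word overlap between the query and the masked evidence.
    q, e = Counter(query.lower().split()), Counter(evidence.lower().split())
    return sum((q & e).values())

def rank_candidates(query, candidates):
    scores = {}
    for c in candidates:
        evidence = [mask(s, c) for s in CORPUS if c in s]
        scores[c] = sum(relevance(query, ev) for ev in evidence)
    return sorted(scores.items(), key=lambda kv: -kv[1])

print(rank_candidates("which target reduces inflammation in arthritis", ["IL6", "TNF"]))
```

Masking means a candidate's score depends only on what its evidence says, not on the answer token itself, and adding a new sentence to `CORPUS` changes the ranking with no retraining step.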
👥 Authors
Ravi Patel, Angus Brayne, Rogier G. Hintzen, Daniel Jaroslawicz, Georgiana Neculae, Dane S. Corneil
BenevolentAI, London, United Kingdom