Budget-Sensitive Discovery Scoring: A Formally Verified Framework for Evaluating AI-Guided Scientific Selection

📅 2026-03-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of fairly evaluating AI-driven candidate selection strategies for scientific discovery under budget constraints and asymmetric error costs. To this end, the authors propose two metrics: the Budget-Sensitive Discovery Score (BSDS), which jointly penalizes false discovery rate and excessive abstention at each budget level, and its budget-averaged form, the Discovery Quality Score (DQS), which prevents a strategy from inflating its score at a single favorable budget. The evaluation framework combines Lean 4 formal verification, false discovery rate control, coverage-gap analysis, bootstrap resampling, and SMILES molecular representations. Systematic experiments on MoleculeNet benchmarks show that a conventional random forest baseline outperforms all tested large language model (LLM) configurations on the HIV and Tox21 tasks. The proposer ranking generalizes across five benchmark categories, a non-pharmaceutical safety domain, and a wide range of penalty-parameter settings.

📝 Abstract
Scientific discovery increasingly relies on AI systems to select candidates for expensive experimental validation, yet no principled, budget-aware evaluation framework exists for comparing selection strategies -- a gap intensified by large language models (LLMs), which generate plausible scientific proposals without reliable downstream evaluation. We introduce the Budget-Sensitive Discovery Score (BSDS), a formally verified metric -- 20 theorems machine-checked by the Lean 4 proof assistant -- that jointly penalizes false discoveries (lambda-weighted FDR) and excessive abstention (gamma-weighted coverage gap) at each budget level. Its budget-averaged form, the Discovery Quality Score (DQS), provides a single summary statistic that no proposer can inflate by performing well at a cherry-picked budget. As a case study, we apply BSDS/DQS to a concrete question: do LLMs add marginal value to an existing ML pipeline for drug discovery candidate selection? We evaluate 39 proposers -- 11 mechanistic variants, 14 zero-shot LLM configurations, and 14 few-shot LLM configurations -- using SMILES representations on MoleculeNet HIV (41,127 compounds, 3.5% active, 1,000 bootstrap replicates) under both random and scaffold splits. Three findings emerge. First, the simple RF-based Greedy-ML proposer achieves the best DQS (-0.046), outperforming all MLP variants and LLM configurations. Second, no LLM surpasses the Greedy-ML baseline under zero-shot or few-shot evaluation on HIV or Tox21, establishing that LLMs provide no marginal value over an existing trained classifier. Third, the proposer hierarchy generalizes across five MoleculeNet benchmarks spanning 0.18%-46.2% prevalence, a non-drug AV safety domain, and a 9x7 grid of penalty parameters (tau >= 0.636, mean tau = 0.863). The framework applies to any setting where candidates are selected under budget constraints and asymmetric error costs.
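
To make the scoring concrete, below is a minimal Python sketch of one plausible reading of the abstract's verbal definition: BSDS as a pure penalty score (0 is best) that combines a lambda-weighted FDR over the selected set with a gamma-weighted coverage gap relative to the budget, and DQS as the average of BSDS over a budget grid. The function names, the exact functional form, the default lambda/gamma values, and the toy proposer are illustrative assumptions, not the paper's Lean-verified definitions.

```python
import numpy as np

def bsds(y_true, selected, budget, lam=1.0, gamma=1.0):
    """Budget-Sensitive Discovery Score at one budget level (sketch).

    Penalizes the false discovery rate among selected candidates
    (lambda-weighted) and the fraction of budget left unused by
    abstention (gamma-weighted). 0 is best; more negative is worse.
    """
    n_selected = len(selected)
    if n_selected == 0:
        fdr = 0.0                          # no selections => no false discoveries
    else:
        hits = sum(y_true[i] for i in selected)
        fdr = 1.0 - hits / n_selected      # fraction of selections that are inactive
    coverage_gap = max(0.0, 1.0 - n_selected / budget)
    return -(lam * fdr + gamma * coverage_gap)

def dqs(y_true, select_fn, budgets, lam=1.0, gamma=1.0):
    """Discovery Quality Score (sketch): BSDS averaged over a budget grid,
    so no proposer can inflate the summary at a cherry-picked budget."""
    return float(np.mean([bsds(y_true, select_fn(b), b, lam, gamma)
                          for b in budgets]))

# Hypothetical usage with a stand-in greedy proposer that ranks by score.
rng = np.random.default_rng(0)
y_true = rng.binomial(1, 0.035, size=41_127)        # HIV-like 3.5% prevalence
model_scores = y_true * 0.5 + rng.random(41_127)    # toy classifier scores
ranking = np.argsort(-model_scores)
print(dqs(y_true, lambda b: ranking[:b].tolist(), budgets=[50, 100, 500, 1000]))
```

Under this reading, a proposer that spends its full budget on mostly-active candidates scores near 0, consistent in sign and scale with the reported best DQS of -0.046; the paper's actual formulas should be taken from the verified Lean 4 development.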
Problem

Research questions and friction points this paper is trying to address.

budget-sensitive evaluation
AI-guided scientific discovery
candidate selection
false discovery rate
large language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Budget-Sensitive Discovery Score
Formal Verification
AI-Guided Scientific Discovery
Large Language Models
Discovery Quality Score
Abhinaba Basu
Indian Institute of Information Technology Allahabad (IIITA) and National Institute of Electronics & Information Technology (NIELIT)
Pavan Chakraborty
Indian Institute of Information Technology Allahabad
Artificial Intelligence · Robotics & Instrumentation