AI Summary
To address the high cost of high-throughput experiments in drug discovery, this paper introduces "inference set design", a paradigm that prioritizes reducing the prediction difficulty of the remaining inference set over maximizing model accuracy, by actively selecting the most challenging-to-predict samples for experimental labeling. The method integrates confidence-based active learning, sequential subset selection, uncertainty modeling, and an adaptive stopping mechanism. Its core innovation lies in redefining the objective of active learning from model optimization to explicit control of the difficulty distribution over the inference set, coupled with an explicit, theoretically grounded termination criterion. Experiments on image, molecular, and large-scale real-world biological assay datasets demonstrate up to a 60% reduction in experimental cost while maintaining ≥98% overall prediction accuracy.
Abstract
In drug discovery, highly automated high-throughput laboratories are used to screen a large number of compounds in search of effective drugs. These experiments are expensive, so one might hope to reduce their cost by only experimenting on a subset of the compounds, and predicting the outcomes of the remaining experiments. In this work, we model this scenario as a sequential subset selection problem: we aim to select the smallest set of candidates in order to achieve some desired level of accuracy for the system as a whole. Our key observation is that, if there is heterogeneity in the difficulty of the prediction problem across the input space, selectively obtaining the labels for the hardest examples in the acquisition pool will leave only the relatively easy examples in the inference set, leading to better overall system performance. We call this mechanism inference set design, and propose the use of a confidence-based active learning solution to prune out these challenging examples. Our algorithm includes an explicit stopping criterion that interrupts the acquisition loop when it is sufficiently confident that the system has reached the target performance. Our empirical studies on image and molecular datasets, as well as a real-world large-scale biological assay, show that active learning for inference set design leads to significant reduction in experimental cost while retaining high system performance.
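The acquisition loop described above can be sketched in a few lines. This is a minimal illustration under invented assumptions, not the paper's implementation: the synthetic 1D data, the sigmoid confidence model, the `target_conf` threshold, and the batch size are all made up for the sketch, and for brevity the model is kept fixed rather than retrained after each acquisition round.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 1D inputs with true label 1[x > 0]; points near 0 are the "hard" ones.
X = rng.uniform(-1, 1, size=200)
y = (X > 0).astype(int)

def predict_proba(x):
    # Hypothetical model: confidence grows with distance from the decision boundary.
    return 1.0 / (1.0 + np.exp(-8.0 * x))

labeled = np.zeros(len(X), dtype=bool)  # examples whose labels we have acquired experimentally
target_conf = 0.9                       # assumed confidence threshold for the stopping criterion
batch_size = 20

while True:
    p = predict_proba(X)
    conf = np.maximum(p, 1.0 - p)       # confidence in the predicted label
    inference = ~labeled                # examples we plan to predict rather than measure
    # Stopping criterion: every remaining inference example is predicted confidently enough.
    if not inference.any() or conf[inference].min() >= target_conf:
        break
    # Acquire labels for the hardest (least confident) remaining examples.
    hard = np.where(inference)[0]
    hard = hard[np.argsort(conf[hard])][:batch_size]
    labeled[hard] = True

# System output: measured labels where we experimented, predictions everywhere else.
preds = (predict_proba(X) > 0.5).astype(int)
inference = ~labeled
system_correct = labeled.sum() + (preds[inference] == y[inference]).sum()
system_acc = system_correct / len(X)
```

Because the hard examples near the boundary end up labeled rather than predicted, the system reaches high overall accuracy while experimenting on only a fraction of the pool, which is the mechanism the abstract calls inference set design.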