🤖 AI Summary
In precision medicine, individualized treatment must balance efficacy and safety. The treatment risk of interest here, harm from adverse side effects suffered by patients who do not benefit from the treatment, is in general not identifiable from either randomized-trial or observational data, which renders conventional constrained-learning methods inadequate in this partially identified setting. This paper proposes a certifiable learning framework that controls this treatment risk with finite samples, combining partial identification theory and conservative estimation in a confidence upper-bound driven algorithm grounded in sensitivity models. Theoretically, the method comes with finite-sample guarantees on risk control and rigorous estimation error bounds; empirically, on both simulated and real clinical data, it reduces patients' exposure to ineffective treatments while preserving therapeutic benefit.
📝 Abstract
Learning beneficial treatment allocations for a patient population is an important problem in precision medicine. Many treatments come with adverse side effects that are not commensurable with their potential benefits. Patients who do not receive benefits after such treatments are thereby subjected to unnecessary harm. This is a "treatment risk" that we aim to control when learning beneficial allocations. The constrained learning problem is challenged by the fact that the treatment risk is not in general identifiable using either randomized trial or observational data. We propose a certifiable learning method that controls the treatment risk with finite samples in the partially identified setting. The method is illustrated using both simulated and real data.
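The core idea of controlling a partially identified risk with finite samples can be sketched as follows. This is a minimal illustration, not the paper's algorithm: it assumes we already have, for each sampled patient, an upper bound on their treatment risk (as a sensitivity model would provide), and it certifies a candidate allocation policy only if a one-sided Hoeffding confidence bound on the mean of those upper bounds stays within a risk budget. The function names (`hoeffding_ucb`, `certify_policy`) and the simulated data are hypothetical.

```python
import numpy as np

def hoeffding_ucb(samples, delta):
    """One-sided Hoeffding upper confidence bound on the mean of
    [0, 1]-valued samples; holds with probability at least 1 - delta."""
    n = len(samples)
    return samples.mean() + np.sqrt(np.log(1.0 / delta) / (2.0 * n))

def certify_policy(risk_upper_samples, risk_budget, delta):
    """Accept a candidate policy only if the finite-sample upper
    confidence bound on its (partially identified) treatment risk
    is within the budget. Because we bound an upper bound on the
    risk, the certificate is conservative by construction."""
    return hoeffding_ucb(risk_upper_samples, delta) <= risk_budget

# Hypothetical per-patient upper bounds on treatment risk, e.g.
# produced by a sensitivity model (values in [0, 1]).
rng = np.random.default_rng(0)
risk_upper = rng.uniform(0.0, 0.2, size=500)
print(certify_policy(risk_upper, risk_budget=0.25, delta=0.05))
```

A learning procedure would search over candidate policies and retain only those that pass such a certificate, trading some benefit for a statistically guaranteed cap on the unidentifiable risk.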