Knowledgeable Language Models as Black-Box Optimizers for Personalized Medicine

📅 2025-09-25
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the poor generalization of treatment recommendation systems in personalized medicine—particularly for unseen patient-treatment combinations—this paper proposes LEON, a novel framework that leverages large language models (LLMs) as zero-shot, black-box optimizers. LEON integrates structured (e.g., biomedical knowledge graphs) and unstructured (e.g., medical textbooks) domain priors into an entropy-guided, domain-aware search mechanism. Adopting a prompting-based optimization paradigm (“optimization by prompting”), it enables efficient and interpretable treatment generation under challenging out-of-distribution simulation settings. Extensive experiments on real-world clinical tasks demonstrate that LEON significantly outperforms conventional surrogate models and state-of-the-art LLM-based baselines, achieving superior recommendation efficacy while maintaining strong cross-patient generalizability.

📝 Abstract
The goal of personalized medicine is to discover a treatment regimen that optimizes a patient's clinical outcome based on their personal genetic and environmental factors. However, candidate treatments cannot be arbitrarily administered to the patient to assess their efficacy; we often instead have access to an in silico surrogate model that approximates the true fitness of a proposed treatment. Unfortunately, such surrogate models have been shown to fail to generalize to previously unseen patient-treatment combinations. We hypothesize that domain-specific prior knowledge - such as medical textbooks and biomedical knowledge graphs - can provide a meaningful alternative signal of the fitness of proposed treatments. To this end, we introduce LLM-based Entropy-guided Optimization with kNowledgeable priors (LEON), a mathematically principled approach to leverage large language models (LLMs) as black-box optimizers without any task-specific fine-tuning, taking advantage of their ability to contextualize unstructured domain knowledge to propose personalized treatment plans in natural language. In practice, we implement LEON via 'optimization by prompting,' which uses LLMs as stochastic engines for proposing treatment designs. Experiments on real-world optimization tasks show LEON outperforms both traditional and LLM-based methods in proposing individualized treatments for patients.
Problem

Research questions and friction points this paper is trying to address.

Optimizing personalized treatment regimens using patient genetic factors
Addressing surrogate model failures on unseen patient-treatment combinations
Leveraging medical knowledge without task-specific model fine-tuning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages LLMs as black-box optimizers without fine-tuning
Uses optimization by prompting for treatment proposal
Integrates domain knowledge from medical sources
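The bullets above describe an iterative loop: the LLM acts as a stochastic proposal engine, a surrogate scores candidates, and an entropy signal guides exploration. The paper gives no code here, so the following is only a minimal sketch of that loop under toy assumptions: `surrogate_score` is a made-up quadratic fitness (not LEON's clinical surrogate), and `llm_propose` stubs the "optimization by prompting" step with Gaussian perturbations instead of a real LLM call.

```python
import math
import random

def surrogate_score(treatment, patient_target=(0.4, 0.7)):
    """Toy stand-in for the in-silico surrogate: negative squared
    distance of a dose vector from a hypothetical patient optimum."""
    return -sum((t - p) ** 2 for t, p in zip(treatment, patient_target))

def llm_propose(history, k=8, temperature=0.3):
    """Stand-in for the LLM proposal step ('optimization by prompting').

    A real implementation would format `history` (past treatments and
    their scores) plus domain knowledge into a prompt and sample LLM
    completions; here we just perturb the best-so-far treatment."""
    best = max(history, key=lambda h: h[1])[0]
    return [tuple(min(1.0, max(0.0, x + random.gauss(0, temperature)))
                  for x in best)
            for _ in range(k)]

def proposal_entropy(candidates, bins=5):
    """Entropy of the discretized proposal distribution; low entropy
    means the search has collapsed and should re-explore."""
    counts = {}
    for c in candidates:
        key = tuple(int(x * bins) for x in c)
        counts[key] = counts.get(key, 0) + 1
    n = len(candidates)
    return -sum((v / n) * math.log(v / n) for v in counts.values())

def optimize(steps=20, seed=0):
    random.seed(seed)
    start = (0.5, 0.5)
    history = [(start, surrogate_score(start))]
    temperature = 0.3
    for _ in range(steps):
        cands = llm_propose(history, temperature=temperature)
        # Entropy-guided control: widen the proposal distribution
        # when the candidate set has collapsed to few distinct designs.
        temperature = 0.5 if proposal_entropy(cands) < 0.5 else 0.3
        scored = [(c, surrogate_score(c)) for c in cands]
        history.append(max(scored, key=lambda s: s[1]))
    return max(history, key=lambda h: h[1])
```

No model is fine-tuned in this loop, matching the zero-shot framing: all adaptation happens through the proposal history fed back into the (here stubbed) prompt.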
🔎 Similar Papers