Generalists vs. Specialists: Evaluating LLMs on Highly-Constrained Biophysical Sequence Optimization Tasks

📅 2024-10-29
📈 Citations: 2
Influential: 0
🤖 AI Summary
This work systematically compares general-purpose large language models (LLMs) with domain-specific solvers such as LaMBO-2 on highly constrained biophysical sequence optimization tasks, emphasizing exact constraint satisfaction and computational cost. To this end, it introduces Ehrlich functions, a synthetic benchmark suite that captures the geometric structure of these problems, and proposes LLOME (Language Model Optimization with Margin Expectation), a bilevel routine for online black-box optimization that pairs LLM-driven candidate generation with a novel margin-based preference learning loss. Experiments show that LLOME can outperform LaMBO-2 on moderately difficult Ehrlich variants, while also revealing a likelihood-reward miscalibration in LLMs under strict constraints and delineating the complementary strengths of general-purpose models and specialized solvers. The results suggest LLMs can offer significant benefits in some regimes, but specialized solvers remain competitive and incur less overhead.

📝 Abstract
Although large language models (LLMs) have shown promise in biomolecule optimization problems, they incur heavy computational costs and struggle to satisfy precise constraints. On the other hand, specialized solvers like LaMBO-2 offer efficiency and fine-grained control but require more domain expertise. Comparing these approaches is challenging due to expensive laboratory validation and inadequate synthetic benchmarks. We address this by introducing Ehrlich functions, a synthetic test suite that captures the geometric structure of biophysical sequence optimization problems. With prompting alone, off-the-shelf LLMs struggle to optimize Ehrlich functions. In response, we propose LLOME (Language Model Optimization with Margin Expectation), a bilevel optimization routine for online black-box optimization. When combined with a novel preference learning loss, we find LLOME can not only learn to solve some Ehrlich functions, but can even outperform LaMBO-2 on moderately difficult Ehrlich variants. However, LLOME is comparable to LaMBO-2 on very easy or difficult variants, exhibits some likelihood-reward miscalibration, and struggles without explicit rewards. Our results indicate LLMs can provide significant benefits in some cases, but specialized solvers are still competitive and incur less overhead.
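To make the abstract's notion of an "Ehrlich function" concrete, here is a toy sketch of a motif-based synthetic objective in the same spirit. This is an illustrative assumption, not the paper's exact definition: the score is the fraction of required (position, token) motifs a discrete sequence satisfies, so reaching the optimum demands exact constraint satisfaction rather than following a smooth gradient.

```python
def motif_score(seq, motifs):
    """Return the fraction of (position, token) motifs satisfied by seq."""
    hits = sum(1 for pos, tok in motifs if pos < len(seq) and seq[pos] == tok)
    return hits / len(motifs)

# Example: a length-8 sequence over a small alphabet with 3 constraints.
motifs = [(0, "A"), (3, "C"), (7, "G")]
seq = list("ABCCDEFG")
print(motif_score(seq, motifs))  # 1.0 only if every motif is matched
```

Objectives of this flat, all-or-nothing shape are exactly where prompting alone tends to fail, which motivates the finetuning loop proposed in the paper.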
Problem

Research questions and friction points this paper is trying to address.

Evaluating general-purpose LLMs against specialized solvers (e.g., LaMBO-2) for biophysical sequence optimization
Addressing LLMs' high computational cost and difficulty satisfying precise constraints
Providing synthetic benchmarks that enable comparison without expensive laboratory validation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introducing Ehrlich functions as a synthetic benchmark for biophysical sequence optimization
Proposing LLOME, a bilevel routine for online black-box optimization
Combining LLOME with a novel margin-based preference learning loss
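The bilevel structure named above can be illustrated with a minimal runnable sketch: an outer loop samples candidates from the current model and an inner step updates the model to prefer higher-scoring candidates. Everything here is a hypothetical toy, not the paper's implementation: the "model" is a per-position token distribution and the "finetune" step is a crude stand-in for a margin-based preference loss.

```python
import random

ALPHABET = "ACGT"
TARGET = "ACGTACGT"  # stand-in black-box objective: match this string

def objective(seq):
    return sum(a == b for a, b in zip(seq, TARGET)) / len(TARGET)

def propose(weights, batch):
    # Outer step: sample candidate sequences from the current model.
    return ["".join(random.choices(ALPHABET, w)[0] for w in weights)
            for _ in range(batch)]

def finetune(weights, better):
    # Inner step: upweight the tokens of the preferred sequence
    # (a toy stand-in for a margin-based preference update).
    for pos, tok in enumerate(better):
        weights[pos][ALPHABET.index(tok)] += 1.0
    return weights

def bilevel_optimize(rounds=20, batch=8, seed=0):
    random.seed(seed)
    weights = [[1.0] * len(ALPHABET) for _ in range(len(TARGET))]
    best = (0.0, "")
    for _ in range(rounds):
        scored = sorted((objective(c), c) for c in propose(weights, batch))
        best = max(best, scored[-1])
        weights = finetune(weights, scored[-1][1])  # prefer the top candidate
    return best

print(bilevel_optimize())
```

In the paper's actual setting, the sampler is an LLM and the inner update is finetuning with the proposed preference learning loss; this sketch only illustrates the alternation between generation and preference-driven updates.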